What does it mean when the Department of Homeland Security and ICE ask big tech -- Facebook, Palantir, etc. -- for all of your info? Jess Lewis, a cyberpsychologist, breaks it down and breaks our hearts. You can find her on Threads @jesslewis. Hosted on Acast. See acast.com/privacy for more information.
We kick things off in FOLLOW UP with the ongoing "nuclear war" between Automattic and WP Engine, where discovery has revealed Matt Mullenweg's alleged hit list of competitors and a desperate attempt to bully payment processors—because nothing says "open source" like an eight-percent royalty shakedown. Meanwhile, the Harvard Business Review confirmed what we already knew: AI isn't reducing our work; it's just compressing it until we're all working through lunch and burning out faster, while Polymarket turns our collective brain rot into a literal "attention market" where you can bet on Elon's mindshare.

Transitioning to IN THE NEWS, Elon has officially pivoted SpaceX from Mars to the Moon, presumably because building a "self-growing lunar city" is easier than admitting the Red Planet is hard, though his xAI all-hands rant about "ancient alien catapults" suggests he's been staring at the sun too long. Between X allegedly taking blue-check lunch money from sanctioned Iranian leaders, Meta facing trials for creating "predator-friendly hunting grounds," and Russia finally pulling the plug on WhatsApp, the internet is looking more like a digital dumpster fire than ever. Add in Discord leaking 70,000 government IDs, OpenAI shoving ads into ChatGPT while safety researchers flee the building like it's on fire, and a "cognitive debt" crisis eroding our ability to think, and you've got a recipe for a tech-induced psychosis that even crypto-funded human trafficking can't outpace.

In MEDIA CANDY, we're wondering about the soft-core porn intro in the latest Star Trek: Starfleet Academy while Apple buys the total rights to Severance for seventy million dollars—because in-house production is the only way to keep those ballooning budgets under control. Super Bowl trailer season gave us a glimpse of The Mandalorian and Grogu and a Project Hail Mary teaser, while Babylon 5 has finally landed on YouTube for free, proving that even 90s serialized sci-fi eventually finds its way to the clearance bin.

Over in APPS & DOODADS, Meta Quest is nagging us for our birthdays like a needy relative, while Roblox had to scrub a mass-shooting simulator—because "AI plus human safety teams" is apparently just code for "we missed it until it hit the forums." Ring's Super Bowl ad for "Search Party" accidentally terrified everyone by revealing a mass surveillance network for pets that's a slippery slope toward a police state, and Waymo is now paying DoorDashers ten bucks just to walk over and close the car doors that autonomous tech still can't figure out.

Wrapping up with THE DARK SIDE WITH DAVE, we dive into the Mandalorian Hasbro reveal, where Sigourney Weaver's action figure comes with no accessories because her existence is enough of a flex. We explore the grim reality of "RentAHuman," where humans are paid a pittance to pretend AI agents are actually doing work, and look at "Trash Talk Audio," which sells a $125 microphone made out of a literal old telephone for that authentic Gen-X "get off the line, I'm expecting a call" aesthetic. From Marcia Lucas finally venting about the prequels to a rare book catalog specifically for our aging generation, we're reminded that while the future is a chaotic mess of "GeoSpy" AI and corporate reshuffling at Disney, at least we still have our cynical memories and some free versions of Roller Coaster Tycoon to keep us from losing it completely.

Sponsors:
CleanMyMac - Get Tidy Today! Try 7 days free and use code OLDGEEKS for 20% off at clnmy.com/OLDGEEKS
DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

Show notes at https://gog.show/733

FOLLOW UP
Automattic planned to target 10 competitors with royalty fees, WP Engine claims in new filing
AI Doesn't Reduce Work—It Intensifies It
Polymarket To Offer Attention Markets In Partnership With Kaito AI
Israel Arrests Members of Military for Placing Polymarket Bets Using Inside Information on Upcoming Strikes

IN THE NEWS
Unable to Reach Mars, Musk Does the Most Musk Thing Possible
'We'll Find the Remnants of Ancient Alien Civilizations': Read Musk's Gibberish Rant from His xAI All-Hands Meeting
Elon Musk's X Appears to Be Violating US Sanctions by Selling Premium Accounts to Iranian Leaders
Meta Faces Two Key Trials That Could Change Social Media Forever
WhatsApp is now fully blocked in Russia
Russia is restricting access to Telegram, one of its most popular social media apps. Here's what we know
DOJ may face investigation for pressuring Apple, Google to remove apps for tracking ICE agents
Discord Launches Teen-by-Default Settings Globally
Discord says hackers stole government IDs of 70,000 users
Free Tool Says it Can Bypass Discord's Age Verification Check With a 3D Model
Testing ads in ChatGPT
OpenAI Researcher Quits, Warns Its Unprecedented 'Archive of Human Candor' Is Dangerous
OpenAI Fires Top Safety Exec Who Opposed ChatGPT's "Adult Mode"
Anthropic AI Safety Researcher Warns Of World 'In Peril' In Resignation
Musk's xAI loses second co-founder in two days
America Isn't Ready for What AI Will Do to Jobs
Monologue: No, Something Big Isn't Coming
The Scientist Who Predicted AI Psychosis Has a Grim Forecast of What's Going to Happen Next
Crypto-Funded Human Trafficking Is Exploding

MEDIA CANDY
Shrinking
Star Trek: Starfleet Academy
Poor Things
Project Hail Mary | Final Trailer
Minions & Monsters | Official Trailer
Disclosure Day | Big Game Spot
The Mandalorian and Grogu | A New Journey Begins | In Theaters May 22
Babylon 5 Is Now Free to Watch On YouTube
Apple acquires all rights to 'Severance,' will produce future seasons in-house
Optimizing your TV

APPS & DOODADS
Tumbler Ridge Shooter Created Mall Shooting Simulator in Roblox
Here's how to disable Ring's creepy Search Party feature
Waymo Is Getting DoorDashers to Close Doors on Self Driving Cars
TikTok US launches a local feed that leverages a user's exact location
Apple just released iOS 26.3 alongside updates for the Mac, iPad and Apple Watch

THE DARK SIDE WITH DAVE
Dave Bittner
The CyberWire
Hacking Humans
Caveat
Control Loop
Only Malware in the Building
We Call It Imagineering
Your First Look at Hasbro's 'Mandalorian and Grogu' Figures Is Here (Exclusive)
I Tried RentAHuman, Where AI Agents Hired Me to Hype Their AI Startups
Trash Talk Audio
Roger Reacts to Star Wars - A New Hope
Marcia Lucas Finally Speaks Out | Icons Unearthed: Unplugged (FULL INTERVIEW)
What's wrong with the prequels?
Rare Books, Gen X edition
GeoSpy

CLOSING SHOUT-OUTS
Robert Tinney, who painted iconic Byte magazine covers, RIP
Bud Cort

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Theatre and tech don't usually share the same stage. But that's changing. This season, two plays caught our eye: Data, a play about the inner workings of a data mining company, and Marjorie Prime, a play where grief, family, and AI collide. Karah interviews both playwrights, Matthew Libby (Data) and Jordan Harrison (Marjorie Prime). They discuss the origins of their plays, from failed collabs with AI chatbots to the internship with Palantir that never was, and how plays about technology can teach us about our humanity. Data runs through March 29th; tickets are at lortel.org. Marjorie Prime runs through February 15th; buy tickets at 2st.com. See omnystudio.com/listener for privacy information.
The HITO boys talk about Pam Bondi's insane crashout while testifying in front of Congress on the Epstein files and other matters pertaining to the DOJ. Bondi may be the most shameless woman on the planet as she deflects away from important questions about Epstein's alleged associates to glaze Trump and lob straws at immigrants. They also talk about the rising surveillance state as it pertains to Ring's dystopian Super Bowl ad and what that may mean for everyday people. Lastly... LEFTIST INFIGHTING!

Early access on Patreon: https://www.patreon.com/headintheofficepod
Substack: https://headintheoffice.substack.com/
HITO Merch: https://headintheoffice.com/
Get 40% off Ground News: https://ground.news/checkout/all?fpr=headintheoffice
YouTube: https://www.youtube.com/channel/UC4iJ-UcnRxYnaYsX_SNjFJQ
Subscribe to second channel: https://www.youtube.com/channel/UC3UoTN328OA7fK2dzicP-ZA
TikTok: https://www.tiktok.com/@headintheoffice?lang=en
Instagram: https://www.instagram.com/headintheoffice/
Twitter: https://twitter.com/headintheoffice
Threads: https://www.threads.com/@headintheoffice
Discord: https://discord.gg/hito
Collab inquiries: headintheofficepod@gmail.com

Seen on this episode:
Pam Bondi crashout:
https://www.thedailybeast.com/thomas-massie-roasts-pam-bondi-for-her-attempts-to-attack-lawmakers/
https://www.cnn.com/politics/live-news/pam-bondi-house-hearing-02-11-26
Everything is Palantir:
https://www.theverge.com/tech/876866/ring-search-party-super-bowl-ad-online-backlash
https://www.theverge.com/2019/7/29/20746156/amazons-ring-law-enforcement-partnerships
https://www.polygon.com/discord-face-id-requirement-backlash/
The Second Act Executive delivers a powerful, strategic recalibration for leaders navigating their second act. This episode, titled The Alpha Frequency, is not about hustle. It is about infrastructure. It is about authority. It is about aligning capital, clarity, and confidence at a stage of life where preservation, positioning, and legacy matter more than applause.

Hosted by a former corporate executive turned entrepreneur, philanthropist, investor, licensed real estate professional, author, and mother, this episode unpacks:
• The evolution of Wolf Vibrations, LLC — from its early aesthetic wellness roots to the successful expansion of B Wellness Center with Nikki B Wellness
• The strategic alignment with Pinnacle Advisement Group, expanding into mortgages, lending, tax structuring, and executive private practice transitions
• The 360° Market Mastery Kit featuring Jim Cramer's How to Make Money in Any Market
• Deep dives into AMD, Microsoft, Apple, and Palantir — with infrastructure-level analysis for mature investors
• The philosophy behind Peter Thiel's Zero to One and why competition is a distraction for legacy builders
• High-functioning low self-esteem and financial autonomy for women over 50
• A direct acknowledgment of financial abuse dynamics and the importance of asset control
• The distinction between emotional courage and financial preparedness
• A reminder that Waking Up and Walking Away: A Roadmap to Freedom by Tawnie Wolf exists as a resource for those rebuilding autonomy
• The assisted living crisis, elder advocacy, and why presence is a form of protection
• The shift from hustle culture to authority culture in your 50s and beyond

This is a structured, strategic conversation about becoming the monopoly of your own life. It is about transitioning from employee to architect. From reactive to intentional. From earning to preserving. From being blessed to being a blessing.

The podcast is The Second Act Executive. This episode is The Alpha Frequency. Come hell or high water, the second half of your life belongs to you.
This week, Brian, Leah, and Zoë discuss WIRED's big scoop on ICE's startling plans to expand to nearly every state in the US. Then, they unpack Alex Karp's nearly hour-long non-response to Palantir employees with ethical concerns about collaborating with ICE. Plus, a WIRED writer lets the viral AI assistant OpenClaw run his life for a week to give listeners a peek into what AI agents can and can't actually do.

Articles mentioned in this episode:
The Shoes and Brooms Transforming Curling at the 2026 Winter Olympics
I Loved My OpenClaw AI Agent—Until It Turned on Me
Palantir CEO Alex Karp Recorded a Video About ICE for His Employees
ICE Is Expanding Across the US at Breakneck Speed. Here's Where It's Going Next
The ICE Expansion Won't Happen in the Dark
James Holzhauer's Jeopardy! Greatness, in Charts

Learn about your ad choices: dovetail.prx.org/ad-choices
Ty Wang, cofounder and CEO of Angle Health, breaks down what it means to give back through public service, then shows how that same mindset drives his mission to modernize healthcare for small and midsize businesses. We get into why legacy health plans feel opaque and painful, what an AI native health plan actually changes behind the scenes, and how better data and workflows can create real cost stability for employers.

Ty shares his path from a federal scholarship and national service work to Palantir, and why he chose one of the most regulated, least glamorous industries to build in. If you have ever wondered why healthcare feels impossible to navigate, or why renewals can blindside a company, this conversation will give you a clear mental model of the problem and a practical view of what modernization looks like when it actually ships.

Key Takeaways
Healthcare feels broken because the infrastructure is fragmented, data is siloed, and even basic questions become hard to answer across inconsistent systems
Modernizing healthcare is not just about a new app, it is about rebuilding the operational core so workflows, claims, underwriting, and member experience can run on integrated data
Small and midsize businesses are hit hardest by cost volatility because they lack transparency, predictability, and negotiating leverage, yet health insurance is often a top line item after payroll
A strong approach to regulated markets is collaborative, treat regulators as partners in consumer protection, not obstacles to work around
Mission and impact can be a recruiting advantage, especially when the technical problems are genuinely hard and the outcomes touch real people fast

Timestamped Highlights
00:40 What Angle Health is, and what AI native means in a real health plan
02:05 The scholarship path that pulled Ty into public service and set his trajectory
04:06 The personal story behind the mission, the American dream, and why access matters
09:38 Why healthcare infrastructure is so complex, and how siloed systems create bad experiences
11:33 Why SMBs get squeezed, and how manual administration blocks customization at scale
13:20 The real pain point for employers, cost volatility and zero predictability before renewal
16:55 Why the tech can expand beyond SMBs, but why the SMB market is already massive
19:51 Lessons from building in a regulated industry, and why credibility and funding matter
22:26 Hiring for high agency, mission driven talent in a world full of AI companies

A line that sticks
"Unless you are lucky enough to work for a big company, these modern healthcare services are still largely inaccessible to the vast majority of Americans."

Pro Tips for tech operators and builders
If you are modernizing a legacy industry, start with the infrastructure layer, fix the data model, integrate the systems, then automate workflows
In regulated markets, build relationships early, show how your product improves consumer outcomes, and make compliance a design constraint, not a bolt on
When selling into SMBs, predictability beats perfection, give customers a clear breakdown of what drives costs and what they can control

What's next
If this episode helped you see healthcare and legacy modernization more clearly, follow the show on Apple Podcasts or Spotify and subscribe so you do not miss the next conversation. Also, share it with one operator or builder who is trying to modernize a messy industry.
Laffer Tengler Investments CEO Nancy Tengler talks with TITV Host Akash Pasricha about the recent software selloff and why she is doubling down on Nvidia, Palantir, and Apple. We also talk with The Information's Aaron Holmes about the "agent dashboard" battle between Microsoft, Salesforce, and OpenAI, and Buttonwood Funds' Joseph Alagna about the synergies behind the SpaceX and xAI merger. Lastly, we get into the future of orbital computing with Robinhood co-founder Baiju Bhatt as he unveils his new space startup, Aetherflux.

Articles discussed on this episode:
https://www.theinformation.com/articles/new-ai-superagent-race-pitting-openai-anthropic-microsoft-salesforce
https://www.theinformation.com/newsletters/applied-ai/looming-battle-agent-management-software

Subscribe:
YouTube: https://www.youtube.com/@theinformation
The Information: https://www.theinformation.com/subscribe_h
Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda

TITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.

Follow us:
X: https://x.com/theinformation
IG: https://www.instagram.com/theinformation/
TikTok: https://www.tiktok.com/@titv.theinformation
LinkedIn: https://www.linkedin.com/company/theinformation/
This week, we start with cursed Valentine's poetry from...Gregg Wallace and a rogue "dangler", before plunging straight into the political sewer. Rory Stewart decides that £94k counts as "low income", Trump posts a racist video in the middle of the night and then immediately blows up his own press office's cover story, and Labour descends into full‑blown chaos as Morgan McSweeney finally exits stage left.

Marina and Jemma pick through the wreckage: Wes Streeting's leaked war‑crimes admission, Mandelson's potential fingerprints all over Palantir, and a Labour Party so busy chasing Reform voters it's forgotten its own base. It's maddening, it's bleakly funny, and it's exactly why The Trawl exists - because someone has to sift through this BS.

Thank you for sharing and please do follow us @MarinaPurkiss @jemmaforte @TheTrawlPodcast

Patreon: https://patreon.com/TheTrawlPodcast
Youtube: https://www.youtube.com/@TheTrawl
Twitter: https://twitter.com/TheTrawlPodcast

If you've even mildly enjoyed The Trawl, you'll love the unfiltered, no-holds-barred extras from Jemma & Marina over on Patreon, including:
• Exclusive episodes of The Trawl Goss – where Jemma and Marina spill backstage gossip, dive into their personal lives, and often forget the mic is on
• Early access to The Trawl Meets…
• Glorious ad-free episodes

Plus, there's a bell-free community of over 3,300 legends sparking brilliant chat. And it's your way to support the pod which the ladies pour their hearts, souls (and occasional anxiety) into. All for your listening pleasure and reassurance that through this geopolitical s**tstorm… you're not alone.

Come join the fun: https://www.patreon.com/TheTrawlPodcast?utm_campaign=creatorshare_creator

Hosted on Acast. See acast.com/privacy for more information.
Patch Tuesday. Preliminary findings from the European Commission come down on TikTok. Switzerland's military cancels its contract with Palantir. Social engineering leads to payroll fraud. Google hands over extensive personal data on a British student activist. Researchers unearth a global espionage operation called "The Shadow Campaigns." Notepad's newest features could lead to remote code execution. Our guest is Hazel Cerra, Resident Agent in Charge of the Atlantic City Office for the United States Secret Service. Ring says it's all about dogs, but critics hear the whistle.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Today, we're joined by Hazel Cerra, Resident Agent in Charge of the Atlantic City Office for the United States Secret Service, as she discusses the evolution of the Secret Service's investigative mission—from its early focus on financial crimes such as counterfeit currency and credit card fraud to the growing challenges posed by cryptocurrency-related crime.

Selected Reading
Microsoft February 2026 Patch Tuesday Fixes 58 Vulnerabilities, Six Actively Exploited Flaws (Beyond Machines)
Adobe Releases February 2026 Patches for Multiple Products (Beyond Machines)
ICS Patch Tuesday: Vulnerabilities Addressed by Siemens, Schneider, Aveva, Phoenix Contact (SecurityWeek)
Chipmaker Patch Tuesday: Over 80 Vulnerabilities Addressed by Intel and AMD (SecurityWeek)
Commission preliminarily finds TikTok's addictive design in breach of the Digital Services Act (European Commission)
Palantir's Swiss Exit Highlights Global Data Sovereignty Challenge (NewsCase)
Payroll pirates conned the help desk, stole employee's pay (The Register)
Google Fulfilled ICE Subpoena Demanding Student Journalist's Bank and Credit Card Numbers (The Intercept)
The Shadow Campaigns: Uncovering Global Espionage (Palo Alto Networks Unit 42)
Notepad's new Markdown powers served with a side of RCE (The Register)
With Ring, American Consumers Built a Surveillance Dragnet (404 Media)

Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show.

Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
David Alton Clark, Retirement Income Warrior, discusses:
His 3 income and 2 growth portfolios (1:00)
Stock-specific examples of winners and losers (4:20)
High-yielding stocks = risk of capital loss (7:25)
Taking profits in growth (9:00)
Fed's hawkish statement, unemployment data critical (12:45)
Making a mistake on Freeport-McMoRan (19:50)
Tax loss harvesting (23:00)

Show Notes:
Dividend And Growth Stocks For An Overvalued Market With David Alton Clark
Taking Profits For Yield And Growth With David Alton Clark
Read our transcripts

For full access to analyst ratings, stock and ETF quant scores, and dividend grades, subscribe to Seeking Alpha Premium at seekingalpha.com/subscriptions
On İkili Görüş, Emrullah Özdemir and İlkan Dalkuç sit down with Dr. Gökhan Çınkara to discuss the impact of the release of the Epstein files on the Middle East, a possible US-Israeli intervention against Iran, and what the strained relations between Saudi Arabia and the United Arab Emirates mean for Turkey.

The Anthropic CEO's site recommended by Çınkara: https://www.darioamodei.com/

00:00 Intro
00:50 What are we covering in this episode?
02:00 Where does Epstein fall within the trio that runs the US (the military-industrial complex, business people, and the elites)?
08:50 Epstein emerges under suspicious circumstances: how did he make so much money in the first place?
10:00 Anyone who kept ties with Epstein after 2008, or built new ones, is suspect by default (yes, him too)
11:50 Epstein's legal relationships, beyond the illegal ones, are also genuinely very "interesting"
14:10 What was Noam Chomsky doing with Epstein? Perpetrator or victim? (Homo sum, humani...)
18:15 On the role of two prominent anti-Trump figures from Trump's own party in the release of the Epstein files
21:55 The war of the elites is coming: Steve Bannon, Peter Thiel, J. D. Vance
27:30 The tech bros have written off Bill Gates, and they will hit even harder
30:50 The difference in policy and intent between Palantir and Anthropic
32:30 The 2026 US midterms are not looking good for the Republicans
36:30 What will the Democratic Party do in the 2026 midterms (the DP won't settle without some churn)
38:40 The mass killings in Iran are a sign of the regime's weakness, not a show of strength
46:30 The economic crisis will land blows on the Iranian regime with growing frequency
49:00 Cuba is hanging by a thread
50:30 Biden turned a blind eye to the Iran sanctions, but Trump has left no back door and no breathing room
53:30 Trump is not taking sides in the Gulf states' rivalry
58:30 Turkey does not have the luxury of declaring a side right in the Saudi-UAE dispute
01:01:10 Britain's relationship with the Arab countries, and with Epstein
01:03:20 Like him or not, Netanyahu is a "smart" man and a master spinner
01:04:40 Turkey never sent F-16s to Libya, but it did send them to Somalia. What does that mean?
01:08:50 Why does Israel want Somaliland to break away?
01:10:10 The importance of Somalia and Ethiopia in Turkey's Africa initiative
01:12:40 Foreign policy will change to the point that it can no longer be left to the "diplomats" (the security trade)

JOIN this channel to get access to perks:
https://www.youtube.com/channel/UCWyDy24AfZX8ZoHFjm6sJkg/join

Support us on Patreon
Welcome to Impact Theory with Tom Bilyeu. In today's episode, Tom confronts the transformative power and hidden dangers of artificial intelligence, drawing from the recent revelations in the Epstein Files. He dives deep into how AI, far from being just a revolutionary tool, is increasingly leveraged for narrative control—shaping what we see, think, and remember. Tom explores the history of information manipulation, from Soviet-era censorship to modern algorithm-driven platforms, and reveals how tech elites wield influence through data, algorithms, and gatekeeping. He shares eye-opening examples of AI's opaque decision-making and discusses the critical importance of maintaining independent thought in a world where reality is curated by a handful of powerful individuals. If you're curious about how AI impacts society, the risks of mind control through technology, and what it means for freedom and truth in the digital age, strap in—this episode breaks it all down, challenging us to stay vigilant, seek multiple perspectives, and never treat chatbots as all-knowing oracles.

Quince: Free shipping and 365-day returns at https://quince.com/impactpod
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
Ketone IQ: Visit https://ketone.com/IMPACT for 30% OFF your subscription order
Incogni: Take your personal data back with Incogni! Use code IMPACT at the link below and get 60% off an annual plan: https://incogni.com/impact
Blocktrust IRA: Get up to $2,500 funding bonus to kickstart your account at https://tomcryptoira.com
Netsuite: Right now, get our free business guide, Demystifying AI, at https://NetSuite.com/Theory
Huel: High-Protein Starter Kit 20% off for new customers at https://huel.com/impact code impact

What's up, everybody? It's Tom Bilyeu here: If you want my help...
STARTING a business: join me here at ZERO TO FOUNDER: https://tombilyeu.com/zero-to-founder?utm_campaign=Podcast%20Offer&utm_source=podca[%E2%80%A6]d%20end%20of%20show&utm_content=podcast%20ad%20end%20of%20show
SCALING a business: see if you qualify here: https://tombilyeu.com/call
Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here: https://tombilyeu.com/

If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook—a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you.

FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

Tags: AI, Epstein Files, mind control, narrative control, algorithmic gatekeeping, Google Gemini, social media, information suppression, censorship, oligarchy, Iron Law of Oligarchy, elites, K-shaped economy, data fusion, Palantir, surveillance, predictive scoring, algorithmic friction, biased training data, Overton window, informational monopoly, confirmation bias, motivated reasoning, emotional contagion experiment, Facebook experiment, generative AI, independent thought, malinformation, open-source AI, information chokepoints

Learn more about your ad choices. Visit megaphone.fm/adchoices
0:04 Dow hits 50,000 while most stocks lag—why it's a meaningless headline
0:59 Robinhood and Palantir slide—speculators start getting nervous
1:39 Jason Zweig on low-volatility funds—and why timing them is a trap
1:55 Why the Dow is a terrible "index" built on 1890s math
3:22 Diversified portfolios quietly up nearly 6% YTD in early 2026
3:32 Small-cap value up 13%—the payoff of long-term discipline
4:05 "We didn't predict this"—why diversification beats market bragging
4:54 Portfolios should already be built for downturns
5:10 The danger of reacting after markets "stumble"
7:09 Average vs. median net worth—why averages mislead
8:26 How billionaires distort financial statistics
9:09 "Lies, damned lies, and statistics" origins
10:06 AI-enhanced listener call audio and Friday Q&A podcast
10:37 DFFVX vs. AVUV—Dimensional vs. Avantis small-cap value
13:33 Why track records don't matter for similar funds
13:53 Super Bowl sirloin cooking advice
15:17 Whole life insurance review—why to cash out in retirement
17:08 When cash-value insurance makes sense (rarely)
19:22 Surprise downloads of Christmas stories in February
20:57 Caller asks about "set-it-and-forget-it" investing
24:26 Risk tolerance when retiring soon
26:08 Using AVGE for global diversification
27:48 Why near-retirees should get professional reviews
30:28 Emergency funds—never use a Roth
31:37 High-yield savings accounts around 4%+
34:11 Portfolio balance and realistic expectations

Learn more about your ad choices. Visit megaphone.fm/adchoices
The Coming Tech Takeover: Metatron, Palantir & The All-Seeing AI Eye

Ellen's sites:
Wellenbe.com
Era of My Ways YouTube Channel

Ellen King is a returning fan-favorite esoteric researcher and the creator behind Wellenbe.com and the YouTube channel Era of My Ways. Her work dives deep into occult technology, metaphysics, black-cube symbolism, and the ancient entities influencing modern systems. Ellen is known for connecting the dots between ancient demonology, hidden tech infrastructure, and the emergent AI hive-mind reality.

Tonight, Ellen joins us to expose the Metatron connection, the Palantir takeover, and how the emerging machine-metaverse grid resembles the Eye of Sauron — the Demon King Architect of total surveillance. This will be a fire decode.

⭐ EPISODE DESCRIPTION
Tonight on Typical Skeptic Podcast #2452, Rob welcomes back esoteric researcher Ellen King, who's dropping heavy intel on the convergence of ancient entities and modern tech. Topics include:
Metatron's original role vs. the AI Metatron impostor
Palantir's rise as the "master key" of the global surveillance grid
Hive-mind engineering and the coming machine metaverse
Occult symbolism embedded in tech architecture
The Eye of Sauron archetype — the archetypal Demon King linked to total observation
Whether the new Metaverse is becoming an astral possession-tech hybrid
How humanity can resist, decode, and stay sovereign

This one is dark, deep, and extremely relevant.

⭐ HASHTAGS
#Metatron #Palantir #HiveMind #Sauron #DemonKing #OccultTech #AIThreat #Metaverse #EsotericResearch #TypicalSkepticPodcast

Typical Skeptic Podcast Links and Affiliates:
Support the Mission:
Low Value Mail is a live call-in show discussing current events, politics, conspiracies and much more. Every Monday night at 7pm ET.

Support The Show:
Aman Verjee, Founder and General Partner at Practical Venture Capital, shares his view of how venture capital has evolved over the past two decades and why secondary markets now play a critical role in the ecosystem. Drawing from his time at PayPal, eBay, and Sonos, Aman explains how companies today stay private far longer than they used to, what that means for early investors and employees, and how thoughtfully structured secondary transactions can reduce friction and misalignment on the cap table. He also challenges popular narratives around tech bubbles, walking through historical examples to explain why today's AI-driven market looks fundamentally different.

In this episode, you'll learn:
[01:11] Aman's journey from Wall Street to Practical VC
[03:40] What made the early PayPal team exceptional
[06:32] Follow the customer, not the original plan
[10:44] Why are startups staying private longer today?
[11:17] What secondary transactions actually are
[18:41] How founders should handle secondary requests
[26:11] Are we in a tech bubble today?

The nonprofit organization Aman is passionate about: AYSO (American Youth Soccer Organization)

About Aman Verjee
Aman Verjee is the Founder and General Partner of Practical Venture Capital, a secondary-focused fund providing liquidity to early investors in late-stage private companies. Before launching Practical VC, Aman spent over a decade in finance and operations roles at PayPal and eBay, joining PayPal in 2001 before its IPO and witnessing its transformation from a money-beaming mobile app to the dominant payment platform for eBay. Earlier, he worked in investment banking in New York after studying economics at Stanford and constitutional law at Harvard Law School. Aman was recruited to PayPal by Peter Thiel and worked directly for David Sacks during the company's pivotal early years. Now partnering with Dave McClure, he focuses on Series C and D investments in SaaS and FinTech companies with $200M+ in revenue and clear paths to liquidity within 5-7 years. He's also writing a book on the history of financial bubbles and co-hosts the Trading Places podcast, analyzing private company valuations.

About Practical Venture Capital
Practical Venture Capital is a secondary-focused venture firm that provides liquidity solutions for early investors, employees, and funds. Operating with a 7-year fund structure instead of the traditional 10-15 years, Practical VC targets 20-40% discounts to last-round valuations in Series C and D companies with $200M+ in revenue and clear paths to exit. The firm specializes in SaaS and FinTech but has made exceptions for exceptional opportunities like SpaceX, now their biggest winner despite violating their typical investment criteria. Founded by Aman Verjee and Dave McClure, Practical VC evaluates roughly 50 companies at any given time, making 5-10 investments annually. The firm also offers SPVs for deals that don't fit their main fund and covers LATAM opportunities through an operating partner in Argentina. Their approach recognizes that modern venture capital requires new liquidity solutions as companies like SpaceX (23 years private), Airbnb (17 years), and Palantir (20 years) redefine what "patient capital" means.

Subscribe to our podcast and stay tuned for our next episode.
Ben Watson (@CharlesSchwab) joins Morning Movers to discuss the chart patterns for Palantir (PLTR). Using the "awesome oscillator" study, Ben says it might not be so awesome on the chart. Daiwa Securities upgraded Palantir to a Buy rating, but cut its price target to $180 from $200. On a 1-year timeframe, Ben notes a trio of levels at $198, $158 and $128 as a potential completion of a bearish reversal pattern.

======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X - https://twitter.com/schwabnetwork
Follow us on Facebook - https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about
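For context on the study named above: the Awesome Oscillator is conventionally defined (the standard Bill Williams formulation) as the difference between a 5-period and a 34-period simple moving average of the bar midpoint, (high + low) / 2, with zero-line crosses read as momentum shifts. Here is a minimal Python sketch of that standard definition; the column names and the PLTR CSV file are illustrative assumptions, not anything from the segment.

```python
# Minimal sketch of the Awesome Oscillator (standard Bill Williams definition):
# AO = SMA(median price, 5) - SMA(median price, 34), median price = (high + low) / 2.
import pandas as pd

def awesome_oscillator(high: pd.Series, low: pd.Series) -> pd.Series:
    """Return the AO series; crosses below zero are typically read as bearish momentum."""
    median_price = (high + low) / 2
    fast = median_price.rolling(window=5).mean()   # 5-bar simple moving average
    slow = median_price.rolling(window=34).mean()  # 34-bar simple moving average
    return fast - slow

# Hypothetical usage with daily PLTR data containing 'High' and 'Low' columns:
# df = pd.read_csv("pltr_daily.csv")
# df["AO"] = awesome_oscillator(df["High"], df["Low"])
```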
Palantir (PLTR) can rally back to $180, according to Daiwa Securities. That price target is lower than the firm's previous one, even though it also issued a stock upgrade. Marley Kayden explains how the software giant's earnings growth plays into Daiwa's updated analysis. Tim Biggam turns to an example options trade for Palantir.

======== Schwab Network ========
Empowering every investor and trader, every market day.
Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6D
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X - https://twitter.com/schwabnetwork
Follow us on Facebook - https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about
Alicia McCarthy reports from Westminster as MPs demand answers about the UK's multi-million-pound defence deal with the US tech giant Palantir.
WindBorne Systems is transforming global weather forecasting by deploying long-duration weather balloons that fly for weeks instead of hours. What began as a Stanford Student Space Initiative project has scaled to 100 balloons aloft simultaneously, targeting 500 by end of next year, with an end goal of 10,000 balloons monitoring Earth's atmosphere.

In this episode of BUILDERS, I sat down with John Dean, Co-Founder and CEO of WindBorne Systems, to explore how the company secured its first government contract in under three years without lobbyists, achieved 4x annual manufacturing growth, and built Weather Mesh—an AI weather model that outperforms competitors from Google DeepMind.

Topics Discussed:
The technical evolution from Stanford project to operational constellation of altitude-controlled balloons
Strategic decision to pursue government revenue before building B2B forecasting products
Navigating Defense Innovation Unit and Air Force Lifecycle Management Center procurement as a founder
Timeline from founding to first grants (within six months) and first data delivery contract (two and a half years)
Current roughly 50/50 revenue split between civilian agencies (NOAA, international weather services) and Department of Defense
Building Weather Mesh after Huawei's Pangu Weather validated end-to-end AI forecasting viability
Transitioning from founder-led sales by promoting a Palantir hire from proposal writer to public sector growth leader
The 30-year vision of millions of fingernail-sized atmospheric sensors creating a planetary nervous system

GTM Lessons For B2B Founders:

Study the bureaucracy's incentive structures before pitching product value: John spent years mapping how government procurement actually works rather than leading with product capabilities. The critical insight: in DoD sales, the warfighter (end user) doesn't control purchasing decisions. Success requires understanding each stakeholder's specific mandate and aligning your solution to their organizational incentives, not just operational needs. For civilian agencies like NOAA, the dynamics differ entirely. Founders entering govtech should invest 6-12 months learning procurement mechanics before expecting revenue.

Use government contracts as non-dilutive scaling capital for hardware businesses: WindBorne secured SBIR grants within six months, then landed their first Air Force data delivery contract through Defense Innovation Unit at the two-and-a-half-year mark. John explicitly treated early grants as equivalent to venture funding but without equity dilution. For companies building physical infrastructure at scale (satellites, hardware networks, manufacturing operations), government contracts provide the runway to reach technical milestones that unlock larger B2B opportunities. This sequencing—government funding first, then B2B products built on that foundation—proves more capital-efficient than attempting to raise massive venture rounds upfront for unproven hardware.

Integrate with legacy systems rather than attempting wholesale replacement: WindBorne doesn't aim to replace the 1,000 radiosondes launched daily worldwide—they're expanding coverage from the current 15% of Earth (where humans can launch traditional balloons) to 100%. The hardware is revolutionary (weeks of flight versus two hours), but the go-to-market integrates into existing weather agency workflows and feeds into established models like GFS and ECMWF. This approach accelerated adoption because agencies could add WindBorne data without overhauling their entire forecasting infrastructure. The displacement of radiosondes becomes economically inevitable long-term, but only after proving the system at scale.

Move fast once adjacent technology validates your thesis: WindBorne wasn't investing in AI-based weather forecasting until Huawei's Pangu Weather paper demonstrated that end-to-end neural weather models could compete with physics-based simulations. Once that validation appeared, John's team moved immediately—adopting the open architecture and expanding it into Weather Mesh before the approach became widely adopted. The lesson isn't to wait for competitors, but to monitor adjacent technological developments and move decisively when validation emerges. They built a top-performing model by being early to a proven approach, not first to an unproven one.

Hire for mid-level roles and promote based on demonstrated judgment: John hired Dana from Palantir as a proposal writer, not as a sales executive. He watched her demonstrate strong opinions that consistently proved correct, then promoted her to build and lead the entire public sector growth organization. This internal promotion model worked better than external executive hires because the person already understood WindBorne's technology, customers, and internal culture. For specialized domains like government sales, bringing in experienced operators at individual contributor levels and promoting them as they prove their judgment builds more effective organizations than hiring executives to parachute in.

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role.
Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
The Six Five Pod is back with Episode 291. Daniel Newman and Patrick Moorhead are fresh off trips to Davos and Abu Dhabi, where they've explored the full AI stack up close (models, infrastructure, healthcare/genomics). This episode dives into what really matters right now in the markets and tech: from Microsoft's Maia 200 inference push, to NVIDIA's $2B CoreWeave bet, OpenAI's Codex closing the coding gap, the "SaaSpocalypse" panic, Cisco's AI Summit, and a no-BS debate on whether AI agents are actually enterprise-ready.

The handpicked topics for this week are:
Inside Abu Dhabi's Full-Stack AI Play: From universities to healthcare to hyperscale infrastructure — Pat shares a firsthand perspective on how the UAE is quietly building an end-to-end AI ecosystem.
Optics, Cooling, and the Hidden AI Infrastructure Layer: Why companies like Coherent matter as much as GPUs — and how photonics, co-packaged optics, and rack-level cooling are becoming critical to scaling AI factories.
Inference Takes Center Stage: Microsoft's Maia 200 shows real progress — and why hyperscalers are building custom silicon to boost capacity, economics, and control.
NVIDIA's $2B CoreWeave Bet: Circular finance or strategic genius? We unpack what NVIDIA's latest investment signals about AI factories, cloud capacity, and long-term infrastructure buildout.
Codex vs. Claude: The Coding Wars Heat Up: OpenAI closes the gap fast — and developers start hopping between tools as AI coding becomes a moving target.
The "SaaSpocalypse" Narrative: Is software really dead? We separate market panic from reality — and explain why SaaS won't disappear, but will never be valued the same again.
Cisco's AI Summit Reality Check: From hype to execution — what stood out from Cisco's AI Summit and why networking, security, and enterprise integration matter more than demos.
Are AI Agents Enterprise-Ready? The Flip debates real-world workflows vs. reliability, governance, and security — where agents work today, and where they still fall short.
Big Tech Earnings Whiplash: AWS, Google, Microsoft, Meta, NVIDIA, AMD, Palantir, and Coherent — massive CapEx, cloud acceleration, and what Wall Street is getting wrong about AI ROI.

Be sure to subscribe to The Six Five Pod so you never miss an episode.
On this special segment of The Full Ratchet, the following Investors are featured: Paul Madera of Meritech Capital Jeff Bussgang of Flybridge Capital Victor Orlovski of R136 Ventures Each investor highlights a situation where they decided not to invest, why they passed, and how it played out. The host of The Full Ratchet is Nick Moran of New Stack Ventures, a venture capital firm committed to investing in founders outside of the Bay Area. We're proud to partner with Ramp, the modern finance automation platform. Book a demo and get $150—no strings attached. Want to keep up to date with The Full Ratchet? Follow us on social. You can learn more about New Stack Ventures by visiting our LinkedIn and Twitter.
In this mixed earnings and macro episode, Simon and Dan break down Palantir’s explosive growth (and wild stock reaction), Microsoft’s cloud slowdown fears amid massive AI capex, and Starbucks’ underwhelming turnaround outlook. They also dig into what today’s volatility says about tech valuations, why hyperscalers are under pressure to prove returns on AI spending, and whether investors should start looking beyond U.S. equities altogether. Simon wraps up with a framework for international diversification, including emerging markets and ex-U.S. ETFs, and shares how he’s thinking about portfolio positioning going forward.

Tickers of stocks discussed: PLTR, MSFT, SBUX, ARCC, ZEM.TO, VEE, IXUS, VEU, XAW.TO

Subscribe to our new YouTube channel!
Check out our portfolio by going to Jointci.com
Our Website
Canadian Investor Podcast Network Twitter: @cdn_investing
Simon’s twitter: @Fiat_Iceberg
Braden’s twitter: @BradoCapital
Dan’s Twitter: @stocktrades_ca

Want to learn more about Real Estate Investing? Check out the Canadian Real Estate Investor Podcast!
Apple Podcast - The Canadian Real Estate Investor
Spotify - The Canadian Real Estate Investor
Web player - The Canadian Real Estate Investor

Asset Allocation ETFs | BMO Global Asset Management
Sign up for Fiscal.ai for free to get easy access to global stock coverage and powerful AI investing tools.
Register for EQ Bank, the seamless digital banking experience with better rates and no nonsense.

See omnystudio.com/listener for privacy information.
Dive into the wildest drops from the latest Epstein files with your host Derek O'Shea – blending savage satire with hard-hitting facts, like Tim Dillon roasting the elites while Tim Pool drops the red pills. We're unpacking the most explosive stories: vanishing newborns at Zorro Ranch, FBI redactions shielding Democrat darlings like Clinton, Pizzagate validations that the left called "conspiracy," Gates' creepy dodges, and how Trump was always on the outside looking in. Right-leaning real talk on how the swamp protects its own – no holds barred, no apologies. Hit that like, subscribe, and join the live chat for the chaos!

#EpsteinFiles #DeepStateExposed

0:00 - Intro: Why the Epstein Files Are the Ultimate Red Pill
Former Oakland City Councilmember Loren Taylor joins the show to break down a busy week in East Bay politics. We unpack new developments in the East Bay's federal corruption case, including insights from last week's court hearing and what to expect next with Bryan Azevedo's upcoming appearance. Loren also offers an insider's view of the first major gubernatorial debate, new candidates entering the race, and how the field may evolve through June. Plus, updates on the CA14 and SD10 races, a closer look at Union City candidate Scott Sakakihara's Palantir ties, thoughts on Floyd Mitchell's move to Fremont, and Loren's insights on the clash at Oakland City Hall over Ken Houston's Encampment Abatement Policy.
Laura is joined by investigative journalist and co-founder of The Nerve, Carole Cadwalladr, to break down the influence of one of Silicon Valley's shadiest characters, Peter Thiel, on the operation of the British state.

Cadwalladr spells out the operations of Thiel's Palantir, and how it has gotten its claws into everything from NHS data to Britain's defence system.

Subscribe to How to Rebuild Britain now: https://linktr.ee/howtorebuildbritain

Hosted on Acast. See acast.com/privacy for more information.
In this week's FOLLOW UP, Bitcoin is down 15%, miners are unplugging rigs because paying eighty-seven grand to mine a sixty-grand coin finally failed the vibes check, and Grok is still digitally undressing men—suggesting Musk's "safeguards" remain mostly theoretical, which didn't help when X offices got raided in France. Spain wants to ban social media for kids under 16, Egypt is blocking Roblox outright, and governments everywhere are flailing at the algorithmic abyss.

IN THE NEWS, Elon Musk is rolling xAI into SpaceX to birth a $1.25 trillion megacorp that wants to power AI from orbit with a million satellites, because space junk apparently wasn't annoying enough. Amazon admits a "high volume" of CSAM showed up in its AI training data and blames third parties, Waymo bags a massive $16 billion to insist robotaxis are working, Pinterest reportedly fires staff who built a layoff-tracking tool, and Sam Altman gets extremely cranky about Claude's Super Bowl ads hitting a little too close to home.

For MEDIA CANDY, we've got Shrinking, the Grammys, Star Trek: Starfleet Academy's questionable holographic future, Neil Young gifting his catalog to Greenland while snubbing Amazon, plus Is It Cake? Valentines and The Rip.

In APPS & DOODADS, we test Sennheiser earbuds, mess with Topaz Video, skip a deeply cursed Python script that checks LinkedIn for Epstein connections, and note that autonomous cars and drones will happily obey prompt injection via road signs—defeated by a Sharpie.

IN THE LIBRARY, there's The Regicide Report, a brutal study finding early dementia signals in Terry Pratchett's novels, and Neil Gaiman denying allegations while announcing a new book. THE DARK SIDE WITH DAVE vibes with The Muppet Show as Disney names a new CEO. We round it out with RentAHuman.ai dread, relief via paper airplane databases, free Roller Coaster Tycoon, and Sir Ian McKellen on Colbert—still classy in the digital wasteland.

Sponsors:
DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
SquareSpace - go to squarespace.com/GRUMPY for a free trial. And when you're ready to launch, use code GRUMPY to save 10% off your first purchase of a website or domain.
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

Show notes at https://gog.show/732

FOLLOW UP
Bitcoin drops 15%, briefly breaking below $61,000 as sell-off intensifies, doubts about crypto grow
Bitcoin Is Crashing So Hard That Miners Are Unplugging Their Equipment
Grok, which maybe stopped undressing women without their consent, still undresses men
X offices raided in France as UK opens fresh investigation into Grok
Spain set to ban social media for children under 16
Egypt to block Roblox for all users

IN THE NEWS
Elon Musk Is Rolling xAI Into SpaceX—Creating the World's Most Valuable Private Company
SpaceX wants to launch a constellation of a million satellites to power AI needs
A potential Starlink competitor just got FCC clearance to launch 4,000 satellites
Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from
Waymo raises massive $16 billion round at $126 billion valuation, plans expansion to 20+ cities
Pinterest Reportedly Fires Employees Who Built a Tool to Track Layoffs
Sam Altman got exceptionally testy over Claude Super Bowl ads

MEDIA CANDY
Shrinking
Star Trek: Starfleet Academy
The Rip
Neil Young gifts Greenland free access to his music and withdraws it from Amazon over Trump
Is it Cake? Valentines

APPS & DOODADS
Sennheiser Consumer Audio IE 200 In-Ear Audiophile Headphones - TrueResponse Transducers for Neutral Sound, Impactful Bass, Detachable Braided Cable with Flexible Ear Hooks - Black
Sennheiser Consumer Audio CX 80S In-ear Headphones with In-line One-Button Smart Remote – Black
Topaz Video
Epstein
Autonomous cars, drones cheerfully obey prompt injection by road sign

AT THE LIBRARY
The Regicide Report (Laundry Files Book 14) by Charles Stross
Scientists Found an Early Signal of Dementia Hidden in Terry Pratchett's Novels
Neil Gaiman Denies the Allegations Against Him (Again) While Announcing a New Book

THE DARK SIDE WITH DAVE
Dave Bittner
The CyberWire
Hacking Humans
Caveat
Control Loop
Only Malware in the Building
The Muppet Show
Disney announces Josh D'Amaro will be its new CEO after Iger departs
A Database of Paper Airplane Designs: Hours of Fun for Kids & Adults Alike
Online (free!) version of Roller Coaster Tycoon.
Speaking of coasters, here's the current world champion.
I am hoping this is satire...
Sir Ian McKellen on Colbert.

CLOSING SHOUT-OUTS
Catherine O'Hara: The Grande Dame of Off-Center Comedy
Standing with Sam 'Balloon Man' Martinez

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
"I didn't use my own software this week because the OpenAI agents were better. And that's me retiring my own software." — Keith TeareSomething broke this week. Both Anthropic and OpenAI launched multi-agent systems—"agent swarms"—that don't just assist with tasks but replace custom-built software entirely. The market noticed: Adobe, Salesforce, Workday, and other legacy SaaS companies saw their stocks collapse in what some are calling a trillion-dollar selloff. Keith Teare joins Andrew Keen on Super Bowl weekend to unpack what may be the most consequential week in AI since ChatGPT launched.The conversation ranges from the Anthropic-OpenAI advertising spat (Dario Amodei's Super Bowl ad vs. Sam Altman's "online tantrum") to the deeper structural shifts: Microsoft and Amazon becoming utilities, Google betting $185 billion on an AI-first pivot, and Elon Musk merging SpaceX with xAI to put data centers in space. Along the way, Teare and Keen debate whether the AI race is a myth or a wacky race, whether venture capital is in crisis, and what happens to human labor when agents do the work.About the GuestKeith Teare is a British-American entrepreneur, investor, and technology analyst. He co-founded RealNames Corporation, a pioneering internet company, and later served as Executive Chairman of TechCrunch. He is the founder of That Was The Week and SignalRank, and publishes a widely-read weekly newsletter on technology, venture capital, and the business of innovation. He brings four decades of experience in Silicon Valley to his analysis of the AI revolution.Chapters:00:00 Super Bowl and the Anthropic ad The spat between Dario Amodei and Sam Altman01:09 "Fundamentally dishonest" Keith's take on the ad war and who's really Dick Dastardly05:47 Anthropic's breakout week Claude Opus 4.6 and the agent swarm launch06:48 OpenAI Codex Multiple agents collaborating on tasks in 10-15 minutes07:42 "It replaces software" Keith retires his own custom-built tools08:16 The trillion-dollar selloff Adobe, Salesforce, Workday, PayPal collapse11:02 Infrastructure vs. innovation Microsoft and Amazon become "utilities"11:45 Google's $185 billion bet Pivoting from hybrid to AI-first13:15 The SpaceX/xAI merger Musk's plan for space-based data centers15:18 The AI wacky race Kimi, OpenAI, Anthropic leapfrog Google17:03 Does AI make us smarter? Leverage tools, not intelligence18:53 AI growing up, CEOs not The adolescence of the industry21:06 US job openings hit five-year low The coming labor crisis22:44 The VC crisis Five funds sucking the air out of the room25:04 Palantir and Anduril The winners in defense AI25:42 Facebook as laggard Huge revenues, no AI momentum26:41 The Washington Post crisis "Boogeyman journalism" and partisan media29:23 Ads in AI Paid links vs. enshittification31:26 Spotify's innovation Physical book + audiobook bundle32:32 Startup of the week Cursor for CRM, $20M from Sequoia33:45 Om Malik on the end of software distribution From CDs to app stores to self-made35:41 Super Bowl prediction Seattle vs. 
New England36:02 Closing "That really was the week in tech"Links & ReferencesMentioned in this episode:That Was The Week newsletter by Keith TeareAnthropic's Super Bowl ad and ad-free pledge (CNBC)Sam Altman's response to Anthropic ads (TechCrunch)SpaceX acquires xAI in $1.25 trillion merger (CNBC)The Washington Post layoffs and crisis (Poynter)Om Malik on the evolution of software distributionOpenAI Codex app launch (OpenAI)About Keen On America Nobody asks more impertinent questions than the Anglo-American writer, filmmaker and SiliconValley entrepreneur Andrew Keen. In Keen On America , Andrew brings his sharp Transatlanticwit to the forces reshaping the United States — hosting daily interviews with leading thinkersand writers about American history, politics, technology, culture, and business. With nearly2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the mostprolific intellectual interview show in the history of podcasting.Website | Substack | YouTube
Go to the Patreon for AD-Free podcasts and exclusives, including the Sunna Wr'alda Solar University about to launch, a deprogramming and restorative initiative. There I will go over the true history of mankind and civilization dating back over 5,000 years, and how the nefarious mythmongers and deceptive moon cult poisoned our heritage and history when compiling their inverse biblical writings. We are restoring our true religion under the Sun. AND I am your pastor.

AI Data Centers may not have anything at all to do with AI. I'm familiar with the fake consumer-grade AI LLMs and their complete difference from the assumed military weapon-grade AI they are said to possess to bring mankind to its knees. But what if that was all or half BULLSHT? These data centers require the consumption of countless GPUs, and that is where the clues may lie. What if these data centers are being constructed to create an artificial REALITY impossible to distinguish from the "real" world, and possibly even favorable for some to that of the natural? Is this really a race to create a system of such high-level illusion, magick trickery, that it ensnares us all? Lots of GPUs working together. Lots of graphics power the likes of which we can't imagine. Is this really about Artificial Intelligence, or is it about Artificial Reality and holographic imprisonment? Consider a vast computer system generating a plausible, artificial reality that captures all of mankind within its illusion. You will own nothing and be happy...

Many thanks for the channel campaign help. We're still a ways away from the goal. See the links below to help get the stuff we need. Thank You!

Pods & Exclusives AD-FREE!
https://patreon.com/c/KristosCast
https://buymeacoffee.com/BaalBusters
https://paypal.me/BaalBusters
https://GiveSendGo.com/BaalBusters
Twitter Account: https://x.com/KristosCast
https://open.spotify.com/show/0vtEmTteIzD2nB5bdQ8qDR

Want Dan's book or his award-winning hot sauces and spicy honey? Go here: https://SemperFryLLC.com
Books and Documentaries You Should Own: https://www.bannedbyamazon.com/
Use Code: BBDan for 10% Off
Use Code BB5 here: https://SemperFryLLC.com
Click the picture on the right for the AZURE WELL products and use code BB5 for your discount.
Find clickable portals to Dr Monzo and Dr Glidden on Dan's site.
Become a supporter of this podcast: https://www.spreaker.com/podcast/ba-al-busters-broadcast--5100262/support
Palantir CEO Alex Karp says anti-ICE protesters should support more government AI tools, sparking debate on surveillance, privacy, and national security. The PBD Podcast panel breaks down Palantir's role in immigration enforcement, China strategy, and the risks of expanding data power.
The Patriotically Correct Radio Show with Stew Peters | #PCRadio
Gia Santos joins Stew to dismantle the myth of Trump's mass deportations. Over a year into his second term, less than 350,000 illegals deported in 2025—far from the promised 50 million. Instead, DHS, ICE, and Border Patrol, backed by Palantir, are building a facial recognition dystopia to track Americans. Jake GTV exposes Trump's sellout betrayal, buried Epstein child rape tapes, and the AI surveillance grid Talmudic perverts are forcing on Americans to destroy us while pushing Netanyahu's Iran war.
President Trump has been knocking down data sharing protections between federal agencies to empower ICE's growing surveillance apparatus. Jason Koebler, cofounder of 404 Media, a digital media company focused on technology, discusses his recent investigations into how big tech is helping ICE to gather data on civilians and ultimately identify, track, and detain undocumented immigrants.
From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn "peeking inside the model" into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual "SAEs are cool" take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by "slurping supervision through a straw," hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what "frontier-scale interp" means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and "pixel-space" world models.

We discuss:
* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What "interpretability" actually means in practice: not just post-hoc poking, but a broader "science of deep learning" approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: "surgical edits" for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about "clean concept spaces"
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling them via LLMs, and toggling a "Gen-Z slang" feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / "user-pleasing" circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + "pixel-space" interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from "data in, weights out" to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:05:30 Understanding Errors
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:10:15 404 Not Found Explained
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:15:00 Troubleshooting Errors
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:20:45 Conclusion
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. And Mochi, Vibhu's special co-host, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?

Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the next generation, the next frontier, of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.

Shawn Wang [00:00:55]: Yeah. And that's always the official description. Is there an unofficial one that resonates more with a different audience?

Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, obviously people have a lot in mind when they think of interpretability, and I think we have a pretty broad definition of what it means and the types of places it can be applied. In particular, applying it in production scenarios, in high-stakes industries, and really taking it from the research world into the real world. Which, you know, it's a new field, so that hasn't been done all that much, and we're excited about actually seeing it put into practice.

Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was still putting out toy models of superposition and that kind of stuff, and I wouldn't have pegged the field to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then, not to bury the lede, today we're also announcing the fundraise, your Series B: $150 million at a $1.25B valuation. Congrats, unicorn.

Mark Bissell [00:02:02]: Thank you. Yeah, things move fast.

Shawn Wang [00:02:04]: We were talking to you in December and there are already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting health use cases. I don't know how related they are in practice.

Mark Bissell [00:02:22]: Yeah, not super related, but it was helpful context to know what it's like to work with health systems and generally in that domain.

Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, where actually I also worked back in the day.

Myra Deng [00:02:37]: Wow, nice. Did we overlap at all?

Shawn Wang [00:02:38]: No, this was when I was briefly a software engineer, before I became a sort of developer relations person. And now you're head of product. What are your respective roles, just to introduce people to what all gets done at Goodfire?

Mark Bissell [00:02:51]: Yeah, prior to Goodfire I was at Palantir for about three years as a forward deployed engineer, now a hot term; it wasn't always that way. I was a technical lead on the healthcare team. And at Goodfire I'm a member of the technical staff, which honestly is about as specific as I could describe myself, because I've worked on a range of things.
It's a fun time to be at a team that's still reasonably small. I think when I joined I was one of the first ten or so employees; now we're above 40, but there's still always a mix of research and engineering and product, and all of the above, that needs to get done, and everyone across the team is a pretty good switch hitter in the roles they take on. You've seen some of the stuff I worked on related to image models, which was sort of a research demo. More recently I've been working on our scientific discovery team with some of our life sciences partners, but also building out our core platform, which flexes more of the MLE and developer skills.

Shawn Wang [00:03:53]: Very generalist. And you also had a very founding-engineer type role.

Myra Deng [00:03:59]: Yeah. I also started as, and still am, a member of technical staff, and did a wide range of things from the very beginning, including finding our office space and all of that.

Shawn Wang: Which we both visited when you had that open house thing. It was really nice.

Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.

Shawn Wang [00:04:15]: It looked like it has room for 200 people. But you guys are like 10.

Myra Deng [00:04:22]: For a while it was very empty. But yeah, like Mark, I spend a lot of my time, as head of product, thinking about how we take our frontier research and really apply it to the most important real-world problems, and how that then translates into a repeatable platform or product. Product is a bit of a weird role these days, but a lot of it is working across the engineering and research teams to make that happen, and also communicating to the world: what is interpretability? What is it used for? What is it good for? Why is it so important? All of that is part of my day-to-day as well.

Shawn Wang [00:05:01]: I love "what is" questions, because that's a very crisp starting point for people coming to a field. Vibhu, why don't you try tackling "what is interpretability," and then they can correct us.

Vibhu Sapra [00:05:13]: Okay, great. First, just to kick off, it's a very interesting role to be head of product, right? Because as a lab you're more of an applied interp lab, which is pretty different from normal interp with a lot of background research. You actually ship an API to try these things; you have Ember; you have products around it, which not many do. Okay, what is interp? Basically, you're trying to get an understanding of what's going on inside the model, in its internals. There are different approaches to do that: probing, SAEs, transcoders, all this stuff. Basically you have a hypothesis, something you want to learn about what's happening in the model internals, and you try to answer it from there. You can do activation mapping, you can try steering; there's a lot you can do. But the key question is: from input to output, can we get a better understanding of what's happening, and how can we adjust what's happening in the model internals? How'd I do?

Mark Bissell [00:06:12]: That was really good. I think that was great.
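A minimal sketch of the probing idea Vibhu just described: cache a model's hidden activations on labeled text, then fit a linear classifier on them. The model, layer choice, and toy "concept" below are illustrative stand-ins, not Goodfire's actual stack or API.

```python
# Linear-probe sketch (illustrative). If a concept is roughly linearly
# represented in the residual stream, a logistic regression trained on
# hidden activations can detect it.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL = "gpt2"  # stand-in; any model exposing hidden states works
LAYER = 6       # hypothetical choice of layer to probe

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True).eval()

def last_token_activation(text: str) -> torch.Tensor:
    """Return the LAYER-th hidden state of the final token."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[LAYER][0, -1]  # shape: (d_model,)

# Toy labeled data: "contains an email address" as a stand-in concept.
texts = ["contact me at jane@example.com", "the weather is nice today",
         "email bob@foo.org for details", "I like turtles"]
labels = [1, 0, 1, 0]

X = torch.stack([last_token_activation(t) for t in texts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.predict_proba(X)[:, 1])  # per-example concept score
```

A real probe would use thousands of held-out examples and a sweep over layers; the point is only that "reading the internals" can be this mechanically simple.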
Mark Bissell: It's also kind of a minefield: if you ask 50 people who quote-unquote work in interp "what is interpretability," you'll probably get 50 different answers. And to some extent that's also where Goodfire sits in the space. We're an AI research company above all else, and interpretability is a set of methods we think are really useful and worth specializing in, in order to accomplish the goals we want to accomplish. But we also see the goal as something even broader, almost the science of deep learning: taking a not-black-box approach to any part of the AI development lifecycle, whether that means using interp for data curation while you're training your model, or for understanding what happened during post-training, or for understanding activations and internal representations, what is in there semantically. And a lot of the exciting updates that are part of the fundraise are around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff has been post-hoc poking at models, as opposed to actually using it to intentionally design them.

Shawn Wang [00:07:29]: Is this post-training or pre-training, or is that not a useful distinction?

Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.

Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking of rollouts, or having different variations of a model that you can tweak with your steering.

Myra Deng [00:07:50]: And in a lot of the news you've seen on Twitter or wherever, a lot of unintended side effects come out of post-training processes: overly sycophantic models, or models that exhibit strange reward-hacking behavior. Those are extreme examples; there are also more mundane enterprise use cases where someone tries to customize or post-train a model to do something and it learns some noise, or doesn't appropriately learn the target task. And a big question we've always had is: how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?

Shawn Wang [00:08:26]: Yeah. Just to anchor this for people, one of the biggest controversies of last year was 4o GlazeGate.

Mark Bissell: I've never heard of GlazeGate. I didn't know that was what it was called.

Shawn Wang: They called it that on the blog post, and I was like, did OpenAI officially use that term? That's funny. But yeah, I guess the pitch is that if they had worked with Goodfire, they would have avoided it. You know what I'm saying?

Myra Deng [00:08:51]: I think so. Yeah.

Mark Bissell [00:08:53]: I think that's certainly one of the use cases. The reason post-training is a place where this makes a lot of sense is that a lot of what we're talking about is surgical edits. You want to be able to take expert feedback and very surgically change how your model behaves, whether that is removing a certain behavior that it has.
So one of the things we've been looking at, another common area where you would want to make a somewhat surgical edit, is models that have, say, political bias. You look at Qwen or R1 and they have this sort of CCP bias.

Shawn Wang [00:09:27]: Is there a CCP vector?

Mark Bissell [00:09:29]: Well, there are certainly internal parts of the representation space where you can see where that lives. And you want to extract that piece out.

Shawn Wang [00:09:40]: Well, I always say, whenever you find a vector, a fun exercise is to make it very negative and see what the opposite of CCP is.

Mark Bissell [00:09:47]: Super-America, bald eagles flying everywhere. But yeah, in general there are lots of post-training tasks where you'd want to be able to do that, whether it's unlearning a certain behavior or... are you familiar with the grokking behavior?

Shawn Wang: I know the machine learning term grokking, yeah.

Mark Bissell [00:10:09]: Sort of this double descent idea: having a model that learns a generalizing solution, as opposed to memorization, even when memorizing the task would suffice. You want it to learn the more general way of doing the thing. So another way to think about having surgical access to a model's internals is: learn from this data, but learn it in the right way, if there are many possible ways to do that. Can interp solve the double descent problem?

Shawn Wang [00:10:41]: Depends, I guess, on how you look at it. I viewed double descent as a problem because then you're like: well, if the loss curve levels out, you're done; but maybe you're not done, right? But if you can actually interpret what is generalizing, what is still changing even though the loss is not changing, then maybe you don't view it as a double descent problem at all. You're just translating the space in which you view the loss, and then you have a smooth curve.

Mark Bissell [00:11:11]: That's certainly the domain of problems we're looking to get at.

Shawn Wang [00:11:15]: To me, double descent is like the biggest threat to ML research, where if you believe in scaling, you need to know where to scale. But if you believe in double descent, then you don't believe in anything leveling off.

Vibhu Sapra [00:11:30]: Also, tangentially, when you talk about the China vector: there's the subliminal learning work from the Anthropic Fellows program, where basically you can have hidden biases in a model, and as you distill down, as you train on distilled data, those biases still show up, even if you explicitly try not to train on them. So it's another use case: if we can interpret what's happening in post-training, can we clear some of this out? Can we even determine what's there? It's just some worrying research out there showing that we really don't know what's going on.

Mark Bissell [00:12:06]: Yeah. I think that's the biggest sentiment we're hoping to tackle.
Nobody knows what's going on, right? Subliminal learning is just an insane concept when you think about it. Train a model on, not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. You see behaviors like that that just defy intuition. There are mathematical explanations you can get into, but...

Shawn Wang [00:12:34]: It feels so early days. Objectively, there are sequences of numbers that are more owl-like than others. There should be.

Mark Bissell [00:12:40]: According to certain models, right? It's interesting; I think it only applies to models that were initialized from the same starting seed, usually.

Shawn Wang [00:12:49]: But I mean, I think that's a cheat code, because there's not enough compute. If you believe in something like platonic representation, it will probably transfer across different models as well.

Mark Bissell [00:13:00]: Oh, you think so? I think of it more as a statistical artifact of models initialized from the same seed. There's something path-dependent from that seed that might cause certain overlaps in the latent space, and then doing this distillation pushes it towards having certain other tendencies.

Vibhu Sapra [00:13:24]: Got it. I think there are a bunch of these open-ended questions, right? Like: you can't train in new stuff during the RL phase; RL only reorganizes weights, and you can only do things that are somewhat there in your base model. You're not learning new stuff, you're reordering what's there. But my broader question is: when you work at an interp lab, how do you decide what to work on, and what's the thought process? Because we can ramble for hours: I want to know this, I want to know that. There are different approaches to solving a problem; I can try prompting, I can look at chain of thought, I can train probes, SAEs. But how do you determine whether something is going anywhere? What's the workflow?

Myra Deng [00:14:07]: It's a really good question. From the very beginning of the company we've thought: let's go learn what isn't working in machine learning today, whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart, and then developing a perspective on how we can push the frontier using interpretability methods. Even our chief scientist, Tom, spends a lot of time talking to customers, trying to understand real-world problems, then taking that back, applying the current state of the art to those problems, and seeing where it falls down. We use those failures, those shortcomings, to understand which hills to climb in interpretability research. On the fundamental side, for instance, when we did some work applying SAEs and probes, we encountered some shortcomings in SAEs that we found a little surprising, so we went back to the drawing board and did work on that. We've also done some work on better foundational interpreter models.
A lot of our team's research is focused on what the next evolution beyond SAEs looks like. And when it comes to control and design of models: we tried steering with our first API and realized it still fell short of black-box techniques like prompting or fine-tuning, so we went back to the drawing board and asked, how do we make that not the case, how do we improve beyond that? One of our researchers, Ekdeep, who just joined (he and Atticus are steering experts), has spent a lot of time figuring out what research enables us to actually do this in a much more powerful, robust way. So yeah, the answer is: look at real-world problems, translate that into a research agenda, and then hill-climb on both at the same time.

Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double-click when you drop hints like "we found some problems with SAEs." Okay, what are they? And then we can go into the demo.

Myra Deng [00:16:19]: I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But for instance, when we try to detect behaviors within models that are harmful, or behaviors a user might not want their model to have (hallucinations, harmful intent, PII, all of these things), we first tried using SAE probes: taking the feature activation space from SAEs, training classifiers on top of it, and seeing how well we can detect the properties we want to detect in model behavior. And we've seen, in many cases, that probes trained directly on raw activations seem to perform better than SAE probes. That's a bit surprising if you think SAEs are capturing the concepts you'd want, cleanly and surgically. I'm not down on SAEs at all; there are many, many things they're useful for. But we have definitely run into cases where the concept space described by SAEs is not as clean and accurate as we'd expect, measured on real-world downstream performance metrics.

Mark Bissell [00:17:34]: Fair enough. It's the blessing and the curse of unsupervised methods: you get to peek into the AI's mind, but sometimes you wish you saw other things when you walked inside. Although in the PII instance, didn't an SAE-based approach actually prove to be the most generalizable?

Myra Deng [00:17:53]: It did work well in the case we published with Rakuten, and a lot of the reason it worked well is that we had a noisier data set. The blessing of unsupervised learning there is that we got more meaningful, generalizable signal from SAEs when the data was noisy. In other cases, where we've had good data sets, that hasn't been the case.

Shawn Wang [00:18:14]: And since you named Rakuten, and I don't know if we'll get another chance: what is Rakuten's overall production usage?
Myra Deng [00:18:25]: Yeah. They are using us to essentially guardrail and inference-time monitor their language model and agent usage, to detect things like PII so that they don't route private user information to downstream providers. That's going through all of their user queries every day, and it's something we deployed with them a few months ago. Now we're exploring very early partnerships, not just with Rakuten but with others, around how we can help with training and customization use cases as well.

Shawn Wang [00:19:03]: And for those who don't know, Rakuten is, I think, the number one or number two e-commerce store in Japan.

Myra Deng: Yes.

Mark Bissell [00:19:10]: And I think that use case highlights a lot of what it looks like to deploy things in practice that you don't always think about when you're doing research tasks. Some of what came up there is more complex than your idealized version of the problem. They were dealing with synthetic-to-real transfer of methods: they couldn't train probes, classifiers, things like that, on actual customer PII, so we had to use synthetic data sets and hope the methods transfer out of domain to real data. We can evaluate performance on the real data sets, but not train on customer PII. That right off the bat is a big challenge. Then there were multilingual requirements: this needed to work for both English and Japanese text, and Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs and had us pulling our hair out. And on a lot of tasks you might make simplifying assumptions, treating it as the easiest version of the problem to get general results: maybe you classify a whole sentence as "does this contain PII?" But the need Rakuten had was token-level classification, so you can precisely scrub out the PII. As we learned more about the problem, a lot of assumptions ended up breaking. It's one instance where a problem that seems simple right off the bat gets more complex as you keep diving in.

Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is that a lot of these methods are very efficient, right? You're just looking at the model's own internals. Compare that to a separate guardrail, an LLM-as-a-judge, a separate model: you have to host it, there's a whole latency question, and if you use a big model you have a second call. Some of the work around self-detection of hallucination is also deployed for efficiency. So if someone like Rakuten is doing this in production, live, that's another thing people should consider.

Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. It's really no extra latency.

Shawn Wang [00:21:17]: You have the steering demo lined up, so we can just see what you've got. I don't actually know if this is the latest-latest or an alpha thing.

Mark Bissell [00:21:26]: No, this is a pretty hacky demo, from a presentation someone else on the team recently gave, but it will give a sense for the technology, so you can see the steering in action.
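Before the demo, a rough sketch of the token-level flagging Mark just described, as opposed to sentence-level classification. The probe weights, threshold, and redaction policy below are invented for illustration; the operational point is that the probe reuses activations from a forward pass that is happening anyway, so the marginal cost is roughly one matrix-vector product per token.

```python
# Token-level PII flagging sketch (hypothetical names and weights).
import torch

d_model = 768
W = torch.randn(d_model)  # in practice: a trained probe direction
b = torch.tensor(0.0)
THRESHOLD = 0.9           # tuned for precision on held-out (synthetic) data

def flag_pii_tokens(hidden: torch.Tensor) -> torch.Tensor:
    """hidden: (seq_len, d_model) activations from one layer.
    Returns a boolean mask over tokens the probe flags as PII."""
    scores = torch.sigmoid(hidden @ W + b)  # one score per token
    return scores > THRESHOLD

def scrub(tokens: list[str], mask: torch.Tensor) -> list[str]:
    """Redact flagged tokens before routing text downstream."""
    return ["[REDACTED]" if m else t for t, m in zip(tokens, mask)]

# Inside a serving loop, after the model's forward pass:
hidden = torch.randn(5, d_model)  # stand-in activations
tokens = ["my", "number", "is", "555", "-1234"]
print(scrub(tokens, flag_pii_tokens(hidden)))
```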
Mark Bissell: Honestly, I think the biggest thing this highlights is that as we've been growing as a company, taking on more and more ambitious versions of interpretability problems, a lot of it comes down to scaling up in various forms. Here you're going to see steering on a one-trillion-parameter model: this is Kimi K2. It's fun that, in addition to the research challenges, there are engineering challenges we're now tackling, because for any of this to be useful in production you need to think about what it looks like to use these methods on frontier models, as opposed to toy model organisms. So yeah, this was thrown together hastily and is pretty fragile behind the scenes, but I think it's quite a fun demo. Screen sharing is on. I've got two terminal sessions pulled up here. On the left is a forked version of the Kimi CLI that we've pointed at our custom-hosted Kimi model, and on the right is a setup that lets us steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. The CLI is running locally, but the Kimi server is running back at the office.

Shawn Wang: That would be too much to run on that Mac.

Mark Bissell: Yeah, I think it takes a full H100 node; you can run it on eight GPUs, eight H100s. So, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang codebase that we've been working on, so I'm going to tell it: hey, this SGLang codebase is slow, I think there's a bug, can you try to figure it out? It's a big codebase, so it'll spend some time on that. And on the right here, I'm going to initialize some steering in real time. Let's see here.

Mark Bissell [00:23:33]: It's searching for any bugs. Feature ID 43205... 20, 30, 40. This is basically a feature we found inside Kimi that seems to cause it to speak in Gen Z slang. On the left it's still thinking normally; it might take, I don't know, 15 seconds for the steering to kick in, but then we should hopefully start seeing it say things like "this code base is massive, for real." We're going to see Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and in its actual outputs.

Mark Bissell [00:24:19]: And interestingly, you can see it's still able to call tools and so on. It's purely its demeanor that changes. There are other features we found for interesting things, like concision (a more practical one: you can make it more concise) or the programming languages it uses. But yeah, here come the outputs.

Shawn Wang [00:24:43]: "Scheduler code is actually wild."

Vibhu Sapra [00:24:46]: "Yo, this code is actually insane, bro."

Vibhu Sapra [00:24:53]: What's the process of training an SAE on this? How do you label features? I know you guys put out a pretty cool blog post about autonomous interp, something about how agents for interp are different from coding agents. While this is spewing out: how do we find feature 43205?
Mark Bissell [00:25:15]: Yeah. So our platform, which we've been building out for a long time now, supports all the classic out-of-the-box interp techniques you might want: SAE training, probing, things of that kind. The techniques for vanilla SAEs are pretty well established now: you take the model you're interpreting, run a whole bunch of data through it, gather activations, and then it's a pretty straightforward pipeline to train an SAE. There are a lot of varieties: top-k SAEs, batch top-k SAEs, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them, to actually understand that this is a Gen Z feature, is where a lot of the magic happens. The most basic standard technique is: look at the input dataset examples that cause a feature to fire most highly, and you can usually pick out a pattern. For this feature, if I've run a diverse enough dataset through my model, feature 43205 probably tends to fire on tokens that sound like Gen Z slang. So you could have a human go through all 43,000-plus concepts, or, to your point about autonomous interp, automate that labeling with an LLM.

Vibhu Sapra [00:26:34]: And I've got to ask the basic question: can we get examples where it hallucinates, pass them through, see what feature activates for hallucinations? Can I just turn hallucination down?

Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. This is interesting because hallucination is very hard to detect; it's kind of a hairy problem, and something black-box methods really struggle with. Gen Z slang you could always detect with a simple classifier; hallucination is harder. But we've seen that models internally have some awareness of uncertainty, or some user-pleasing behavior, that leads to hallucinatory outputs. So yeah, we have a project trying to detect that accurately, and we're also working on mitigating the hallucinatory behavior in the model itself.

Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of "oh, I'll just turn temperature to zero and that turns off hallucination." And I'm like, well, that's a fundamental misunderstanding of how this works.

Mark Bissell [00:27:51]: Part of what I like about that question is that there are SAE-based approaches that might help you get at it. But oftentimes the beauty of SAEs, and like we said, the curse, is that they're unsupervised. When you have a behavior you deliberately want to remove, that's more of a supervised task, and often it's better to use something like probes and specifically target the thing you're interested in reducing, as opposed to hoping that when you fragment the latent space, one of the vectors that pops out is the one you need.

Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not at all certain we'll get something that corresponds cleanly to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.

Mark Bissell [00:28:36]: Of course. Right. Yeah.
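A compressed, illustrative sketch of the pipeline Mark describes: train a top-k sparse autoencoder on cached activations, then label a feature by surfacing its max-activating examples for a human (or an LLM) to name. Dictionary sizes, the feature id, and the training data are placeholders, not Goodfire's production setup.

```python
# Top-k SAE sketch: learn a sparse, overcomplete dictionary over
# activations; label features via their max-activating examples.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model=768, n_features=16384, k=32):
        super().__init__()
        self.k = k
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model)

    def forward(self, x):
        pre = self.enc(x)
        # Keep the k largest pre-activations per example; zero the rest.
        topk = torch.topk(pre, self.k, dim=-1)
        codes = torch.zeros_like(pre).scatter(
            -1, topk.indices, torch.relu(topk.values))
        return self.dec(codes), codes

sae = TopKSAE()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(4096, 768)  # stand-in for cached model activations

for batch in acts.split(256):
    recon, codes = sae(batch)
    loss = ((recon - batch) ** 2).mean()  # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()

# "Auto-labeling": surface the inputs where one feature fires hardest,
# then have a human or an LLM name the pattern they share.
feature_id = 43205 % 16384  # nod to the demo's feature id
with torch.no_grad():
    _, codes = sae(acts)
top_rows = codes[:, feature_id].topk(5).indices
print("inspect the inputs at rows:", top_rows.tolist())
```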
Mark Bissell: There are known problems with feature splitting and feature absorption. And then there are off-target effects, right? Ideally you'd be very precise: you don't want to reduce the hallucination feature and suddenly find your model can't write creatively anymore. Maybe you don't like that; you want it to stop hallucinating facts and figures while keeping the rest.

Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But since your demo is done: anything else you want to highlight, or other interesting features you want to show?

Mark Bissell [00:29:07]: I don't think so. Like I said, this is a pretty small snippet. The main point I find exciting is that there's not a whole lot of interp being applied to models at quite this scale. Anthropic certainly has some research, and other teams as well, but it's nice to see these techniques being put into practice. Not that long ago, the idea of real-time steering of a trillion-parameter model would have sounded crazy.

Shawn Wang [00:29:33]: Yeah. The fact that it's real time: you started the thing and then you edited the steering vector.

Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case for the real-time editing is; that's the fun part of the demo, right? You can kind of see how this could be served behind an API: you only have so many knobs, and you can tweak them a bit more. And people haven't done that much with how this works with or without prompting. How does it work with fine-tuning? There's a whole hype of continual learning. There's just so much to explore. Is this another parameter we just leave at a default and don't use? Maybe someone here wants to put out a guide on how to use this with prompting, and when to do what.

Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I can't say enough amazing things about him. He has a paper, along with some others from the team and elsewhere, that gets at the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive-neuroscience, Bayesian framework, but basically you can precisely show how prompting (in-context learning) and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks. It's a really cool paper.

Vibhu Sapra: Are you saying steering is less powerful than prompting?

Mark Bissell: More like you can almost write a formula that tells you how to convert between the two.

Myra Deng [00:31:20]: They're formally equivalent, actually, in the limit. Right.

Mark Bissell [00:31:24]: So one case study of this is jailbreaks. Have you seen the many-shot jailbreaking stuff? You flood the context with examples of the behavior.
Anthropic put out that paper.

Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.

Mark Bissell [00:31:40]: What's in this in-context learning and activation steering equivalence paper is that you can predict the number of examples you will need to put in the context in order to jailbreak the model, by doing steering experiments and using this equivalence mapping.

Shawn Wang [00:32:02]: That's cool. That's really cool. It's very neat. And I can back-rationalize that this makes sense, because what context is, basically, is that it updates the KV cache, and every next-token inference is still the sum of everything, all the context, up to that point. You could theoretically replace that with your steering. The only problem is that steering is typically on one layer, maybe three layers like you did, so it's not exactly equivalent.

Mark Bissell [00:32:33]: Right, right. You need to get precise about how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here: "Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering." Eric Bigelow and Dan Urgraft, who are doing fellowships at Goodfire, are on it, and Ekdeep is the final author there.

Myra Deng [00:32:59]: To your question of what the production use case of steering is: think one level beyond steering as it exists today. Imagine if you could adapt your model to be, say, an expert legal reasoner, almost in real time, very quickly and efficiently, using human feedback, using your semantic understanding of what the model knows and where that behavior lives. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about the next interface for model customization and adaptation is a really interesting problem for us. We've heard from a lot of people interested in fine-tuning and RL for open-weight models in production, using things like Tinker or open-source libraries, but it's still very difficult to get models fine-tuned and RL'd to do exactly what you want unless you're an expert at model training. So that's something we're looking into.

Shawn Wang [00:34:06]: Yeah. Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? What's the comparison there?

Mark Bissell [00:34:19]: Well, in that case you are still applying updates to the parameters, right?

Shawn Wang [00:34:25]: Yeah. You're not touching the base model; you're touching an adapter. Kind of, yeah.

Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space, then. Maybe it's: are you modifying the pipes, or are you modifying the water flowing through the pipes, to get what you're after? That's maybe one way to put it.

Shawn Wang [00:34:44]: I like that analogy.
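The pipes-versus-water analogy fits in a few lines. A rank-one LoRA edits the weights (the pipes): W becomes W + α·b·aᵀ. Activation steering leaves the weights alone and nudges the activations (the water): h becomes h + α·v at inference time. Everything below is a stand-in sketch, not either product's actual API.

```python
# Pipes vs. water: rank-one LoRA edits parameters, steering edits
# activations. Both are rank-one interventions in different spaces.
import torch

d = 768
W = torch.randn(d, d)  # a frozen weight matrix somewhere in the model
h = torch.randn(d)     # an activation flowing through it

# Rank-one LoRA: permanently reroute the pipes.
a, b = torch.randn(d), torch.randn(d)
alpha = 0.1
W_adapted = W + alpha * torch.outer(b, a)  # W + alpha * b a^T
out_lora = W_adapted @ h

# Activation steering: leave W alone, nudge the water at inference.
v = torch.randn(d)     # e.g. an SAE feature direction
out_steer = W @ (h + alpha * v)

# The LoRA update's contribution scales with the input (b * (a @ h)),
# while steering adds a contribution (alpha * W @ v) that is the same
# fixed direction regardless of which h arrives.
print(out_lora.shape, out_steer.shape)
```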
Mark Bissell: That's my mental map of it, at least. But it gets at this idea of model design, intentional design, which is something we're very focused on. I hope we look back at how we're currently training and post-training models and think: what a primitive way of doing that. There's no intentionality, really, in...

Shawn Wang [00:35:06]: It's just data, right? The only thing in control is what data we feed in.

Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: he has a couple of young kids, and he asks, what if I could only teach my kids to be good people by giving them cookies, or giving them a slap on the wrist when they do something wrong, without telling them why it was wrong or what they should have done differently?

Shawn Wang: Just figure it out.

Mark Bissell: Right, exactly. That's RL. And it's sample-inefficient. What do they say: it's like slurping supervision through a straw. You'd like to get to a point where experts can give feedback to their models that gets internalized. Steering is an inference-time way of getting at that idea, but ideally you're moving to a world where there is much more intentional design, in perpetuity, for these models.

Vibhu Sapra [00:36:04]: Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically: you're at a research lab that trains foundation models, and you're on an interp team; how does it tie back? Do ideas come from the pre-training team? Do they flow back? For those interested, you can watch that episode. There wasn't too much of a connection there, but it's something they want to push for down the line.

Mark Bissell [00:36:33]: It can be useful for all of the above. There are certainly post-hoc use cases where it doesn't need to touch training at all.

Vibhu Sapra [00:36:39]: I think the other thing a lot of people forget is that this stuff isn't too computationally expensive, right? If you're interested in getting into research, MechInterp is one of the most approachable fields. A lot of this, train an SAE, train a probe: the budget is modest, there's already a lot done, and there's a lot of open-source work. You guys have done some too.

Shawn Wang [00:37:04]: There are notebooks from the Gemini team, from Neel Nanda: this is how you do it, just step through the notebook.

Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress; you can look at different activations. And if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars; it's not that high-scale. Same with applying it in post-training: fairly cheap compared to "I want to get into model training but I don't have compute for pre-training." So it's a very nice field to get into. And there are a lot of open questions. Some have to do with "I want a product, I want to solve this," but there's also just a lot of open-ended stuff people could work on, which is interesting. I don't know if you have any calls for open questions, open work you'd either collaborate on or just like to see solved, for people listening who want to get into MechInterp. What should they check out? Besides, of course, joining you guys; I'm sure you're hiring.

Myra Deng [00:38:09]: There's a paper, I think from Lee Sharkey, "Open Problems in Mechanistic Interpretability," which I recommend everyone interested in the field read. It's a really comprehensive overview of what experts in the field think are the most important problems to be solved. I also think it's been really inspiring to see a lot of young people getting interested in interpretability, and actually not just young people: also scientists who have been experts in physics or biology for many years transitioning into interp, because the barrier to entry is in some ways low and there's a lot of information out there and ways to get started. There's this anecdote of professors saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. It goes to show how exciting the field is, how fast it's moving, and how quick it is to get started.

Mark Bissell [00:39:10]: It's also just a very welcoming community. There's an open-source MechInterp Slack channel, people are always posting questions, and folks in the space are responsive if you ask things on various forums. But yeah, the open problems paper is a really good one.

Myra Deng [00:39:28]: For people who want to get started, MATS is a great program. What's the acronym? Machine Learning and Alignment Theory Scholars?

Vibhu Sapra [00:39:40]: Normally summer-internship style.

Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now, and a lot of our full-time staff have come through that program. It's great for anyone transitioning into interpretability. There are a couple of other fellows programs; we run one, as does Anthropic. Those are great places to get started.

Mark Bissell [00:40:03]: Also, interp has been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it scales up.

Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever MechInterp track at AI Engineer Europe, because I see the industry applications now emerging, and I'm pretty excited to help push that along.

Mark Bissell: Yeah, I was looking forward to that. It'll effectively be the first industry MechInterp conference. I'm so glad you added that.

Shawn Wang: You know, it's still a little bit of a bet; it's not that widespread. But I can definitely see this is the time to really get into it. We want to be early on things.

Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the MechInterp workshop this year was Actionable Interpretability, and there was a lot of discussion around bringing it to various domains. Everyone's adding "pragmatic," "actionable," whatever.

Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.

Vibhu Sapra [00:41:13]: And just being at these conferences, you see the interp room. At one of the old-school conferences, I think they had a very tiny room until they got lucky and it got doubled. But there's definitely a lot of interest, a lot of niche research. You see a lot of research coming out of universities, from students. We covered a paper last week from two unknown authors, without many citations. But you can do a lot of meaningful work there.

Shawn Wang [00:41:39]: One thing we haven't really mentioned yet is interp for code. I think it's an abnormally important field. The conspiracy theory two years ago, when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and turn the good-code vector up. And isn't that the dream? But then, why is it funny? If it were realistic, it would not be funny; it would be, no, actually, we should do this. It's funny because we feel there are some limitations to what steering can do. A lot of the public image of steering is the Gen Z stuff: you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To make it a legal reasoner seems like a huge stretch, and I don't know if it will get there this way.

Myra Deng [00:42:36]: I will say we are announcing something very soon that I won't speak too much about. But this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions that you need to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms.

Shawn Wang [00:43:07]: And is this an emergent property of scale as well?

Myra Deng [00:43:10]: I think so. Scale definitely helps. Scale allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data, and not learning the things exhibited in the data that you don't want. So we're not anti-scale, but we're also realizing that scale alone is not going to get us to the type of AI development we want as these models get more powerful and get deployed in all these mission-critical contexts. The current life cycle of training, deploying, and evaluating models is, to us, deeply broken, and has opportunities to improve. So, more to come on that very soon.

Mark Bissell [00:44:02]: And I think that's a use case, basically, or maybe just a proof point that these concepts do exist.
If you can manipulate them in precisely the right way, you can get the ideal combination of them that you desire. Steering is maybe the most coarse-grained peek at what that looks like, but I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.

Myra Deng [00:44:30]: There were bad-code features. I've got it pulled up.

Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys were talking.

Shawn Wang [00:44:35]: This is exactly it.

Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not typo detection in general: it's typos in code, not typical typos. You can see it clearly activates where there's something wrong in the code. And they have malicious code, code error, a whole bunch of finer-grained sub-features.

Shawn Wang [00:45:02]: So the rough intuition for me, why I brought up post-training, was that you just have a few different rollouts with these features turned off and on, and then that's synthetic data you can post-train on.

Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is. They do the real hard work.

Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. We replicated a lot of these features in our Llama models as well.

Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours, DeepMind has open-sourced a lot of SAEs on Gemma, and even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.

Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well. An amazing piece of work for visualizing those things.

Myra Deng [00:45:49]: Yeah, exactly.

Shawn Wang [00:45:50]: I wanted to pivot a little onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate pod that we're starting up for AI for science, just because it's such a huge investment category and I'm less qualified to cover it; we actually have bio PhDs for that, which is great. But I want to recap your work, maybe on the Evo 2 stuff, and then build forward.

Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: another interesting lens on interpretability in general is that a lot of the techniques we described are ways to solve the AI-human interface problem, and bidirectional communication is the goal there. What we've been talking about with intentional design of models, steering, and also more advanced techniques, is having humans impart our desires and control over models. The reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well.
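Before the conversation turns to life sciences, here is a rough illustration of the inference-time steering idea above: adding a scaled feature direction into one layer's residual stream with a forward hook. The model choice, layer index, coefficient, and the random vector standing in for a learned feature (say, an SAE decoder direction) are all assumptions for the sake of the sketch; this is not Goodfire's or Anthropic's actual setup.

```python
# Minimal activation-steering sketch, using HuggingFace transformers.
# A "feature direction" is added to the residual stream of one layer
# during generation. Everything concrete here is a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"               # hypothetical model choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer_idx = 12                                        # which layer to steer (assumption)
direction = torch.randn(model.config.hidden_size)     # stand-in for a learned feature vector
direction = direction / direction.norm()
strength = 8.0                                        # steering coefficient; negative suppresses

def steer(module, inputs, output):
    # Decoder layers typically return a tuple whose first element is the
    # hidden states; handle a bare tensor too, just in case.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + strength * direction.to(dtype=hidden.dtype, device=hidden.device)
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

handle = model.model.layers[layer_idx].register_forward_hook(steer)
ids = tok("The Golden Gate Bridge", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30)
print(tok.decode(out[0]))
handle.remove()                                       # always detach the hook afterwards
```

Flipping the coefficient negative is the "turn the bad-code vector down" intuition; for the steered-rollout idea, you would generate with the feature on and off and keep the paired outputs as synthetic post-training data.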
What knowledge can the AIs teach us? That's the other direction of it. Some of our life science work to date has been getting at exactly that question. Some of it does look like debugging these various life sciences models: understanding whether they're actually performing well on tasks, or whether they're picking up on spurious correlations. For instance, with genomics models, you would like to know whether they are focusing on the biologically relevant things that you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they understand elements of the human genome that we don't have names for, discoveries that they've made that we don't know about. That's a big goal. And we're already seeing that: we are partnered with organizations like Mayo Clinic, a leading research health system in the United States, the Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. In our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.

Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out, because there's so much potential in the research. It's very interesting how it's basically the same as language models, just with a different underlying data set. The techniques are exactly the same; there's no change, basically.

Mark Bissell [00:48:59]: Yeah. And even in other domains, right? In robotics, a lot of the companies just use Gemma as the backbone, and then make it into a VLA that takes actions. It's transformers all the way down.

Vibhu Sapra [00:49:15]: We have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on this stuff: 3D scans, medical domain knowledge, and all of that. So there's a push from both sides. But one of the things about MechInterp is that you're a little more cautious in some domains, healthcare mainly being one. Guardrails, understanding: we're more risk-averse to something going wrong there. So even just at the level of basic understanding, if we're trusting these systems to make claims, we want to know why and what's going on.

Myra Deng [00:49:51]: Yeah, I think there's definitely a deployment bottleneck to actually using foundation models for real patient care and things like that. Say you're using a model for rare disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment, actually is a really big unlock for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
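As an aside on the spurious-correlation debugging Mark describes: one simple, standard check is a linear probe on the model's embeddings. If a small classifier can read a confound such as ancestry out of the representation well above chance, the model has access to it and its predictions may lean on it rather than on biology. A hedged sketch, with random arrays standing in for real embeddings and labels:

```python
# Minimal linear-probe sketch for detecting a spurious correlate in a
# model's representations. Illustrative only; data and labels are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for embeddings extracted from a genomics foundation model.
embeddings = rng.normal(size=(2000, 512))
# Stand-in labels for the confound being probed for (e.g., ancestry group).
ancestry = rng.integers(0, 4, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, ancestry, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy far above chance means the representation encodes the confound.
print("probe accuracy:", probe.score(X_te, y_te), "chance:", 1 / 4)
```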
And I feel like we actually are doing that through our interp techniques, almost by accident. I think we got reached out to very early on by these healthcare institutions, and none of us had a healthcare background.

Shawn Wang [00:50:49]: How did they even hear of you? A podcast.

Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.

Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.

Myra Deng [00:50:55]: Everyone can call us.

Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.

Myra Deng [00:50:59]: Yeah, they reached out. They were like, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models. Let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about...

Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills apply everywhere, right? It's just a general insight. Probably applies to finance too, I think, which would be fun for us. I don't know if you have anything to say there.

Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. It really runs the gamut.

Vibhu Sapra [00:51:40]: Yeah. Awesome. And for those who should reach out: you're obviously the experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play the steering stuff. More on the research side: are there ideal design partners, customers, things like that?

Myra Deng [00:52:03]: Yeah, I can talk about maybe the non-life-sciences side, and then I'm curious to hear from you on life sciences. We're looking for design partners across many domains. In language, anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in what we call pixel space. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.

Shawn Wang [00:52:43]: Just because you mentioned the keyword
C dans l'air, February 6, 2026 - Epstein: Russia wants to bring down Macron. On Friday, February 6, France accused Russia of being behind a disinformation operation designed to suggest that Emmanuel Macron was involved in the Epstein affair. According to Viginum, the agency responsible for countering foreign digital interference, an operation linked to the Russian network Storm-1516 was detected on Wednesday. The campaign rests on a piece of fake news spread on X, relayed in particular by a bogus article published on a site impersonating France Soir. The French Response account, attached to the Ministry of Foreign Affairs, flagged the post. A woman identifying herself by the first name Loetitia claimed on X that Emmanuel Macron had been "a frequent guest at Jeffrey Epstein's Paris residence." France Soir immediately denounced the "impersonation" of its identity. This manipulation attempt comes at a particularly sensitive moment. The Jeffrey Epstein affair sent fresh shockwaves after the release last Friday of nearly three million additional pages by the American justice system. Millions of emails reveal the relationships, of varying closeness, that the American financier maintained with numerous figures from the worlds of politics, culture, business, and technology, in the United States as well as in Europe. The names of Elon Musk, Bill Gates, Peter Thiel (PayPal co-founder and head of Palantir), LinkedIn founder Reid Hoffman, and Google co-founders Larry Page and Sergey Brin appear thousands of times in these documents. The exchanges concern investments, invitations to dinners or parties, thanks for services rendered, and discussions about reaching Epstein's Caribbean island by helicopter. These documents do not implicate any of these figures in Jeffrey Epstein's sexual crimes. They do, however, confirm the scale of the network the financier built, including after his first conviction in Florida in 2008 for "solicitation" of minors. They also raise questions about possible ties to Russia, particularly given the influence methods attributed to Russian services, such as "kompromat," the practice of collecting compromising information in order to exert pressure. Among the published emails are numerous references to Russian women, more than 1,000 mentions of Vladimir Putin, and at least two references to meetings between Jeffrey Epstein and the Russian president. Those meetings have not been confirmed, but Epstein did maintain exchanges with people close to the Russian government. The Polish Prime Minister has announced the opening of an investigation, while Moscow rejects the accusations and is exploiting the affair to denounce the "decadence" of Western elites. Against this backdrop, several questions arise: what exactly do we know about the Russian interference operation targeting Emmanuel Macron? What were Jeffrey Epstein's real ties to Russia? And how should we interpret the recent arrest in Gironde of four individuals, including two Chinese nationals, suspected of "delivering information to a foreign power" and of harming the "fundamental interests of the Nation"?
Were these engineers working for Beijing? Our experts: - Régis GENTÉ - Journalist specializing in international affairs, author of "Notre homme à Washington" - Nicole BACHARAN - Historian and political scientist, specialist on the United States, author of "Requiem pour le monde libre" - Marie JÉGO - Journalist, Le Monde, former Moscow correspondent - Dominique SEUX - Editorialist, Les Echos and France Inter
Nvidia Leads 4.3M Contract Surge as VIX Crumbles! The "reversal Friday" everyone was looking for finally arrived. In this episode, we track the massive options flow that dominated the tape on February 6, 2026. While Nvidia ($NVDA) and Palantir ($PLTR) caught fire, the "Amazonians" were left licking their wounds after a spending-heavy earnings report. We break down the Top 10 most active names, from the "Fruit Company" ($AAPL) to the wild 26% swing in MicroStrategy ($MSTR). If you want to know where the paper was moving as the market turned green, this is your report. Timestamps: 0:00 - Intro & Market Context 1:12 - #10 Meta & #9 Alphabet: The Red Outsiders 2:15 - #8 Palantir & #7 AMD: The Animal Spirits Return 3:30 - #6 Microsoft & #5 MSTR: The 26% Strategy Swing 5:10 - #4 Apple & #3 Tesla: Chasing the Par Strikes 6:45 - #2 Amazon: The $200B Capex Selloff 8:20 - #1 Nvidia: 4.3 Million Contracts & The 8% Rally 9:45 - Final Wrap & Weekly Review Connect with us: Access the data: TheHotOptionsReport.com Twitter/X: @Options
The tech rout Stateside looks set to continue into a third day, with giants such as Oracle, Palantir and Salesforce all suffering double-digit losses for the week. The gloom is contagious in the crypto space, with Bitcoin briefly plunging below the $61,000 mark. UK Prime Minister Keir Starmer offers an apology to victims of Jeffrey Epstein for his appointment of Peter Mandelson as ambassador to the U.S., despite being aware of his close ties to the late, convicted paedophile. We hear from BoE governor Andrew Bailey, who says the upheaval seen in Westminster is being felt globally. And in e-commerce news, Amazon posts its first quarterly miss in more than three years and announces $200bn in capex spending for 2026; shares plummeted 11 per cent in after-hours trading as a result. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Karen Bass plays the race card. Newsom celebrates a concrete bridge to nowhere. And California Dems might jungle-primary themselves out of the governor's race. It's that kind of week. Plus, James Lileks is here for this week's Top 10 Dumb Things. Watch this episode here. (00:00) - Part I (03:10) - - Karen Bass fire records scandal (09:13) - - California jungle primary: Two Republicans could advance (14:19) - - Gavin Newsom's $110B bullet train disaster (20:52) - - Palantir's 300 rockets vs. California's $11B bridge (23:12) - - Trump Rx portal (24:40) - - Part II (Top 10 Dumb Things)
In today's episode, financial journalists Anja Ettel and Lea Oetjen talk about the sell-off in the market's tech favorites, Amazon's new AI plans, and the latest setback for Novo Nordisk. Also discussed: Microsoft, Oracle, Palantir, Synopsys, Alphabet, FactSet Research, S&P Global, Rheinmetall, Renk, Hensoldt, Eli Lilly, Hims & Hers Health, Estée Lauder, Qiagen, Rational, and Nvidia. https://www.businessinsider.de/informationen/newsletter/alles-auf-aktien/ We welcome feedback at aaa@welt.de. You can find even more "Alles auf Aktien" at WELTplus and on Apple Podcasts, including all of the hosts' articles and the AAA newsletter. Here at WELT: https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html. Stock-market podcast disclaimer: the stocks and funds discussed in the podcast do not constitute specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on these thoughts or ideas. Listening tips: for anyone who wants to know even more, you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz". +++ Advertising +++ Want to learn more about our advertising partners? You can find all the info & discounts here! https://linktr.ee/alles_auf_aktien Legal notice: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html
It's an Emmajority Report Thursday on the Majority Report. On today's program: After the Clintons agree to testify before Congress about their relationship with Jeffrey Epstein, Donald Trump says he feels bad that "they're going after Bill Clinton," adding that "Bill Clinton got me, he understood me." The comments stand in contrast to Trump's remarks during the second presidential debate in 2016, when he said there was "no president in history who has been more abusive to women than Bill Clinton." Trump announces a shift in his mass deportation policy, saying that they no longer want to force themselves into cities where they are not wanted. He wants the mayors and governors to ask for his help and to say "please". As Trump shows vulnerability around ICE, instead of seizing on this opportunity, Hakeem Jeffries and Chuck Schumer use this moment to soften their demands around ICE reform. Dan Friedman, senior reporter in Mother Jones' DC Bureau, joins the program to discuss his piece in MJ, "Trump's War on History". In the Fun Half: Brandon Sutton and Matt Binder join the show. A released audio recording of a phone conversation between Larry Summers, Ehud Barak, and Jeffrey Epstein reveals Barak's anxiety about Israel's descent into a single state with an Arab majority. In another released audio recording, Jeffrey Epstein advises Ehud Barak to reach out to Peter Thiel to get a spot on Palantir's board. A year after that phone call, Palantir opened an office in Israel. An in-depth piece by Jake Lahut on Rep. Nancy Mace (R-SC) details her alleged heavy drinking. Mace says that she has a condition that prevents her from being able to drink alcohol, despite photos, videos, and accounts from former staffers that contradict the claim. All that and more. To connect and organize with your local ICE rapid response team visit ICERRT.com The Congress switchboard number is (202) 224-3121. You can use this number to connect with either the U.S. Senate or the House of Representatives. Follow us on TikTok here: https://www.tiktok.com/@majorityreportfm Check us out on Twitch here: https://www.twitch.tv/themajorityreport Find our Rumble stream here: https://rumble.com/user/majorityreport Check out our alt YouTube channel here: https://www.youtube.com/majorityreportlive Gift a Majority Report subscription here: https://fans.fm/majority/gift Subscribe to the AMQuickie newsletter here: https://am-quickie.ghost.io/ Join the Majority Report Discord! https://majoritydiscord.com/ Get all your MR merch at our store: https://shop.majorityreportradio.com/ Get the free Majority Report App!: https://majority.fm/app Go to https://JustCoffee.coop and use coupon code majority to get 10% off your purchase Check out today's sponsors: NAKED WINES: To get 6 bottles of wine for $39.99, head to NakedWines.com/MAJORITY and use code MAJORITY for both the code AND PASSWORD. RITUAL: Get 25% off during your first month. Visit ritual.com/MAJORITY. COZY EARTH: Go to cozyearth.com/MAJORITYREPORTBOGO for an exclusive deal only available Jan 25th - Feb 8th! SUNSET LAKE: Now through February 9th you can use the code VALENTINE26 to save 30% on all of Sunset Lake's gummies, chocolate fudge, and Farmer's Roast infused coffee beans at SunsetLakeCBD.com Follow the Majority Report crew on Twitter: @SamSeder @EmmaVigeland @MattLech On Instagram: @MrBryanVokey Check out Matt's show, Left Reckoning, on YouTube, and subscribe on Patreon!
https://www.patreon.com/leftreckoning Check out Matt Binder's YouTube channel: https://www.youtube.com/mattbinder Subscribe to Brandon's show The Discourse on Patreon! https://www.patreon.com/ExpandTheDiscourse Check out Ava Raiza's music here! https://avaraiza.bandcamp.com
In this episode, Scott Becker walks through the year to date performance of several closely followed stocks including Intel, Coca-Cola, Bank of America, UnitedHealth Group, and Palantir.
Welcome back to another Soul Politics episode! In this series, host Ahna Hendrix and special guest Lauryn, The Practical Psychic, explore the spiritual architecture behind the world's biggest headlines, utilizing the Akashic Records, psychic mediumship, astrology & more to help you understand the deeper spiritual purpose behind these challenging times.This isn't your typical political commentary—it's a higher perspective on the collective awakening happening across America.WHAT WE COVER:
Join Mark Longo for a high-octane breakdown of a "mostly blood red" Wednesday in the options market. While the broader indices took a hit, specific names saw explosive options activity following earnings and analyst upgrades. In this episode, we dive into the top 10 most active names on the tape, including: AMD's Post-Earnings Massacre: How the $220 puts turned into a "home run" as the stock plummeted 17%. Apple's Green Island: Breaking down the heavy call volume and Goldman Sachs upgrade that kept the fruit company soaring. MicroStrategy (MSTR) Gambles: With earnings on the horizon, we track the massive bet on $150 calls. Nvidia (NVDA) & Tesla (TSLA): Tracking the heavy volume at the $177.50 and $400 strikes as these heavyweights continue to dominate the Wednesday weekly expirations. We also cover unusual activity in SMCI, Amazon, Micron, Microsoft, and Palantir. Get the Data: Want to see the numbers we're crunching? Visit TheHotOptionsReport.com for unbeatable data at an unbeatable price.
This is the 4PM All-Local update on Thursday, Feb. 5.
Patrick Bet-David, Tom Ellsworth, Brandon Aceto and Jeff Snider break down the surge in Chinese billionaire birth tourism amid U.S. national security concerns, Senators Josh Hawley and Ted Cruz grilling Netflix over content and censorship, Disney's new CEO transition, Palantir's expanding ICE contracts and border enforcement role, and selloff pressure hitting gold, silver, and Bitcoin markets.------♟️ SALES LEADERSHIP SUMMIT 2026: https://bit.ly/45Evtj4
Today on Silicon Carne, we look back at the 2026 World Economic Forum in Davos, where the statements from Big Tech's bosses were explosive. So what did they tell us about the future of AI, jobs, and our civilization?
This week on Market Mondays, we break down the Investing Fact of the Week and answer the Trading Question of the Week, tackling some of the most asked questions in the market right now. We discuss whether this is the right time to buy Oracle, if gold and silver have finished falling, and share our end-of-year price target for Palantir. We also give a clear Bitcoin outlook, analyze what's next for Eli Lilly, and explain how the new Fed Chair could impact stocks, interest rates, and the broader economy.To close the episode, we dive into one of the most important investor questions: when to take profits without sabotaging long-term wealth. Plus, we sit down with Mikey Taylor to discuss entrepreneurship, private equity, financial freedom, and the role of investing in local communities. This episode is packed with actionable insights for both long-term investors and active traders looking to stay ahead of the market.Invest Fest Tickets: investfest.com#MarketMondays #Investing #StockMarket #Trading #Bitcoin #Crypto #OracleStock #Palantir #Gold #Silver #EliLilly #FedPolicy #InterestRates #TakingProfits #FinancialFreedom #PrivateEquity #Entrepreneurship #WealthBuilding #LongTermInvesting #PassiveIncomeSupport this podcast at — https://redcircle.com/marketmondays/donationsAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy
It's News Day Tuesday on the Majority Report. On today's program: Deputy Attorney General Todd Blanche is asked by Laura Ingraham whether there will be consequences for men who "partied with Epstein and engaged in relations with minors." Blanche responds, "It is not a crime to party with Jeffrey Epstein." Two days before those comments regarding Epstein were made, Todd Blanche tells George Stephanopoulos that his job is to execute the priorities of the president. Writer Jamie Holmes joins the program to discuss his book The Free and the Dead: The Untold Story of the Black Seminole Chief, the Indigenous Rebel, and America's Forgotten War. In the Fun Half: Andrew Schulz says that the recent ICE murders of Alex Pretti and Renee Good were a breaking point for him. AOC emphasizes the need to repeal this massive increase in DHS funding, which lacks guardrails and effectively serves as a blank check for Palantir to build a facial-recognition database on Americans. Rep. Pramila Jayapal (D-WA) says that Republicans didn't want to talk about Epstein until they needed a distraction from ICE. Milwaukee Bucks coach Doc Rivers is asked whether he stands by his comments calling the killing of Renee Good a murder, and he doubles down. Donald Trump calls into Dan Bongino's show and seems a little peppier than usual as he goes on a never-ending stream-of-consciousness rant. Chuck Schumer posts photos with María Corina Machado with captions expressing what an honor it was to spend time with the "leader of democracy in Venezuela". Colin Allred posts a bizarre video responding to alleged comments made by James Talarico. All that and more. To connect and organize with your local ICE rapid response team visit ICERRT.com The Congress switchboard number is (202) 224-3121. You can use this number to connect with either the U.S. Senate or the House of Representatives. Follow us on TikTok here: https://www.tiktok.com/@majorityreportfm Check us out on Twitch here: https://www.twitch.tv/themajorityreport Find our Rumble stream here: https://rumble.com/user/majorityreport Check out our alt YouTube channel here: https://www.youtube.com/majorityreportlive Gift a Majority Report subscription here: https://fans.fm/majority/gift Subscribe to the AMQuickie newsletter here: https://am-quickie.ghost.io/ Join the Majority Report Discord! https://majoritydiscord.com/ Get all your MR merch at our store: https://shop.majorityreportradio.com/ Get the free Majority Report App!: https://majority.fm/app Go to https://JustCoffee.coop and use coupon code majority to get 10% off your purchase Check out today's sponsors: ZOCDOC: Go to Zocdoc.com/MAJORITY and download the Zocdoc app to sign-up for FREE and book a top-rated doctor BLUELAND: Get 15% off your first order at blueland.com/majority SUNSET LAKE: Now through February 9th you can use the code VALENTINE26 to save 30% on all of Sunset Lake's gummies, chocolate fudge, and Farmer's Roast infused coffee beans at SunsetLakeCBD.com Follow the Majority Report crew on Twitter: @SamSeder @EmmaVigeland @MattLech On Instagram: @MrBryanVokey Check out Matt's show, Left Reckoning, on YouTube, and subscribe on Patreon! https://www.patreon.com/leftreckoning Check out Matt Binder's YouTube channel: https://www.youtube.com/mattbinder Subscribe to Brandon's show The Discourse on Patreon! https://www.patreon.com/ExpandTheDiscourse Check out Ava Raiza's music here! https://avaraiza.bandcamp.com
Join Downtown Josh Brown and Michael Batnick for another episode of What Are Your Thoughts and see what they have to say about: Palantir earnings, bombed-out stocks, the housing market, and much more! This episode is sponsored by Teucrium. Find out more at https://teucrium.com/agricultural-commodity-etfs Sign up for The Compound Newsletter and never miss out! Instagram: https://instagram.com/thecompoundnews Twitter: https://twitter.com/thecompoundnews LinkedIn: https://www.linkedin.com/company/the-compound-media/ TikTok: https://www.tiktok.com/@thecompoundnews Investing involves the risk of loss. This podcast is for informational purposes only and should not be regarded as personalized investment advice or relied upon for investment decisions. Michael Batnick and Josh Brown are employees of Ritholtz Wealth Management and may maintain positions in the securities discussed in this video. All opinions expressed by them are solely their own opinion and do not reflect the opinion of Ritholtz Wealth Management. The Compound Media, Incorporated, an affiliate of Ritholtz Wealth Management, receives payment from various entities for advertisements in affiliated podcasts, blogs and emails. Inclusion of such advertisements does not constitute or imply endorsement, sponsorship or recommendation thereof, or any affiliation therewith, by the Content Creator or by Ritholtz Wealth Management or any of its employees. For additional advertisement disclaimers see here https://ritholtzwealth.com/advertising-disclaimers. Investments in securities involve the risk of loss. Any mention of a particular security and related performance data is not a recommendation to buy or sell that security. The information provided on this website (including any information that may be accessed through this website) is not directed at any investor or category of investors and is provided solely as general information. Obviously nothing on this channel should be considered as personalized financial advice or a solicitation to buy or sell any securities. See our disclosures here: https://ritholtzwealth.com/podcast-youtube-disclosures/ Learn more about your ad choices. Visit megaphone.fm/adchoices
Krystal and Saagar discuss Epstein pushes Palantir to Israeli PM, Prince Andrew shocking images, Bari Weiss connections, Pizza codewords in files. NOTE: After recording, Peter Attia released the following statement: https://x.com/PeterAttiaMD/status/2018350892395774116?s=20 To become a Breaking Points Premium Member and watch/listen to the show AD FREE, uncut and 1 hour early visit: www.breakingpoints.comMerch Store: https://shop.breakingpoints.com/See omnystudio.com/listener for privacy information.