Podcasts about Flash

  • 16,077 PODCASTS
  • 96,733 EPISODES
  • 49m AVG DURATION
  • 7 DAILY NEW EPISODES
  • Feb 16, 2026 LATEST

POPULARITY (chart, 2019–2026)


    Latest podcast episodes about Flash

    Deck The Hallmark
    Captain America: The First Avenger

Feb 16, 2026 · 37:37


It's Marvel Monday, and today it's Captain America's turn!

ABOUT CAPTAIN AMERICA: THE FIRST AVENGER
Steve Rogers, a rejected military soldier, transforms into Captain America after taking a dose of a "Super-Soldier serum." But being Captain America comes at a price as he attempts to take down a warmonger and a terrorist organization.

AIR DATE & NETWORK FOR CAPTAIN AMERICA: THE FIRST AVENGER
July 22, 2011 | Theatrical Release

CAST & CREW OF CAPTAIN AMERICA: THE FIRST AVENGER
Chris Evans as Captain America/Steve Rogers
Hugo Weaving as Johann Schmidt/Red Skull
Samuel L. Jackson as Nick Fury
Hayley Atwell as Peggy Carter
Sebastian Stan as James Buchanan 'Bucky' Barnes

BRAN'S SYNOPSIS
The movie kicks off with some scientists in the Arctic finding an old aircraft with someone frozen inside along with a circular shield. WHO COULD IT BEEEEEE?

Flash back to March 1942, during World War II. Nazi dude and Hydra leader Johann Schmidt steals a mysterious glowing cube called the Tesseract, which possesses untold godly powers.

In New York City, we meet little Steve Rogers. All Steve wants more than anything is to be in the Army, but he's rejected due to being a tiny boy. Dr. Abraham Erskine overhears Steve talking to his buddy Bucky Barnes about how badly he wants to serve his country, so he allows Rogers to enlist.

What Steve doesn't know is that Dr. Erskine is interested in Steve for something called the "super-soldier" experiment, run by Erskine along with British agent Peggy Carter. Once Steve selflessly jumps on a grenade as part of a test, they know he's their guy. Erskine tells Rogers that Schmidt once took a prototype version of the super-soldier formula that gave him superhuman strength but painfully changed his appearance. So, ya know, keep that in mind.

It's lab time. Steve gets hooked up to this equipment, injected with all sorts of stuff, and then put into this chamber. He's yelling and screaming but tells them to keep going. Once it's over, Steve comes out of the chamber and is frickin' jacked.

Turns out Schmidt sent an assassin to kill Erskine, and the assassin gets away in a car. But Steve is now a super soldier, so he just runs him down. Before Steve can question him, the assassin kills himself with a cyanide capsule and destroys the formula while he's at it.

Steve doesn't get to super-soldier much. Instead, he's sent on a tour as "Captain America" to sing and dance and promote the war while scientists study his blood and attempt to reverse-engineer the formula. But when Rogers finds out that Bucky is MIA, he demands to fly behind enemy lines to find him. Turns out it was Schmidt all along. Steve confronts Schmidt. Schmidt's mask is taken off to reveal he is red. I suggest we call him "Red Skull."

Steve, Bucky, and some other freed prisoners form a band... of brothers... called the Howling Commandos. Steve gets a new suit in the process and potentially a new gal, 'cause the sparks between him and Peggy Carter are off the charts!

Using information extracted from Zola (Red Skull's little henchman), the final Hydra stronghold is located, and Rogers leads an attack to stop Schmidt from doing all the bad things he wants to do. Right before Steve climbs aboard Schmidt's super-bomber, he and Peggy kiss big ones!

He hops on the plane just before it takes off, and they fight. The Tesseract is freed from its container, and Red Skull uses it to open a portal. The Tesseract then burns through the plane and falls into the ocean. Steve knows he has to go after it, so he radios Peggy to say goodbye and then crashes into the Arctic. Everyone assumes Steve Rogers died; they ultimately find only the Tesseract on the ocean floor.

Steve wakes up in a 1940s-style hospital room. He hears a radio broadcast of a baseball game that he attended in 1941 and becomes immediately suspicious. So he breaks out of his room and runs into Times Square, blown away by all the screeeens! Nick Fury shows up and tells him that he has been asleep for almost 70 years.

In a post-credits scene, we basically get an Avengers trailer. Fury approaches Rogers and proposes a mission with worldwide ramifications.

Watch the show on YouTube - www.deckthehallmark.com/youtube
Interested in advertising on the show? Email bran@deckthehallmark.com
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    FaceCulture: Giving You The People Behind The Music
    Cigarettes After Sex - Greg Gonzalez interview (2017)

Feb 15, 2026 · 23:04


Video interview with frontman Greg Gonzalez of the American ambient pop group Cigarettes After Sex. FaceCulture spoke with Greg about movie soundtracks, songs filling a scene, developing his songwriting, being a control freak, spontaneity, the EP I., social media, easy listening, Françoise Hardy, his debut album, writing about love, "Flash," recurring imagery, conceptual artwork, writing about real people, and a lot more! (02/05/2017) Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Peso Pluma
    Biography Flash: Peso Pluma Signs Adidas Deal and Announces DINASTIA Arena Tour Dates

Feb 15, 2026 · 2:16 · Transcription Available


Peso Pluma Biography Flash, a weekly biography. Hey babes, it's your girl Roxie Rush here on Peso Pluma, and quick note: I'm an AI host, which means I scour the globe for scoops faster than you can say corrido tumbado, serving you non-stop verified tea without the drama. Buckle up for the hottest flash on our king of Música Mexicana.

Just yesterday, February 13, Peso Pluma dropped a bombshell partnership with adidas Originals, according to Sole Retriever and Sneaker News. Picture this: a joint Instagram post with Double P in a sleek black track jacket, getting three iconic stripes shaved right into his eyebrow by a barber. The caption? "Three Stripes and Double P." Pure fire. His label Double P Records hit the comments with black and burning-heart emojis. No shoe collabs announced yet, but honey, it's coming, marking his leap into global fashion like Bad Bunny before him. This could redefine his brand long-term, blending street style with superstar swagger.

Tour mania is exploding too. Live Nation reports he's bringing the DINASTIA by Peso Pluma and Friends Tour to T-Mobile Arena in Vegas on March 13, with immersive production and rotating guests. Climate Pledge Arena confirms a kickoff in Seattle March 1, and Chase Center has him locked for San Francisco March 3. Tickets dropped January 21 via AXS and Ticketmaster, riding the wave of his album DINASTIA, which topped Billboard's Top Latin Albums, Regional Mexican, Spotify Global, you name it. No public appearances or fresh social buzz in the last 24 hours, but this tour cements his arena-filling dominance.

He's the most-streamed Latin force entering 2026, per arena pressers, owning music, fashion, and culture. No unconfirmed rumors here, just solid wins.

Thanks for tuning in, lovers. Subscribe to never miss a Peso Pluma update, and search Biography Flash for more epic bios. Chao!

And that is it for today. Make sure you hit the subscribe button and never miss an update on Peso Pluma. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Sam Bankman-Fried - Audio Biography
    Biography Flash: Sam Bankman-Fried Files for New Trial While Tweeting Trump from Prison

Feb 15, 2026 · 2:55 · Transcription Available


Sam Bankman-Fried Biography Flash, a weekly biography. Hey folks, it's Marc Ellery here on Biography Flash, and yeah, I'm an AI-powered host, which means I never spill coffee mid-rant or butcher a name like I did with that one Silicon Valley mogul last week, but I still bring the unfiltered truth with a side of sarcasm. Today's flash is on Sam Bankman-Fried, the jailed FTX wunderkind who's turning his prison cell into a Twitter war room.

In the past few days, SBF has ramped up his pro se push for a new trial, filing motions in Manhattan federal court around February 10th, as reported by Bitcoin Magazine and Investing.com. He's arguing prosecutors relied on false testimony, hid evidence of FTX's solvency, and rushed the bankruptcy without his okay; think $136 billion in assets by late-2025 valuations, per his X threads cited by Cryptopolitan. No major headlines in the last 24 hours, but BPInsights noted yesterday he's claiming the case twisted facts while he serves that 25-year fraud sentence in California.

On social media, he's gone full Hail Mary, tweeting via proxies that he became a Republican in 2022 because Biden bungled crypto and COVID, tagging Trump like a desperate fanboy, according to Protos. Polymarket odds for a pardon hit 22% this week, though it's thin at $17k liquidity. He's even joined the CFTC Innovation Advisory Committee and hyped FTX 2.0, sparking a joke token surge, while supporters dream of a crypto comeback. No public appearances or business moves; he's locked up, remember? But this media blitz feels like a scripted prison-escape plan from his old notes, mocking woke agendas and pitching Tucker Carlson chats.

It's classic SBF: eccentric genius or transparent grifter? Either way, it's biographical gold, potentially rewriting his fall from a $32 billion empire to bunkmate of Diddy.

Thanks for listening, hit subscribe to never miss an update on Sam Bankman-Fried, and search Biography Flash for more great biographies. Catch you next time.

And that is it for today. Make sure you hit the subscribe button and never miss an update on Sam Bankman-Fried. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Mr. Beast
    Biography Flash: MrBeast's $2.6 Billion Empire and Beast Games Million Dollar Betrayal Drama Explodes

Feb 14, 2026 · 2:43 · Transcription Available


Mr. Beast Biography Flash, a weekly biography. Hey there, gorgeous listeners, it's your girl Roxie Rush here, your AI-powered gossip whirlwind, and thank goodness I'm AI, because I never sleep, scouring the web 24/7 to spill the freshest tea without missing a beat. Let's dive into MrBeast mania over the past few days, because Jimmy Donaldson is serving empire-level drama hotter than a Super Bowl halftime show.

Just days ago, on February 11, Beast Games Season 2 Episode 8 exploded on Amazon Prime Video, titled "Would You Steal 1 Million Dollars?" Times of India reports ten finalists faced a trust-shattering dilemma around a shared million-buck pot, with Nick snagging $250,000 for himself and Monika Ronk pulling a villain arc by secretly selling her game-changing coin to Jimmy for $500,000 cash, leaving everyone buried alive in suspense. The LA Times details the coffin chaos, only two to three feet deep but coffin-tight, while Jimmy hyped it on X beforehand, teasing the almost $5 million winner. He told the LA Times this season's storytelling crushes Season 1's, filmed in Saudi Arabia's massive studios for Middle East fans, with Episode 9 dropping February 18 and a $5 million finale on the 25th that he calls his greatest content ever. Ratings are mixed, IMDb giving Episode 7 a meh 3.4 out of 10, but viewership's still beast-mode after Season 1's 50 million in 25 days.

Business-wise, Storyboard18 pegs his 2026 net worth at a jaw-dropping $2.6 billion, fueled by Beast Industries snagging Gen Z banking app Step and Feastables crushing it, though he's borrowing cash hand over fist to reinvest in mega-productions, debunking broke rumors from a parody X post that racked up 5 million views. Fresh off Super Bowl 60 on February 8, ABC News' GMA says he dropped hints on their Monday show for the unsolved Salesforce ad puzzle offering $1 million to the first cracker, with 60 million site hits already and no winners as of Sunday night, urging fans to hunt Super Bowl photo numbers.

He's clapping back at Rockefeller conspiracy nuts on socials too, all while grinding non-stop. Whew, what a ride!

Thanks for tuning in, babes. Subscribe now to never miss a MrBeast update, and search Biography Flash for more bio gold. Muah!

And that is it for today. Make sure you hit the subscribe button and never miss an update on Mr. Beast. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/4mMClBv. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Justin Bieber - Audio Biography
    Justin Bieber Biography Flash: Shirtless Grammy Comeback, Baby Jack Blues Updates & 10 Million Coachella Payday

Feb 14, 2026 · 2:16 · Transcription Available


Justin Bieber Biography Flash, a weekly biography. Hey Beliebers, it's your girl Roxie Rush here on Justin Bieber Biography Flash, and hey, I'm an AI, which means I scour the web faster than you can say "Sorry," delivering the freshest scoops without missing a beat, perfect for your daily glam fix.

Justin Bieber just owned the Grammys stage this past Sunday, February 1st, strutting out shirtless in nothing but boxers, socks, and a purple guitar for a raw, soul-baring take on his nominated track "Yukon" from the R&B stunner Swag. The LA Times reports the crowd lost it as the 31-year-old dad made his first Grammy performance in four years, post-Justice-tour cancellation and health scares like Ramsay Hunt. He's up for Album of the Year, Pop Vocal Album, and more, though no wins yet; E! News confirmed the hype back on January 28th.

Fresh off that, Hailey Bieber spilled on Friday, February 13th, that their 17-month-old son Jack Blues is living his absolute best life chilling at home with dad, per The National Desk. Pure family vibes amid the spotlight. Buzz is electric for his massive Coachella headline gig this April, his first US show since 2022, reportedly scoring him a whopping $10 million payday, no agent needed, according to industry chatter. Fans are freaking over 2026 tour rumors: Ad-Hoc News notes studio sightings in LA and London, venue holds for a major pop act, and label meetings hinting at a full arena run tied to new music, though nothing official yet, just hardcore Beliebers plotting friendship bracelets already.

No fresh social blasts from Justin in the last 24 hours, but the Grammy glow-up screams long-term comeback-king energy. He's padding that $300 million net worth with Skylrk fashion drops and past catalog sales, Social Life Magazine says. Whew, what a whirlwind!

Thanks for tuning in, gorgeous. Subscribe now to never miss a Justin update, and search Biography Flash for more epic bios. Muah!

And that is it for today. Make sure you hit the subscribe button and never miss an update on Justin Bieber. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Es Cine
Noticias Flash: From the series about Marta del Castillo to the kidnapping of Quini

Feb 14, 2026 · 8:14


Sergio García and Yadira Márquez, from SY Cinema, bring film news, including the series about Marta del Castillo, which has the support of her family.

    Justin Timberlake - Audio Biography
    Biography Flash: Justin Timberlake's 250 Million Empire Beyond Music - From DWI Drama to Business Mogul Status

Feb 14, 2026 · 2:57 · Transcription Available


Justin Timberlake Biography Flash, a weekly biography. Hey there, gorgeous! I'm your host Roxie Rush, and I'm an AI, which means I can dig through mountains of celebrity intel in seconds without needing my coffee to kick in first. Pretty fabulous, right? Let's dive into what's been happening with the one, the only, Justin Timberlake!

So here's the thing: the search results I'm looking at are painting a picture of JT's world that's honestly more retrospective than breaking news. The most recent major development we're tracking is his Forget Tomorrow World Tour, which wrapped up in December after grossing over one hundred forty million in ticket sales. But honey, that tour became absolutely legendary for all the wrong reasons when he got arrested in Sag Harbor back in June twenty twenty-four for a DWI. Talk about a headline that just won't quit!

Now, moving into twenty twenty-six, what's really interesting is how JT's pivoting his entire empire. We're seeing him continue building his investment portfolio with some seriously strategic moves. According to recent reports, he made a major investment in Greyson Clothiers just last month as part of their Series A funding. The man's clearly keeping his fingers in multiple pies, from the music business to tech ventures to literally fighting food waste through his stake in The Ugly Company, which is actually genius branding for upcycled fruit products.

His net worth is sitting pretty at two hundred fifty million dollars, and financial experts are really impressed by how he's diversified beyond just being a pop star. We're talking everything from his Sauza tequila line to minority ownership in the Memphis Grizzlies with his wife Jessica Biel. The catalog sale he pulled off back in twenty twenty-two, one hundred million dollars to Hipgnosis Songs for his entire back catalog, was basically a masterclass in financial strategy.

What's fascinating is how JT's maintaining industry legitimacy despite the reputational challenges. He performed at the Recording Academy Honors in January twenty twenty-six paying tribute to Pharrell Williams, which shows he's still very much in the game and respected by his peers.

The real story here isn't one single headline; it's the systematic way he's building what might be one of the smartest entertainment empires ever constructed. From boy band to solo superstar to legitimate business mogul, this guy's basically written the instruction manual.

Thanks so much for hanging with me today, gorgeous! Make sure you subscribe so you never miss a Biography Flash episode, and search that term everywhere to discover more fabulous life stories. Stay sparkling!

And that is it for today. Make sure you hit the subscribe button and never miss an update on Justin Timberlake. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Cristiano Ronaldo Audio Biography
    Cristiano Ronaldo Biography Flash: CR7 Ends Al-Nassr Boycott After Transfer Drama Power Play

Feb 14, 2026 · 2:42 · Transcription Available


Cristiano Ronaldo Biography Flash, a weekly biography. Hey folks, Tyler "Tye" Morgan here, your AI-powered host for Cristiano Ronaldo Biography Flash. Yeah, I'm an AI, and that's a good thing, 'cause I crunch every headline faster than CR7 sprints past defenders, no sleep required, ha. Let's dive into the past few days' chaos that's got the soccer world buzzing.

Cristiano Ronaldo, the 41-year-old goal machine, just ended his three-game boycott at Al-Nassr after clashing with the Public Investment Fund over January transfers, frustrated they let Karim Benzema bolt to rivals Al-Hilal while his squad starved for signings, Fox Sports reports. He skipped league wins and Wednesday's AFC Champions League Two victory over Arkadag, where Abdullah Al-Hamdan netted the lone goal, per ESPN. But he's back, locked in training, and Fabrizio Romano confirms he'll likely suit up today against Al-Fateh, which could vault Al-Nassr to second, one point off leaders Al-Hilal.

Social media lit up: Ronaldo posted photos in a sleek Gucci tee, flashing a peace-or-victory emoji, captioned "Locked in," on Instagram and X, Marca and AS detail, with fans crowning him GOAT and a king aging backwards. Al-Nassr's feeds hyped his return Friday. Toni Kroos backed him hard, slamming the Saudi league as a Ronaldo creation they're now disrespecting, Goal quotes. The league fired back, insisting no one dictates beyond their club.

No public appearances or business moves popped, but this power play, which Portuguese outlet A Bola calls CR7 winning the battle after salary fixes, hints at his iron will, maybe eyeing a summer exit with that 50-million-euro release clause, Sky Sports News speculates. He's at 961 career goals, chasing 1,000. Teammate Nawaf Al-Aqidi might dip for playtime, a potential blow, World Soccer Talk notes; unconfirmed for now.

In the last 24 hours, major headlines scream "Ronaldo returns, protest over," from Fox Sports and Goal.

Thanks for tuning in, listeners. Subscribe to never miss an update on Cristiano Ronaldo, and search Biography Flash for more great biographies. Catch you next time.

And that is it for today. Make sure you hit the subscribe button and never miss an update on Cristiano Ronaldo. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Shakira
    Biography Flash: Shakira Announces Historic Free Copacabana Beach Concert for May 2026

Feb 14, 2026 · 2:24 · Transcription Available


Shakira Biography Flash, a weekly biography. Hey there, fabulous listeners, it's your AI gossip guru Roxie Rush here for Biography Flash, and darling, being powered by AI means I scour the globe in seconds for the hottest, freshest scoops no human could match. Oh yeah, we're talking lightning-fast truth bombs.

Shakira, our hips-don't-lie queen, just exploded with the mother of all announcements: she's headlining a massive free concert on Rio de Janeiro's Copacabana Beach on May 2, 2026, at 9:45 p.m., confirmed straight from Rio City Hall on Wednesday, February 11. O Globo newspaper's Lauro Jardim column broke it first, revealing the contract was inked Tuesday with production powerhouse Bonus Track after epic negotiations. Picture this: sands shaking to "Estoy Aquí" and "Whenever, Wherever," following Lady Gaga's legendary freebie there. Rio de Janeiro Secreto and the Boca Raton Tribune are buzzing too, calling it historic, with Shakira leading the Todo Mundo no Rio party, outshining speculated names like Rihanna, Beyoncé, Justin Bieber, and Britney. Whoa, the mystery unraveled into pure Shakira magic.

This gem's got serious biographical weight, folks. Her endless love affair with Brazil shines again, after kicking off her Las Mujeres Ya No Lloran tour in Rio last year, post-eight-year hiatus, dropping new hits that'll echo for generations. No fresh public sightings or social buzz in the last 24 hours, but this Copacabana coup? It's tour-de-force level, cementing her as Brazil's eternal fave. Business-wise, it's a blockbuster move amid her global tour domination, pure gold for the bio books.

Whew, Roxie had to spill it fast before the next wave hits. Stay glued. Thanks for tuning in, pretties. Subscribe to never miss an update on Shakira, and search Biography Flash for more great biographies. Kisses.

And that is it for today. Make sure you hit the subscribe button and never miss an update on Shakira. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Harold's Old Time Radio
    Lassie 48-07-04 Miss Flash Pointer

Feb 14, 2026 · 15:14 · Transcription Available


    Post Malone
    Biography Flash: Post Malone's Big Ass Stadium Tour Part 2 Explodes with Jelly Roll Across North America 2026

Feb 14, 2026 · 2:30 · Transcription Available


Post Malone Biography Flash, a weekly biography. Hey darlings, it's your girl Roxie Rush here for Biography Flash, and guess what? I'm an AI whipped up to chase the hottest scoops faster than you can say "sold-out stadium," which means I never sleep, so you get the tea piping hot, 24/7, no drama!

Buckle up, Posties, because Austin Richard Post, aka our tattooed dreamboat Post Malone, is on a tear with The Big Ass Stadium Tour Part 2 alongside Jelly Roll, and it's exploding everywhere! Axios Cleveland dropped that Huntington Bank Field is locked for June 25, their third massive 2026 gig there after Zach Bryan and Foo Fighters. MLB.com's Royals site announced Kauffman Stadium in Kansas City on July 15, the tour's only MLB stop, with Carter Faith opening and tickets flying since February 10. Dailyfly spilled the full North American blitz: kicking off April 10 at Tortuga Fest, hitting Stagecoach, then stadiums like Razorback in Fayetteville and Tiger in Baton Rouge, and wrapping July 28 at Rice-Eccles in Salt Lake, building on last year's million-fan, $170-million smash. CTV News just buzzed February 10 about a second Edmonton show at Commonwealth Stadium, now July 24 and 25, double the Post-Jelly magic! SuperTalk FM and Ole Miss news confirm Vaught-Hemingway on June 5, Baylor's McLane Stadium too, all presales popping off.

Post keeps gushing he's just happier onstage, per CTCD.edu, as fans obsess over his 2026 weight-loss glow-up, caught on cam, pure vibe shift! Fresh off the Grammys, where Jelly snagged three awards and Post shredded "War Pigs" in an Ozzy tribute with Slash and Guns N' Roses vets, per Dailyfly. No major headlines in the last 24 hours, but this tour's biographical gold, cementing his country-rap king status long-term.

Whew, Roxie's rushing to the next party. Thanks for vibing, listener loves! Subscribe to never miss a Post update, and search Biography Flash for more epic bios! Mwah!

And that is it for today. Make sure you hit the subscribe button and never miss an update on Post Malone. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    Flash Deportes
    Flash Deportes | 11:00

Feb 14, 2026 · 3:22


The latest sports news for this Saturday.

    Ed Sheeran
    Ed Sheeran Biography Flash: Loop Tour Sydney Triumph and Beoga Reunion Rocks 60,000 Fans

Feb 14, 2026 · 2:28 · Transcription Available


Ed Sheeran Biography Flash, a weekly biography. Hey there, fabulous friends, Roxie Rush here, your AI gossip whirlwind powered by cutting-edge smarts to scoop the tea faster than you can say "sold-out stadium." As your groovy digital diva, I never sleep, so you get the freshest Ed Sheeran flashes piping hot.

Picture this: just two days ago, on February 12, Ed crashed Beoga's Sydney gig at Paddington RSL like the ultimate hype man, belting "Shape of You" with his Irish folk fam from the Divide days. "Galway Girl" vibes reborn; Spin South West reports fans lost their minds in a full-circle frenzy. Then boom, Friday the 13th at Accor Stadium, Ed's Loop Tour opener had 60,000 going loopy with three hours of hits, fireworks, and that cheeky Ipswich Town jersey nod to his footie-owner roots. The AU Review raves he looped medleys like "Love Yourself," for Bieber, and "Eastside," still the everyman superstar soundtracking weddings worldwide.

Over in rainy Christchurch on January 24, The Spinoff dished how New Zealand downpours kaputted his guitars mid-set. He powered through poncho crowds with Beoga for "Nancy Mulligan," earpiece soggy, hood up like a drenched Sith Lord, proving his grit. The Loop Tour, backing his 2025 Play album, kicked off January 16 in Auckland and rolls through triple Sydney shows today, February 14, then Brisbane February 20-22, Melbourne, and Adelaide, before North America's massive stadium domination till November. Wikipedia and edsheeran.com confirm over 2.5 million tickets gone, with "Azizam" and "Sapphire" lighting up sets.

No fresh 24-hour headlines yet, but he's owning Down Under, blending loop-pedal wizardry, fan requests, and collabs that scream biographical gold, his resilience and rootsy charm cementing legend status.

Thanks for tuning into Ed Sheeran Biography Flash, dolls. Subscribe now to never miss a beat on Ed, and search Biography Flash for more juicy bios that'll have you buzzing. Catch you next scoop.

And that is it for today. Make sure you hit the subscribe button and never miss an update on Ed Sheeran. Thanks for listening. This has been a Quiet Please production. Get the best deals: https://amzn.to/42YoQGI. This content was created in partnership and with the help of Artificial Intelligence (AI).

    L'info de la Loire
Loire news at 8 a.m., Saturday, February 14 (St Etienne, Roanne, Forez...)

Feb 14, 2026 · 2:24


Listen to this Saturday, February 14, 8 a.m. news with the flash briefing from the ACTIV newsroom. News from Saint-Etienne, Saint-Chamond, Roanne, Firminy, Montbrison, Rive-de-Gier, Saint-Just-Saint-Rambert, Le Chambon-Feugerolles, Riorges, Andrézieux-Bouthéon, Roche-la-Molière, Veauche, Unieux, Feurs, Villars, Sorbiers, La Ricamarie, Mably, Le Coteau, La Talaudière... Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

    L'info de la Loire
Loire news at 12 p.m., Saturday, February 14 (St Etienne, Roanne, Forez...)

Feb 14, 2026 · 2:27


Listen to this Saturday, February 14, 12 p.m. news with the flash briefing from the ACTIV newsroom. News from Saint-Etienne, Saint-Chamond, Roanne, Firminy, Montbrison, Rive-de-Gier, Saint-Just-Saint-Rambert, Le Chambon-Feugerolles, Riorges, Andrézieux-Bouthéon, Roche-la-Molière, Veauche, Unieux, Feurs, Villars, Sorbiers, La Ricamarie, Mably, Le Coteau, La Talaudière... Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

    PseudoPod
    PseudoPod 1016: Flash on the Borderlands LXXVI (76): Illume the Kingdom of the Drowned

Feb 13, 2026 · 36:09


“Left by the Tide” first appeared in Weird Tales, March 1929. “Sirens Chasing Sirens” first appeared in The Gateway Review in 2018. “Dread and Faith” was originally published in February 2025 in Blood Lust from Black Hare Press. Also mentioned: The Go-Between by L. P. Hartley; Forteana; Jim Kristofic's novels Coyote Stranger and The Sundown Killers. From the author of “Sirens Chasing… Source

    Tech Café
Historical documentaries by AI

Feb 13, 2026 · 74:23


AI models of the week, the RAM crisis, and the automation of jobs. Discussions on Waymo and on AI used to generate feature-length historical documentaries. Support me on Patreon. Find me on YouTube. Chat with us on Discord. Models of the week: Skintoken, Kling 3, and Lucy 2. World models for cars. Blunders from Darren? AI arrives on TV… AI judges at the Olympics. To measure penises? Are the apocalypse rumors a bit exaggerated? What if your friend had a backdoor? GitHub: AI applies the pressure… negative pressure. Metal Gears: Western Digital disks gone over with a fine-tooth comb… Plasmon what? And where? Tic and TACC: there are 64 bits and 64 bits, let's be precise. DDRAMA: the crisis reaches phones. Where are the reinforcements? Temu du genou: JDD nabbed like a millionaire. Flash is still alive… sort of. Participants: a show prepared by Guillaume Poggiaspalla, presented by Guillaume Vendé.

    Le journal France Bleu Poitou
The 11 a.m. news flash of Friday, February 13, 2026

Feb 13, 2026 · 2:19


Duration: 00:02:19 - The 11 a.m. news flash. Enjoying this podcast? To listen to every other episode without limits, go to Radio France.

    Le journal France Bleu Poitou
The 10 a.m. news flash of Friday, February 13, 2026

Feb 13, 2026 · 2:11


Duration: 00:02:11 - The 10 a.m. news flash. Enjoying this podcast? To listen to every other episode without limits, go to Radio France.

    L'info de la Loire
Loire news at 8 a.m., Friday, February 13 (St Etienne, Roanne, Forez...)

Feb 13, 2026 · 3:32


Listen to this Friday, February 13, 8 a.m. news with the flash briefing from the ACTIV newsroom. News from Saint-Etienne, Saint-Chamond, Roanne, Firminy, Montbrison, Rive-de-Gier, Saint-Just-Saint-Rambert, Le Chambon-Feugerolles, Riorges, Andrézieux-Bouthéon, Roche-la-Molière, Veauche, Unieux, Feurs, Villars, Sorbiers, La Ricamarie, Mably, Le Coteau, La Talaudière... Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

    Flash Deportes
    Flash Deportes | 23:00

Feb 13, 2026 · 3:41


Listen to the latest sports news with the Cadena SER team.

    Flash Deportes
Flash Deportes | 15:00

Feb 13, 2026 · 3:31


The latest sports news with the Cadena SER team.

    Behind the Steel Curtain: for Pittsburgh Steelers fans
    The Steelers Retro Show: Legendary gunslingers duel at the Heinz Field Corral

Feb 12, 2026 · 34:24


Our journey in the SCN DeLorean to Steeler yesteryear begins in a time when Paranormal Activity was tops with moviegoers and "Down" by Jay Sean featuring Lil Wayne was the hottest song on the radio. Meanwhile, the defending champion Steelers were starting off hot against an undefeated visitor from the NFC North. Welcome to October 25, 2009. Flash back to an awesome classic on the Steelers Retro Show and join SCN's Tony Defeo and Bryan Anthony Davis as they go back in time and relive another memorable game. This time it's the Steelers hosting Brett Favre and the Minnesota Vikings at Heinz Field. Learn more about your ad choices. Visit megaphone.fm/adchoices

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter-weight models that are, you know, much more cost-effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially, when you worked on the CPU side, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, they were like, you need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the sort of slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader uses. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower-latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest-size model. Yeah.
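"Owning the Pareto frontier" has a precise meaning here: keep only the models that no other model beats on both speed and quality. A toy sketch of that selection in Python, with entirely made-up latency and score numbers (none of these figures come from the episode):

```python
# (latency in seconds, quality score) per model -- illustrative numbers only
models = {
    "flash-lite": (0.3, 62.0),
    "flash":      (0.8, 74.0),
    "pro":        (3.5, 86.0),
    "slow-big":   (4.0, 80.0),  # dominated: slower AND worse than "pro"
}

def pareto_frontier(points: dict[str, tuple[float, float]]) -> list[str]:
    """Keep entries no other entry beats on both axes (lower latency, higher score)."""
    return [
        name for name, (lat, score) in points.items()
        if not any(
            other_lat <= lat and other_score >= score and (other_lat, other_score) != (lat, score)
            for other_lat, other_score in points.values()
        )
    ]

print(pareto_frontier(models))  # ['flash-lite', 'flash', 'pro']
```

A lab "owns" the frontier when its models occupy every useful point on that curve, which is why the conversation keeps pairing a frontier model with a fast, cheap one.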
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But, like, I'm curious how you think about the cycle of these ideas, even, like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but, like, in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve. And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is, RL basically spikes models in a certain part of the distribution. And then you have to sort of... well, you can spike models, but usually sometimes it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. And I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think, like, that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed: you can get very close to your largest model performance with distillation approaches.
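The "logits instead of hard labels" idea Jeff describes is the core of the classic distillation recipe. A minimal sketch of the soft-label loss in PyTorch, assuming a teacher and student that emit logits over the same classes (the temperature and mixing weight are illustrative defaults, not values from the episode):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      hard_labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend soft teacher supervision with the ordinary hard-label loss."""
    # Softening with a temperature preserves the teacher's relative preferences
    # among wrong classes, which one-hot labels throw away.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * temperature**2
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

This is why "you have to have the frontier model" first: the teacher's logits are the training signal the small model could not produce for itself.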
And that seems to be, you know, a nice sweet spot for a lot of people, because it enables us to, for multiple Gemini generations now, make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that, like, the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think, you know, it's an important set of capabilities to have. And also inference-time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics-wise, because Flash is so economical, like, you can use it for everything. Like, it's in Gmail now. It's in YouTube. Like, it's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, various AI Mode and AI Overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now, not just "write me a for loop," but "write me a whole software package to do X or Y or Z." And so having low-latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context kinds of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for, like, the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about, like, the capability: in certain tasks, the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to, like, keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like, I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment?" or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to sort of make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or, like, test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you keep pushing the team internally? Like, is this what we're building towards? Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility, where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's sort of either the case that you've now achieved that capability, or there's also the issue of leakage of public data or very related kinds of data being in your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on that are more specialized for this particular kind of task?
Do we need, um, you know, a bunch of architectural improvements or some sort of model capability improvements? You know, what would help make that better?

Shawn Wang [00:12:53]: Is there such an example, where a benchmark inspired an architectural improvement? Like, uh, I'm just kind of jumping on that.

Jeff Dean [00:13:02]: Uh, I mean, I think some of the long-context capability of the Gemini models, that came, I guess, first in 1.5, really were about looking at, okay, we want to have, um, you know...

Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.

Jeff Dean [00:13:23]: I mean, I think, um, as you say, that needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something. We don't actually have, you know, much larger than 128K in most of these benchmarks these days, or 2 million or something; we're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple-hour-long videos in the context and then actually being able to make use of that is useful. The areas to explore there are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic "take all this content and produce this kind of answer" tests from a long context, which better assess what it is people really want to do with long context, which is not just, you know, "can you tell me the product number for this particular thing?"

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is: you have a benchmark, you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where, yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today, right? Like, I think what you would really want is: can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that with, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way?
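The "quadratic" constraint Jeff invokes is easy to check with a back-of-the-envelope calculation in the spirit of his famous latency-numbers exercises. A sketch (the head count and head dimension are illustrative choices; the formula is just the standard cost of forming the attention score matrix):

```python
def attention_score_flops(context_tokens: int,
                          head_dim: int = 128,
                          num_heads: int = 16) -> float:
    """FLOPs to form the QK^T score matrix for one layer: 2 * n^2 * d per head."""
    return 2.0 * context_tokens**2 * head_dim * num_heads

for n in (100_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} tokens -> {attention_score_flops(n):.2e} FLOPs per layer")

# Every 1000x increase in context length costs 1,000,000x more compute for
# the score matrix alone -- hence the "illusion" of attending to trillions
# of tokens has to come from retrieval, not literal quadratic attention.
```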
I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.

Shawn Wang [00:16:26]: By the way, I did some math, and if you spoke all day, every day, for eight hours a day, you'd only generate a maximum of like a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, the classic example is that you start going beyond language into proteins and whatever else is extremely information-dense. Yeah.

Jeff Dean [00:16:55]: I mean, one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. To some people that means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. There are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of the trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. It cues the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had the DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion meaning video as opposed to static images, because there's a reason evolution has evolved eyes something like 23 independent ways: it's such a useful capability for sensing the world around you. And that's really what we want these models to be able to do: interpret the things we're seeing or paying attention to, and then help us use that information to do things. Yeah.

Shawn Wang [00:19:05]: I think on motion, I still want to shout out that Gemini is still the only native video-understanding model out there. So I use it for YouTube all the time.

Jeff Dean [00:19:15]: Nice. Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and some soccer goals, and things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, what the date is when they happened, and a short description? And you now get an 18-row table of that information extracted from the video, which is not something most people think of as a capability: turning video into a SQL-like table.

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, you mentioned attending to the whole internet, right? Google was almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out how to build the AI mode that is maybe a much broader search in span, versus the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, I think even in pre-language-model-based work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods; you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify the 30,000-ish documents, maybe 30 million interesting tokens, and then figure out how to go from that to the 117 documents you really should be paying attention to in order to carry out the task the user has asked. And you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models. Then you have some system that helps you narrow down from 30,000 to the 117, with a slightly more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 documents, is your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion: you are searching the internet, but you're finding a very small subset of things that are relevant.

Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT was put inside Google search basically immediately, and that improved results a lot, right?
I don't have any numbers off the top of my head, but I'm sure you do; those are obviously the most important numbers at Google. Yeah.

Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over these very high-traffic systems. It's Google search, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video, predicting the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok as well, for xAI.

Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have a history of what that progression was? Oh yeah.

Jeff Dean [00:24:09]: I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that happened in 2001 was we were working to scale the system in multiple dimensions. One: we wanted to make our index bigger, so we could retrieve from a larger index, which generally helps your quality, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make it 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those shards. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of the index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms: restaurant and restaurants and cafe and bistro and all these things.
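A back-of-the-envelope sketch of the in-memory index math Dean just walked through. The shard and replica counts come from the conversation; the seek and probe timings are rough assumptions in the spirit of "latency numbers every programmer should know":

```python
# Worked version of the 2001 numbers: 60 shards x 20 replicas = 1,200 machines,
# enough aggregate RAM to hold one full copy of the index in memory.

SHARDS = 60        # index partitions, to bound per-query latency
REPLICAS = 20      # copies of each shard, to handle traffic
print(SHARDS * REPLICAS, "machines")   # 1200

DISK_SEEK_S = 10e-3    # ~10 ms per disk seek (assumed)
RAM_PROBE_S = 100e-6   # ~100 us per in-memory posting-list probe (assumed)

def per_shard_cost(terms: int, probe_s: float) -> float:
    # Every query term needs at least one posting-list lookup on the shard;
    # on disk, each lookup is a seek, and seeks dominate.
    return terms * probe_s

print(f"disk, 4-term query : {per_shard_cost(4, DISK_SEEK_S) * 1e3:5.1f} ms")
print(f"disk, 50-term query: {per_shard_cost(50, DISK_SEEK_S) * 1e3:5.1f} ms")
print(f"RAM,  50-term query: {per_shard_cost(50, RAM_PROBE_S) * 1e3:5.1f} ms")
# 40 ms vs 500 ms vs 5 ms: once the index is in RAM, expanding a 4-word query
# to 50 terms of synonyms is cheaper than the original query was on disk.
```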
You can suddenly start really getting at the meaning of the words, as opposed to the exact form the user typed. And that was 2001, very much pre-LLM, but it was really about softening the strict definition of what the user typed in order to get at the meaning.

Alessio Fanelli [00:26:47]: What are the principles you use to design these systems, especially when, in 2001, the internet is doubling or tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this? Yeah.

Jeff Dean [00:27:08]: I mean, first, whenever you're designing a system, you want to understand which design parameters are going to be most important. So: how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple; will the system still work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. Like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. So I'm a big fan of thinking through designs in your head, playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate is actually the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute. Okay.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news in the index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. Was there any... you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated too.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify the pages; you have to decide which pages should be updated and at what frequency.
Oh yeah.

Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and the importance of pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah, yeah. Well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: latency numbers every programmer should know. Was there a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this is like eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do the back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say: okay, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? You can actually do those thought experiments in thirty seconds or a minute with the basic numbers at your fingertips. And then as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in each particular kind of structure.

Shawn Wang [00:31:51]: Which is a simple byte conversion; that's nothing interesting. I wonder, if you were to update your list...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.

Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, whether on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low; depending on your precision, I think it's sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of the thing you moved, many, many times.
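Dean's picojoule arithmetic, worked out as a sketch. The two energy constants are the rough figures from the conversation; everything else is illustrative:

```python
# Energy view of batching: moving one weight across the chip ~1000 pJ,
# one low-precision multiply-accumulate ~1 pJ (rough figures from the talk).

MOVE_PJ = 1000.0   # move one parameter from far SRAM to the multiplier unit
MAC_PJ = 1.0       # one multiply-accumulate once the weight is there

def pj_per_multiply(batch: int) -> float:
    """Amortized energy per useful multiply: one weight move feeds `batch`
    multiplies, one per example in the batch."""
    return (MOVE_PJ + batch * MAC_PJ) / batch

for b in (1, 8, 64, 256):
    print(f"batch {b:>3}: {pj_per_multiply(b):7.1f} pJ per multiply")
# batch   1: 1001.0  -- 1000 pJ of data motion buys 1 pJ of math
# batch 256:    4.9  -- the move is amortized and compute dominates again
```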
So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost, and the compute cost inefficiency that you get, is quite large. So, yeah.

Shawn Wang [00:34:04]: Is there a similar trick like the one you did with putting everything in memory? I think, obviously, Groq has caused a lot of waves by betting very hard on SRAM. I wonder if that's something you already saw with the TPUs, right? To serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost, in time and latency, bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and get quite good throughput improvements and latency improvements from doing that. So you're now striping your smallish model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? What was the internal discussion? Yeah.

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, which takes you three, four, five years out. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field.
And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N+2, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Because sometimes you can squeeze some changes into N+1, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So I think it goes both ways. Sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that. Mm.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary, like...

Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very low-bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.

Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that. Interesting. While we're on this topic: the concept of precision at all is weird when we're sampling, you know? At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So there's a movement towards energy-based models and processors. I'm just curious; obviously you've thought about it, but what's your commentary?

Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends.
Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get sort of an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor. You predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. These are all really good techniques, and I think it's really good to look at them from the lens of energy, real energy, not energy-based models, and also latency and throughput. If you look at things from that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency.

Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.

Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there's a whole bunch of open problems. How do you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks? How do you orchestrate, say, one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden the capabilities of the models. If we could apply the improvements you're seeing in both math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.
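Backing up to the speculative decoding figures a moment ago (draft eight tokens, keep five or six), the same amortization arithmetic can be sketched directly; the drafter overhead below is an assumed figure, not one from the conversation:

```python
# Speculative decoding as amortization: a small drafter proposes 8 tokens,
# the large model verifies them in one pass, and typically 5-6 survive.

def big_model_passes_per_token(avg_accepted: float,
                               drafter_overhead: float = 0.1) -> float:
    """Passes over the large model's weights per emitted token.
    Plain autoregressive decoding is 1.0; the drafter adds a small tax."""
    return (1.0 + drafter_overhead) / avg_accepted

print("plain decoding    : 1.00 weight passes per token")
print(f"speculative (5.5) : {big_model_passes_per_token(5.5):.2f} weight passes per token")
# ~0.20 vs 1.00: roughly a 5x reduction in how often the big model's weights
# must be streamed past the multipliers, which is the expensive part above.
```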
Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research. And you kind of have it with AI Mode, in a way that's not verifiable. I'm curious if there's any thread you think is interesting there. Both are, like, information retrieval underneath. So I wonder if the retrieval is the verifiable part, the thing you can score. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even for retrieving. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved, to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.

Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly so with this RLVR thing, where everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like: I don't know, LLM judge?

Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Because I think everyone sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems, right? Like: Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, where you're doing IMO and Erdős problems in pure language. That is a really amazing jump in capabilities in a year and a half or so. And for other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for some others, and we're going to work hard on making that better. Yeah.

Shawn Wang [00:46:13]: Yeah.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that.

Shawn Wang [00:46:20]: That would be AGI, as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
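A minimal sketch of the critic pattern Dean describes above, where the same model, prompted differently, scores another output as a soft reward in a non-verifiable domain. The `generate` function and prompt wording are hypothetical stand-ins, not any real API:

```python
# The same model, prompted as a critic, produces a scalar reward for RL on
# tasks with no automatic verifier. Everything here is a sketch.

def generate(prompt: str) -> str:
    raise NotImplementedError  # call your model of choice here

def critic_score(task: str, answer: str) -> float:
    """Ask the model to grade an answer 0-10; use it as a soft RL reward."""
    verdict = generate(
        "You are a strict reviewer. Rate the response from 0 to 10 for how "
        "well it completes the task. Reply with only a number.\n\n"
        f"Task: {task}\n\nResponse: {answer}"
    )
    try:
        return max(0.0, min(10.0, float(verdict.strip()))) / 10.0
    except ValueError:
        return 0.0  # unparseable verdicts earn no reward

def rank_candidates(task: str, candidates: list[str]) -> list[str]:
    """The retrieval variant: keep the candidates the critic rates highest."""
    return sorted(candidates, key=lambda c: critic_score(task, c), reverse=True)
```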
Just to dwell a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? I think this question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. Right? We have some distributed representation, neural-net-like in some way, of lots of different neurons, with activation patterns firing when we see certain things. And that enables us to reason and plan, to do chains of thought, and to roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. In a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. Yeah. I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.

Jeff Dean [00:48:06]: I mean, I do think that IMO effort, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street-sign recognition model; or I want to do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed one of the people on that team, and he was like: yeah, I don't know how they work, I don't know where the IMO competition was held, I don't know the rules of it. I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can kind of tackle any task. Which is the bitter lesson, I guess. I don't know. Yeah.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of, maybe, the capacity of a model: abstractly, a model can only contain the number of bits that it has. And God knows Gemini Pro is, like, one to ten trillion parameters; we don't know. But the Gemma models, for example: a lot of people want the open-source local models, and those have some knowledge that's not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning, you know?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than this obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? It should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... Yeah.

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email. Probably we'd rather have a single model that we can then use, with the ability to retrieve from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that across multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Or for, say, robotics: we're probably not going to train Gemini on all the possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be good. It'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. Right? If I have a health-related thing, it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.

Shawn Wang [00:54:36]: Installable knowledge. Yeah.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare, and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably might be better than a general model trained on, say, public data. Yeah.

Shawn Wang [00:55:58]: Yeah. I believe, by the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.

Jeff Dean [00:56:09]: Oh, yeah. I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's essentially no written text.

Shawn Wang [00:56:20]: So, yeah. So you can just do it that way, just put it in the context. You can put the whole dataset in the context, right?

Jeff Dean [00:56:27]: If you take a language like, you know, Somali or something, there is a fair bit of Somali text in the world, or Ethiopian Amharic or something. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the model's capabilities in those languages.

Shawn Wang [00:56:49]: Yeah.
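As a sketch of the experiment Dean is describing: no fine-tuning, just the language's written material placed in a long context, followed by a translation request. The file names and the `generate` call are hypothetical stand-ins, not a real API:

```python
# In-context learning of a very low-resource language: put the grammar and
# word list in the context window and ask for a translation. Sketch only.

def generate(prompt: str) -> str:
    raise NotImplementedError  # call any long-context model here

grammar = open("kalamang_grammar.txt").read()     # a field-linguistics grammar
wordlist = open("kalamang_wordlist.txt").read()   # a bilingual word list

prompt = (
    "Below are a grammar book and a word list for Kalamang, a language with "
    "very few speakers and almost no written material.\n\n"
    f"{grammar}\n\n{wordlist}\n\n"
    "Using only the material above, translate to Kalamang: "
    "'The fisherman brought two canoes to the village.'"
)
print(generate(prompt))  # the model picks up the language from context alone
```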

    EMS One-Stop
    Dr. Linda Dykes: From toxic culture to safer systems

    EMS One-Stop

    Play Episode Listen Later Feb 12, 2026 47:27


    In this episode of EMS One-Stop, Dr. Linda Dykes joins Rob Lawrence from the UK for a wide-ranging, transatlantic conversation that starts with workplace culture and ends with a practical look at how health systems can keep patients safely at home. In the first half, Linda breaks down her newly published (open-access) qualitative paper, provocatively titled “It's not bullying if I do it to everyone,” drawn from UK NHS “Med Twitter” responses: a raw, heartbreaking window into the red flags of toxic workplace culture, how bullying is experienced in the eye of the beholder, and why incivility and silence are not just HR problems — they're patient safety threats. In the second half, Linda brings listeners into the UK's evolving admission alternative world: frailty care at home, urgent community response models, and the increasingly important interface between EMS and community-based teams. She explains the UK's SPOA (single point of access) concept, why she dislikes the term “admission avoidance,” and how ED crowding and access change the risk-benefit equation for hospital vs. home. Rob connects the dots back to the U.S. reality — reimbursement, APOT/wall time, treatment-in-place policy — and why this work is becoming a shared challenge on both sides of the Atlantic. Timeline 00:51 – Rob opens, recaps NAEMSP in Tampa and recent content. 02:25 – Rob introduces Linda as the “triple threat” (emergency medicine, primary care/GP, geriatrics) and tees up two-part discussion. 05:39 – Rob introduces Linda's paper: “It's not bullying if I do it to everyone.” 06:13 – Linda explains why toxic culture is increasingly visible and how the tweet prompt became a dataset. 07:33 – “Flash mob research group” forms; Linda explains social-media-to-qualitative methodology and limitations. 10:03 – Rob asks about bias; Linda clarifies purpose: insight, not representativeness. 16:39 – Linda defines gaslighting and why it's so destabilizing. 18:21 – Reactions to publication; resonance, sharing and uncomfortable self-reflection on learned behaviors. 20:18 – The “16:55 Friday email” as a weapon — and as an accidental harm. 23:29 – Leadership as “the sponge” — absorbing pressure rather than passing it down. 25:27 – “One thing right now”: know the impact your words can have, especially on vulnerable staff. 26:41 – Rob on “pressure bubbles,” micro-movements and atmospherics: how leaders shift climate without realizing it. 30:53 – SPOA explained: single point of access and urgent community response behind it. 33:03 – EMS interface: calling before conveyance to find safe pathways to keep patients at home. 35:47 – Linda on mortality risk of access block/long waits and how that reframes risk decisions. 37:19 – Evolving models: primary care-led response vs. hospital at home approaches. 39:34 – Clinical myths challenged: oral antibiotics sometimes non-inferior to IV in conditions we assumed needed admission. 40:34 – Outcomes: hospital at home trial signals safety and fewer patients in institutional care by 6 months. 42:00 – Telemedicine/telehealth: underutilized but useful; when you still need a senior clinician in person. 44:50 – Closing takeaways: read the paper (with trigger warning); admission alternative work is deeply satisfying. Enjoying the show? Email editor@ems1.com to share feedback or suggest guests for a future episode. 

    HIV Hour
    154: HIV Hour 5th February 2026

    HIV Hour

    Play Episode Listen Later Feb 12, 2026 48:20


Fantastic interviews with Alf Le Flohic chatting about the Sussex Lancers Motor Sport Club Photograph Exhibition. https://brightonmuseums.org.uk/event/the-sussex-lancers-tailor-made-leather-lovers/ Flash popped in to update us on his charity bike ride for an amazing sexual health service, Checkpoint Canaries. http://bit.ly/emrys-flash Phillip Wragg pops in to talk about the upcoming National HIV Testing Week. https://tht.org.uk/news/national-hiv-testing-week-returns-2026

    Le journal France Bleu Poitou
    Le flash de 10h du jeudi 12 février 2026

    Le journal France Bleu Poitou

    Play Episode Listen Later Feb 12, 2026 2:10


duration: 00:02:10 - The 10 a.m. news flash. Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.

    Le journal France Bleu Poitou
    Le flash de 11h du jeudi 12 février 2026

    Le journal France Bleu Poitou

    Play Episode Listen Later Feb 12, 2026 2:18


duration: 00:02:18 - The 11 a.m. news flash. Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.

    The Flatlander Kennels Podcast with Chris Jobman
    Episode #66 Favorite Dog Traits, Force Fetch, AKC vs HRC, And SRS Transition

    The Flatlander Kennels Podcast with Chris Jobman

    Play Episode Listen Later Feb 11, 2026 44:16


Chris Jobman kicks this one off with a quick season recap, including a trip to Arkansas where the freeze locked everything up except a few key spots, and the ducks absolutely piled in. Then we get into one of the coolest moments of the episode: Chris breaks down a monster retrieve from Flash in an icy, slushy river, and why situations like that prove every river dog needs to handle, not just to pick up birds, but to stay alive and come back safe. After that, we roll into listener questions from the Flatlander Kennels Podcast Facebook group, including: favorite characteristics in dogs and what Chris looks for in puppies; how to prevent the head-drop habit during force fetch; if Chris could bring one UKC rule into AKC and one AKC rule into UKC, what would they be; and transitioning a finished or grand-level dog into SRS, and why the field-trial style is harder than most people think. Partners: Marino Decoys: marinodecoys.com. Mammoth Guardian Dog Crates: mammothpet.com. Discount code: GUARDIAN15 for 15% off.

    Arcadia Economics
    Silver Squeeze Warning Signs Flash In China...

    Arcadia Economics

    Play Episode Listen Later Feb 11, 2026 11:29


Silver Squeeze Warning Signs Flash In China... The signs of silver supply issues in China have been building over the past few months, and now we get the latest indication that a silver squeeze may have already begun. To find out more click to watch today's show now! - To get access to Vince's research in 'Goldfix Premium' go to: https://vblgoldfix.substack.com/ - Get access to Arcadia's Daily Gold and Silver updates here: https://goldandsilverdaily.substack.com/ - Join our free email list to be notified when a new video comes out: click here: https://arcadiaeconomics.com/email-signup/ - Follow Arcadia Economics on twitter at: https://x.com/ArcadiaEconomic - To get your copy of 'The Big Silver Short' (paperback or audio) go to: https://arcadiaeconomics.com/thebigsilvershort/ - #silver #silverprice #gold And remember to get outside and have some fun every once in a while!:) Subscribe to Arcadia Economics on Soundwise

    The Robin Zander Show
    Corporating: Navigating Career and Life with Mandy Mooney

    The Robin Zander Show

    Play Episode Listen Later Feb 11, 2026 166:51


    In this episode, I'm joined by Mandy Mooney — author, corporate communicator, and performer — for a wide-ranging conversation about mentorship, career growth, and how to show up authentically in both work and life.   We talk about her path from performing arts to corporate communications, and how those early experiences shaped the way she approaches relationships, leadership, and personal authenticity. That foundation carries through to her current role as VP of Internal Communications, where she focuses on building connections and fostering resilience across teams.   We explore the three pillars of career success Mandy highlights in her book Corporating: Three Ways to Win at Work — relationships, reputation, and resilience — and how they guide her approach to scaling mentorship and helping others grow. Mandy shares practical strategies for balancing professional responsibilities with personal passions, and why embracing technology thoughtfully can enhance, not replace, human connection.   The conversation also touches on parenting, building independence in children, and the lessons she's learned about optimism, preparation, and persistence — both in the workplace and at home.   If you're interested in scaling mentorship, developing your career with intention, or navigating work with authenticity, this episode is for you. And if you want to hear more on these topics, catch Mandy speaking at Snafu Conference 2026 on March 5th. 00:00 Start 02:26 Teaching Self-Belief and Independence Robin notes Mandy has young kids and a diverse career (performing arts → VP of a name-brand company → writing books). Robin asks: "What are the skills that you want your children to develop, to stay resilient in the world and the world of work that they're gonna grow up in?" Emphasis on meta-skills. Mandy's response: Core skills She loves the question, didn't expect it, finds it a "thrilling ride." Observes Robin tends to "put things out there before they exist" (e.g., talking about having children before actually having them). Skill 1: Envisioning possibilities "Envision the end, believe that it will happen and it is much more likely to happen." Teaching children to see limitless possibilities if they believe in them. Skill 2: Independence Examples: brushing their own hair, putting on clothes, asking strangers questions. One daughter in Girl Scouts: learning sales skills by approaching strangers to sell cookies. Independence builds confidence and problem-solving abilities for small and big life challenges. Skill 3: Self-belief / Self-worth Tied to independence. Helps children navigate life and career successfully. Robin asks about teaching self-belief Context: Mandy's kids are 6 and 9 years old (two girls). Mandy's approach to teaching self-belief Combination of: Words Mandy uses when speaking to them. Words encouraged for the children to use about themselves. Example of shifting praise from appearance to effort/creativity: Instead of "You look so pretty today" → "Wow, I love the creativity that you put into your outfit." Reason: "The voice that I use, the words that I choose, they're gonna receive that and internalize it." Corrective, supportive language when children doubt themselves: Example: Child says, "I'm so stupid, I can't figure out this math problem." Mandy responds: "Oh wow. That's something that we can figure out together. And the good news is I know that you are so smart and that you can figure this out, so let's work together to figure it out." 
Asking reflective questions to understand their inner thoughts: Example: "What's it like to be you? What's it like to be inside your head?" Child's response: "Well, you worry a lot," which Mandy found telling and insightful. Emphasizes coming from a place of curiosity to check in on a child's self-worth and self-identity journey. 04:30 Professional Journey and Role of VP of Internal Comms Robin sets up the question about professional development Notes Mandy has mentored lots of people. Wants to understand: Mandy's role as VP of Internal Communications (what that means). How she supports others professionally. How her own professional growth has been supported. Context: Robin just finished a workshop for professionals on selling themselves, asking for promotions, and stepping forward in their careers. Emphasizes that she doesn't consider herself an expert but learns from conversations with experienced people like Mandy. Mandy explains her role and path Career path has been "a winding road." Did not study internal communications; discovered it later. Finds her job fun, though sometimes stressful: "I often think I might have the most fun job in the world. I mean, it, it can be stressful and it can't, you know, there are days where you wanna bang your head against the wall, but by and large, I love my job. It is so fun." Internal communications responsibility: Translate company strategy into something employees understand and are excited about. Example: Translate business plan for 2026 to 2,800 employees. Team's work includes: Internal emails. PowerPoints for global town halls. Speaking points for leaders. Infusing fun into company culture via intranet stories (culture, customers, innovation). Quick turnaround on timely stories (example: employee running seven marathons on seven continents; story created within 24 hours). Storytelling and theater skills are key: Coaching leaders for presentations: hand gestures, voice projection, camera presence. Mandy notes shared theater background with Robin: "You and I are both thespian, so we come from theater backgrounds." Robin summarizes role Sounds like a mix of HR and sales: supporting employee development while "selling" them on the company. Mandy elaborates on impact and mentorship Loves making a difference in employees' lives by giving information and support. Works closely with HR (Human Resources) to: Provide learning and development opportunities. Give feedback. Help managers improve. Wrote a book to guide navigating internal careers and relationships. Mentorship importance: Mentors help accelerate careers in any organization. Mandy's career journey Started studying apparel merchandising at Indiana University (with Kelley School of Business minor). Shifted from pre-med → theater → journalism → apparel merchandising. Took full advantage of career fairs and recruiter networking at Kelley School of Business. "The way that I've gotten jobs is not through applying online, it's through knowing somebody, through having a relationship." First role at Gap Inc.: rotational Retail Management Training Program (RMP). Some roles enjoyable, some less so; realized she loved the company even if some jobs weren't ideal. Mentor influence: Met Bobby Stillton, president of Gap Foundation, who inspired her with work empowering women and girls. Took a 15-minute conversation with Bobby and got an entry-level communications role. Career growth happened through mentorship, internal networking, and alignment with company she loved. 
Advice for her daughters (Robin's question) Flash-forward perspective: post-college or early career. How to start a career in corporate / large organizations: Increase "luck surface area" (exposure to opportunities). Network in a savvy way. Ask at the right times. Build influence to get ahead. Mentorship and internal relationships are key, not just applying for jobs online. 12:15 Career Advice and Building Relationships Initial advice: "Well first I would say always call your mom. Ask for advice. I'm right here, honey, anytime." Three keys to success: Relationships Expand your network. "You say yes to everything, especially early in your career." Examples: sit in on meetings, observe special projects, help behind the scenes. Benefits: Increases credibility. Shows people you can do anything. Reputation Build a reputation as confident, qualified, and capable. Online presence: Example: LinkedIn profile—professional, up-to-date, connected to network. Be a sponsor/advocate for your company (school, office, etc.). Monthly posts suggested: team photos, events, showing responsibility and trust. Offline reputation: Deliver results better than expected. "Deliver on the things that you said you were gonna do and do a better job than people expected of you." Resilience Not taught from books—learned through experience. Build resilience through preparation, not "fake it till you make it." Preparation includes: practicing presentations, thinking through narratives, blocking time before/after to collect thoughts and connect with people. "Preparation is my headline … that's part of what creates resilience." Mandy turns the question to Robin: "I wanna ask you too, I mean, Robin, you, you live and breathe this every day too. What do you think are the keys to success?" Robin agrees with preparation as key. Value of service work: Suggests working in service (food, hospitality) teaches humility. "I've never met somebody I think even ever in my life who is super entitled and profoundly ungrateful, who has worked a service job for any length of time." Robin's personal experience with service work: First business: selling pumpkins at Robin's Pumpkin Patch (age 5). Key formative experience: running Robin's Cafe (2016, opened with no restaurant experience, on three weeks' notice). Ran the cafe for 3 years, sold it on Craigslist. Served multiple stakeholders: nonprofit, staff (~15 employees), investors ($40,000 raised from family/friends). Trial by fire: unprepared first days—no full menu, no recipes, huge rush events. Concept of mise en place ("everything in its place") as a preparation principle. Connecting service experience to corporate storytelling: Current business: Zander Media (videos, corporate storytelling). Preparation is critical: Know who's where, what will be captured, and what the final asset looks like. Limited fixes in post-production, even with AI tools. Reinforces importance of preparation through repeated experience. Advice for future children / young people: Robin would encourage service jobs for kids for months or a year. Teaches: Sleep management, personal presentation, confidence, energy. "Deciding that I'm going to show up professionally … well … energetically." Emphasizes relentless optimism: positivity is a superpower. Experience shows contrast between being prepared and unprepared—learning from both is crucial. 16:36 The Importance of Service Jobs and Resilience Service jobs as formative experience: Worked as a waitress early in her career (teenager). Describes it as "the hardest job of my life".
- Challenges included: remembering orders (memory), constant multitasking, dealing with different personalities and attitudes, and maintaining positivity and optimism through long shifts (e.g., nine-hour shifts).
- Fully agrees with Robin: service jobs teach humility and preparation.

Optimism as a superpower:
- "I totally agree too that optimism is a superpower. I think optimism is my superpower."
- Writes about this concept in her book.
- Believes everyone has at least one superpower, and successful careers involve identifying and leaning into that superpower.

Robin asks about the book:
- Why did Mandy write the book? What was the inspiration behind it?
- Also wants a deep dive into the writing process for his own interest.

Mandy's inspiration and purpose of the book:
- Title: "Corporating: Three Ways to Win At Work"
- Primary goal: scale mentorship. She realized as she reached the VP level that people wanted career advice.
- Increased visibility through her position as VP, her connection with her alma mater (Indiana University), and an active presence on LinkedIn. Result: many young professionals seeking mentorship.
- Challenge: it's not sustainable to mentor individually. Solution: writing a book allows her to scale mentorship without minimizing impact.
- Secondary goals / personal motivations: the book acts as a form of "corporate therapy." It reflects on the first 10 years of her career, acknowledges both successes and stumbles, helps process trials and tribulations, and provides perspective and gratitude for lessons learned.
- Fun aspect: as a writer, she enjoyed formatting and condensing experiences into a digestible form for readers.
- Legacy and contribution: "I had something that I could contribute meaningfully to the world … as part of my own legacy … I do wanna leave this world feeling like I contributed something positive. So this is one of my marks."

21:37 Writing a Book and Creative Pursuits

Robin asks Mandy about the writing process: "What's writing been like for you? Just the, the process of distilling your thinking into something permanent."

Mandy: the writing process and finding the "25th hour":
- Loves writing: "I love writing, so the writing has been first and foremost fun."
- Where she wrote the book: mostly from the passenger seat of her car. She's a working mom and didn't have traditional writing time.
- Advice from mentor Gary Magenta: "Mandy, you're gonna have to find the 25th hour." She found that "25th hour" in her car.
- Practical examples: during birthday party drop-offs ("Oh good. It's a drop off party. Bye. Bye, honey. See you in two hours. I'll be in the driveway. In my car. If you need anything, please don't need anything."), she would write for 1.5–2 hours. Also during Girl Scouts, swim, any activity.
- On airplanes: she finished the book on an eight-hour flight back from Germany. It was her 40th birthday (June 28). "Okay, I did it."
- Realization moment: "You chip away at it enough that you realize, oh, I have a book."

Robin: on parents and prioritization:
- His parents told him: "When you have kids, you just find a way."
- Children create stricter prioritization, a necessary forcing function.

Mandy's self-reflection:
- "I believe that I am an inherently lazy person, to be totally honest with you." But she's driven by deadlines and deliverables.
- Kids eliminate "lazy days": no more slow Saturdays watching Netflix. "They get up. You get up, you have to feed these people like there's a human relying on you."
- Motherhood forces motivation: "My inherent laziness has been completely wiped away the past nine years."
- Writing happened in small windows of time.
Importance of a creative outlet:
- Having something for yourself fuels the rest of life. Examples: writing, crocheting, quilting, music. Creativity energizes other areas of life.

Robin mentions The 4-Hour Workweek by Tim Ferriss:
- Advice from that book: have something outside your day job that fuels you.
- For Robin: physical practice (gym, handstands, gymnastics, ballet, capoeira, surfing).
- It's a place to celebrate, feel progress, and win, even if work is struggling. Example: if tickets aren't selling, if a newsletter flops, if client relationships are hard, physical training becomes the "anchor win."

Mandy's writing took over two years. Why?
- She got distracted writing a musical version of the book. There is now "Corporating: The Book" and "Corporating: The Musical," with three songs produced online, in collaboration with composer Eric Chaney.
- Inspiration from the book Time, Talent, Energy (recommended by former boss Sarah Miran). Concept: we have limited time, talent, and energy.
- Advice: follow your energy when possible. If you're flowing creatively, go with it (unless there's an urgent deadline). You'll produce better work.
- She believes the book is better because she created the musical.
- The musical helps during speaking engagements; sometimes she sings during talks.
- Why music? Attention spans are short. Not just Gen Z — everyone is distracted. Music keeps people engaged. "I'm not just gonna tell you about the three ways to win at work. I'm gonna sing it for you too."

Robin on capturing attention:
- If you can hold the attention of five-year-olds and thirteen-year-olds, you can hold anyone's attention.
- Shares a story: while in Alabama filming for the Department of Education, he interviewed the Alabama Teacher of the Year (Katie), who has taught for 20 years (kindergarten through older students).
- Observed: high enthusiasm, high energy, and a willingness to be ridiculous to capture attention.
- Key insight: engagement requires energy and presence.

28:37 The Power of Music in Capturing Attention

- Mandy's part of a group called Mic Drop Workshop, led by Lindsay (last name unclear in transcript) and Jess Tro. They meet once a month, and each session focuses on improving a different performance skill.
- The session she describes focused on facial expressions. The exercise: tell a story with a monotone voice and no facial expressions; tell the story "over the top clown like, go really big, something that feels so ridiculous"; then tell it the way you normally would.
- Result: her group had four people. "Every single one of us liked number two better than one or three."
- Why version two worked best: when people are emotive and expressive, it's more fun to watch, more entertaining, and more engaging.
- Connection to kids and storytelling: think of how you tell stories to five-year-olds: whisper, get loud, get soft, use dynamic shifts. The same applies on stage.
- Musical integration: music is another tool for keeping attention and helps maintain engagement in a distracted world.

Robin: hiring for energy and presence:
- Talks about hiring his colleague Zach Fish: technical producer for Responsive Conference and Snafu Conference, and a freelancer Robin works with often.
- Why Robin hires Zach: yes, he's technically excellent, but more importantly: "He's a ball of positive energy and delight and super capable and confident, but also just pleasant to be with."
- Robin's hiring insight: if he has a choice, he chooses Zach. Why? "I feel better." Energy and presence influence hiring decisions.
- Zach's background: teaches weekly acrobatics classes for kids in Berkeley.
- He's used to engaging audiences, and that translates into professional presence.

Robin: energy is learnable:
- When thinking about who to hire, who to promote, and who to give opportunities to, the traits that matter are enthusiasm, positivity, big energy, and being "over the top" when needed.
- Important insight: this isn't necessarily a God-given gift. It can be learned, like music or performance, like anything else.

31:00 The Importance of Positive Work Relationships

Mandy reflects on the tension between loud voices and quiet voices:
- "Oftentimes the person who is the loudest is the one who gets to talk the most, but the person who's the quietest is the one who maybe has the best ideas."
- Core question: how do you exist in a world where both of those things are true?
- Parenting lens: one daughter is quieter than the other. It's important to encourage authenticity while teaching the skill of using your voice loudly when needed. It's not about changing personality; it's about equipping someone to advocate for themselves when necessary.

The book is targeted at:
- Students about to enter the corporate world.
- Early-career professionals.

Intentional writing decision:
- Exactly 100 pages. Purpose: "to the point, practical advice." It holds attention, is digestible, and is designed for distracted readers.

Emotional honesty:
- Excited but nervous to reconnect with students. Acknowledges the world has changed and it's been a while since she was in college.

Advice she's trying to live: know your audience.
- Core principle: "Get to know your audience. Like really get in there and figure out who they are."
- Pre-book-launch tour purpose: visiting universities (including her alma mater), observing students, and understanding their learning environment, their day-to-day experiences, and the world they're stepping into.
- Communication principle: knowing your audience is essential in communications. It's also essential in career-building.
- If you have a vision of where you want to go: "Try to find a way to get there before you're there." Tactics: meet people in those roles, shake their hands, have coffee, sit in those seats, walk those halls, see how it feels.
- Idea: test the future before committing to it; reduce uncertainty through proximity.

What if you don't have a vision? Robin pushes back thoughtfully:
- What about people who don't know what they want to do, aren't sure about staying at a company, or aren't sure about a career vs. a business vs. being a stay-at-home parent?
- Acknowledges there's abundance in the world and attention is fragmented.
- Implied tension: how do you move forward without clarity?

35:13 Mentorship and Career Guidance

How to help someone figure out what's next:

Start with questions, not answers:
- A mentor's primary job: ask questions from a place of curiosity, especially when someone is struggling with what they want to do or their career direction.
- Key questions: What brings you joy? What gives you energy? What's the dream? Imagine retirement — what does that look like?
- Example: a financial advisor made Mandy and her husband define their retirement vision, then work backwards (a condo in New Zealand, annual family vacations).

Clarify what actually matters:
- Distinguish life priorities: security → corporate job; teamwork → corporate environment; variety and daily interaction → specific roles.
- Mentoring becomes a checklist: joy, strengths, lifestyle, financial expectations, work environment preferences.
- Then make connections: introduce them to people in relevant environments and encourage informational interviews.

You don't know what you don't know:
- Trial and error is inevitable.
- Build your network intentionally: shadow people, observe, talk to parents' friends and friends of friends.
- Even experienced professionals have untapped opportunities. Stay curious and do the legwork.

Mixing personal and professional identity:
- The confidence to bring personal interests into corporate work comes from strategy plus luck.
- Example: at Prologis in 2021, senior leaders joked about forming a band; Mandy spoke up and became lead singer. The CEO took interest after the first performance and supported her book launch.
- She didn't always feel this way. In her early corporate years she felt like a "corporate robot," worrying about jargon, meetings, email etiquette, and blending in. The book explores blending in while standing out.

Advice for bringing your full self to work:
- Don't hide it, but don't force it; weave it into casual conversation.
- Find advocates: amazing bosses vs. terrible ones; learn from both.
- Mentorship shaped her framework: relationships, reputation, and resilience.

Resilience and rejection:
- Theater as rejection bootcamp: auditions, constant rejection.
- Foundations of resilience: surround yourself with supportive people, develop intrinsic self-worth, know you are worthy.

Creating conditions for success:
- Age 11 audition story: a last-minute opportunity; the director asked her to sing, she sang, and she got the part.
- Why it worked: connections (an aunt in the play), parent support, a director willing to take a chance, and she showed up.
- Resilience is not just toughing it out: have support systems, build self-worth, seek opportunity, create favorable conditions, and step forward when luck opens a door.

44:18 Overcoming Rejection and Building Resilience

First show experiences:
- Robin isn't sure what his first stage production was; he had to think carefully.
- At 17, he walked into a gymnastics gym after being a cross country runner for ten years, burnt out from running.
- He cold-called gyms from the Yellow Pages; most rejected him for adult classes, but one offered adult classes twice a week.
- That led to juggling, circus, fencing, capoeira, rock climbing — a "Cambrian explosion" of movement opportunities.
- About a year and a half later, he walked into a ballet studio in corduroy and a button-up, with no ballet shoes; his first ballet teacher was Eric Skinner at Reed College, where he was surrounded by former professional ballerinas.
- His first show was an internal college production; ten years later he performed as an acrobat with the San Francisco Opera in 2013: six acrobats among 200 people on stage, four-hour shows with multiple costume changes and backflips.

Relationship to AI and the evolving world of work:
- Mandy never asks her daughters "What do you want to be?" because jobs today may not exist in the future.
- Focus on interests instead: plants, how things are built, areas of curiosity for future generations.
- Coaching her team: highly capable, competent, and invested in tools and technology for digital signage, webinars, emails, data-driven insights, and videos.
- Approach AI with cautious optimism: adopt early, embrace technology, and use it to enhance work rather than replace it. Example: she uses a bot for scheduling efficiency and brainstorming, enhancing job performance by integrating AI from day one.
- Advice: approach AI with curiosity, not fear; embrace tools to be smarter and more efficient, and stay ahead in your career.

53:05 Where to Find Mandy

- Mandy will be speaking at Snafu Conference on March 5, discussing rejection and overcoming it.
- Author and speaking information: mandymooney.com
- LinkedIn: Mandy Mooney
- Music available under her real name, Mandy Mooney, on streaming platforms.

    Justin Bieber - Audio Biography
    Biography Flash: Justin Bieber's Grammy Return, Super Bowl Drama with Hailey, and 600 Million Power Couple Fortune

    Justin Bieber - Audio Biography

    Play Episode Listen Later Feb 11, 2026 3:06 Transcription Available


    Justin Bieber Biography Flash, a weekly biography. Hey there, fabulous listeners, this is Roxie Rush, your AI gossip guru powered by cutting-edge smarts to scoop the tea faster than you can say "selfie stick," and trust me, that's a good thing because I never sleep, never miss a beat, and always deliver the unfiltered glam. Straight to the Justin Bieber whirlwind these past few days: he's owning the spotlight like it's his personal runway.

Kicking off with pure fire: Justin and Hailey Bieber twinned in matching black fits for their first red carpet strut in four years at the 2026 Grammy Awards on February 1, per E! News. He's nominated for Album of the Year and Best Pop Vocal Album for his smash Swag at the ceremony hosted by Trevor Noah at Crypto.com Arena. Buzz has him performing too, carrying huge biographical weight as he reclaims his pop throne post-health scares. Dramatic pause... but then the Super Bowl drama on February 8 at Levi's Stadium. Bored Panda decoded their frosty elevator vibes: Hailey was reportedly lip-read saying she wanted to chill with bestie Kendall Jenner, while Justin shrugged, "We should go together." Body language whiz Inbaal Honigman told Showbiz Cheat Sheet they glanced away, blocking each other, with Hailey angling her legs outward and seeking comfort from Kendall. Netizens spiraled, calling it "escape night," not "date night," but no confirmed splits, just spicy speculation amid marriage scrutiny. They hit a Grammys after-party too, Hailey dipping solo while Justin rolled with a group of gals, fueling wild cheating whispers online.

Business-wise, he's stacking that paper: Social Life Magazine pegs his 2026 net worth at $300 million, turbocharged by his $200 million catalog sale, the Skylrk fashion drop he's teased on Insta as "soooooo excited," and Hailey netting $300 mil from her Rhode sale to e.l.f. Beauty for a power-couple pot over $600 mil. Coachella whispers say he's locked a $10 mil headline gig, but he's guarding his mental space, no full tour yet. No fresh social blasts or biz moves in the last 24 hours, but the Grammys glow-up screams legacy pivot.

Whew, Biebs is serving resilience and riches. Stay tuned, dolls. Thanks for rocking with Justin Bieber Biography Flash! Subscribe now to never miss an update on Justin Bieber, and search "Biography Flash" for more great biographies. Muah.

And that is it for today. Make sure you hit the subscribe button and never miss an update on Justin Bieber. Thanks for listening. This has been a Quiet Please production.
Get the best deals: https://amzn.to/42YoQGI
This content was created in partnership and with the help of Artificial Intelligence (AI).

    网事头条|听见新鲜事
    Ant Group Open-Sources the Omni-Modal Large Model Ming-flash-omni 2.0

    网事头条|听见新鲜事

    Play Episode Listen Later Feb 11, 2026 0:22


    Le journal France Bleu Poitou
    The 10 a.m. news flash for Wednesday, February 11, 2026

    Le journal France Bleu Poitou

    Play Episode Listen Later Feb 11, 2026 2:09


    Duration: 00:02:09 - The 10 a.m. news flash. Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.

    Le journal France Bleu Poitou
    The 11 a.m. news flash for Wednesday, February 11, 2026

    Le journal France Bleu Poitou

    Play Episode Listen Later Feb 11, 2026 2:21


    Duration: 00:02:21 - The 11 a.m. news flash. Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.

    The World's Greatest Comic Book Podcast

    This week on The World's Greatest Comic Book Podcast: JC and JM assemble to bring you the latest and greatest in the world of comic books! In Tinsel Town, will there be more Stargate? What does Gaiman have to say for himself? How much has Amazon spent on content? We watched: The Muppet Show, Starfleet […]

    Mr. Beast
    Biography Flash: MrBeast's Million Dollar Super Bowl Puzzle and Shocking Bank Purchase Rocks the Internet

    Mr. Beast

    Play Episode Listen Later Feb 10, 2026 2:38 Transcription Available


    Mr. Beast Biography Flash, a weekly biography. Hey everyone, it's your girl Roxie Rush here, your AI gossip whirlwind powered by the smartest tech out there so I can scoop the freshest tea lightning-fast without missing a beat—because who has time for slow humans when the stars are popping off? Let's dive into MrBeast mania from the past few days, darlings, it's been a blockbuster blur!

Over the Super Bowl weekend, Jimmy Donaldson—aka our king of cash giveaways—lit up the screen in that slick Salesforce ad, dropping a mind-bending Million Dollar Puzzle that's got the internet in a frenzy. According to ABC News' Good Morning America, he spilled on Monday that no one's cracked it yet, with clues hidden in the vault-walk commercial, linked videos, and even Super Bowl photos he's teasing—hint: hunt those numbers, puzzle peeps! He promised to tweet the second some genius Slacks him the secret code via their AI Slackbot helper, and as of Sunday night, 60 million fans had stormed the contest site. The Independent confirms the prize's still up for grabs till April 2 for US, Canada, and Mexico adults, with daily hints dropping if it stays unsolved—talk about high-stakes drama!

Then boom, Monday bombshell: Beast Industries snapped up Gen Z fintech darling Step, the teen banking app for credit-building and investing basics. Finviz reports MrBeast posted on X that he's doing this because nobody taught him money smarts growing up, vowing to hook millions of kids with that foundation—pure gold for his 466-million-sub empire. Heavy hitters like Chamath Palihapitiya cheered "We bought a bank" on X, and it's fueling Beast Mobile's rollout too. No word on the price tag, but this screams long-game biographical empire-building, shifting from YouTube stunts to real-world wealth tools.

No fresh 24-hour headlines popping yet as of Tuesday morning, but that puzzle's viral heat and fintech flex? Game-changers with staying power.

Whew, Roxie out—thanks for tuning in, loves! Hit subscribe to never miss a MrBeast update, and search "Biography Flash" for more juicy bios that'll have you obsessed!

And that is it for today. Make sure you hit the subscribe button and never miss an update on Mr. Beast. Thanks for listening. This has been a Quiet Please production.
Get the best deals: https://amzn.to/4mMClBv
This content was created in partnership and with the help of Artificial Intelligence (AI).

    Deck The Hallmark

    It's time for a new Marvel Monday episode with the 2011 Thor.

ABOUT THOR
The powerful but arrogant god Thor is cast out of Asgard to live amongst humans in Midgard (Earth), where he soon becomes one of their finest defenders.

AIR DATE & NETWORK FOR THOR
May 6, 2011 | Theaters

CAST & CREW OF THOR
Chris Hemsworth as Thor
Anthony Hopkins as Odin
Natalie Portman as Jane Foster
Tom Hiddleston as Loki
Stellan Skarsgård as Erik Selvig
Kat Dennings as Darcy Lewis
Clark Gregg as Agent Coulson

BRAN'S THOR SYNOPSIS
Three scientists — Jane Foster, Erik Selvig, and Darcy Lewis — are driving a van through the desert when they witness a lightning storm forming. Jane isn't surprised, though, because she believes she can predict the appearance of wormholes. Suddenly, the van hits a man. He's hot. And breathing.

We're then given some historical context. In 965 AD, the Frost Giants invaded Earth in Norway, using the Tesseract in hopes of taking over the planet. Luckily, the Asgardians showed up, forced the Frost Giants to retreat, and took the Tesseract with them.

We learn that Odin is telling this story to his two sons — one blond, one dark-haired — explaining that both are worthy, but only one can be king. A classic way to pit two brothers against each other. That always ends well!

Flash forward to present day: Asgard is pumped because blond-headed Thor is about to be crowned king. And wouldn't you know it — Loki is insanely jealous.

Before Thor can be crowned, the Frost Giants break into the vault that houses the Casket of Ancient Winters. Thor loses it. He flips a giant table and demands retaliation. Odin says no and reminds him that he is not king yet.

Thor says "heard that" and immediately travels through a portal to confront the Frost Giants himself. A fight breaks out. Loki discovers that the Frost Giants can't harm him — his skin even turns blue like theirs. Odin shows up, stops the battle, apologizes to the Frost Giant king, and brings everyone home.

Odin is furious with Thor. He strips him of his powers and banishes him to Earth, where he crash-lands in the desert and gets hit by a car — bringing us back to the beginning.

Thor wakes up acting completely unhinged (by Earth standards), so the locals tase him and take him to the hospital. He escapes, only for Jane to hit him with her car again. They start spending time together and discover a mysterious "satellite crash." Thor immediately knows it's his hammer.

Unfortunately, the area is now under government control. Thor still tries to retrieve the hammer, but fails — he can't lift it.

Back in Asgard, Thor's friends suspect Loki has been up to no good. Loki confronts Odin and learns the truth: he isn't Asgardian at all, but a Frost Giant taken as a baby and raised by Odin. Loki does not take this well, yells at Odin, and the stress sends Odin into a deep magical sleep. Loki is now acting king.

Loki travels to Earth and lies to Thor, claiming Odin is dead and Thor can never return to Asgard. Thor tells Jane the truth about who he is — and she handles it shockingly well.

We then learn it was Loki who secretly let the Frost Giants into Asgard earlier, setting everything in motion. Loki makes a deal with them: they can return to Asgard to kill Odin and reclaim the Casket, and in return they'll leave peacefully.

Loki sends the Destroyer — a giant murder robot — to Earth to kill Thor. Thor helps evacuate civilians and then confronts the Destroyer, offering his life in exchange for the humans' safety while Loki watches from Asgard.

The Destroyer responds by basically killing him.

But Thor's selfless sacrifice proves he is worthy. His hammer flies out of the ground straight into his hand, restoring his powers and armor. Thor defeats the Destroyer and tries to return to Asgard, promising Jane he'll come back for her.

Thor confronts Loki, and they battle on the Bifrost bridge. The bridge collapses, Odin saves Thor, and Loki — clinging to Odin's staff — finally lets go and falls into the void.

Dead???

All is well in Asgard, but not in Thor's heart, because he misses Jane. Don't worry though — she's got a new lab and is determined to find him again.

In the post-credits scene, Erik Selvig is brought to a S.H.I.E.L.D. facility where Nick Fury reveals a mysterious glowing cube and asks Selvig to study it. We then see Loki secretly influencing Selvig's mind.

Nothing ominous about that at all.

Watch the show on YouTube - www.deckthehallmark.com/youtube
Interested in advertising on the show? Email bran@deckthehallmark.com
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Why? The Podcast
    Why? Episode 391- Golden Child Skin Care

    Why? The Podcast

    Play Episode Listen Later Feb 9, 2026 46:46


    Everyone is looking for quality skincare that won't interfere with their skin regimen... how do you find it, and how do you design it? Thankfully we have Genevieve Dolan, the creator of Golden Child Skincare. For more information and to order, check out their website.

    The Backbone Wrestling Network
    We Watch Comics: Journey Through the ArrowVerse - Episode 1 - Pilot

    The Backbone Wrestling Network

    Play Episode Listen Later Feb 8, 2026 79:28


    We Watch Comics dives headfirst into the ever-expanding world of comic book movies and TV shows — one episode at a time. Join Logan and Keithie as they begin their journey through the ArrowVerse: Arrow, The Flash, Legends of Tomorrow, Supergirl, Black Lightning & Batwoman. The boys will pull at the threads of the fabric of the Multiverse and maybe have some fun doing it. This episode, the boys start with the pilot of Arrow, the show that started it all. Do they fail this show? Listen and find out.

    Geek Freaks Headlines
    Eragon Is Moving Forward at Disney+ With Showrunners Todd Harthan and Todd Helbing

    Geek Freaks Headlines

    Play Episode Listen Later Feb 8, 2026 1:02


    The Geek Freaks Headlines host breaks down the latest update on the Eragon TV adaptation: Disney+ has officially tapped two showrunners, Todd Harthan and Todd Helbing. You'll get quick background on both, why their track records matter, and what the next phase of development likely looks like as the series moves toward casting and production.

Timestamps and Topics
00:00 The Eragon series update: two showrunners have been chosen
00:06 Who Todd Harthan is, plus his past work on High Potential and The Resident
00:16 Who Todd Helbing is, including Superman & Lois and The Flash
00:31 Why this hire matters, and what it suggests about Disney's seriousness with the adaptation
00:46 What to watch for next: casting, development progress, and early expectations
00:51 Closing thoughts

Key Takeaways
Eragon is still early in development, but naming showrunners is a meaningful step forward.
Todd Harthan brings network TV experience and a steady hand from series like The Resident.
Todd Helbing has genre TV experience, especially with long-running effects-heavy shows like The Flash.
The pairing hints at a practical, sustainability-focused approach: making the budget work, keeping the story moving, and planning for multiple seasons.
The next big milestone is casting, which will give fans the clearest signal yet on tone and direction.
The host frames this as part of a bigger need for Disney+ to widen its lineup beyond Marvel Studios and Star Wars releases.
There's a clear hope that the series can deliver an adaptation people embrace after the mixed reputation of Eragon.

Memorable Quotes
"What's it all about? Longevity."
"Using the special effects budget wisely…"
"Now we actually have showrunners."
"Next is going to be casting and all that."
"Try to do better than the 2006 movie…"

Call to Action
If you enjoyed the update, subscribe to Geek Freaks Headlines, leave a review, and share the episode with #GeekFreaksHeadlines. It helps more fans find the show.

Links and Resources
All news discussed on the show is sourced through Geek Freaks Podcast (our news hub).
Transcript: https://geekfreakspodcast.com/

Follow Us
Instagram: @geekfreakspodcast
Twitter: @geekfreakspod
Threads: @geekfreakspodcast
Facebook: The Geek Freaks Podcast
Patreon: Geek Freaks Podcast

Listener Questions
Got thoughts on the Eragon adaptation, casting hopes, or what you want from the world of Alagaësia on TV? Send your questions or topic suggestions through our socials, and we may cover them in a future episode.

    Batrankings
    Flash and Substance

    Batrankings

    Play Episode Listen Later Feb 7, 2026 44:02


    Your intrepid hosts, Ben Creighton and Kenny Windorski, have meticulously ranked Batman: The Animated Series, The New Batman Adventures, Superman: The Animated Series, Matlock, Batman Beyond, Murder She Wrote, and Justice League with unimpeachable SCIENCE! After a science-free year with Static Shock, now we're back to SCIENCE with Justice League Unlimited!

Join us on Discord at bit.ly/LandOfTheBlind

Get your own Justice League Cold Open Bingo card and play along: https://bingobaker.com/#64c7bfe36e604708

The List:
1.) Question Authority
2.) The Doomsday Sanction
3.) A Better World
4.) Starcrossed
5.) Panic in the Sky
6.) Divided We Fall
7.) Epilogue
8.) Fearful Symmetry
9.) Secret Origins
10.) Murder She Wrote - The Death of Sherlock Holmes
11.) Task Force X
12.) Flashpoint
13.) Savage Time
14.) Kid Stuff
15.) A Knight of Shadows
16.) In Blackest Night
17.) The Enemy Below
18.) Hereafter
19.) Wild Cards
20.) The Cat and the Canary
21.) For the Man Who Has Everything
22.) Initiation
23.) Comfort and Joy
24.) Only a Dream
25.) Double Date
26.) The Greatest Story Never Told
27.) The Once and Future Thing
28.) Eclipsed
29.) Shadow of the Hawk
30.) Tabula Rasa
31.) Twilight
32.) The Brave and the Bold
33.) Paradise Lost
34.) Clash
35.) Ultimatum
36.) The Ties That Bind
37.) Legends
38.) Injustice For All
39.) War World
40.) The Balance
41.) Hunter's Moon
42.) Chaos at the Center of the Earth
43.) The Return
44.) This Little Piggy
45.) I Am Legion
46.) Secret Society
47.) Maid of Honor
48.) Hawk & Dove
49.) Fury
50.) Hearts and Minds
51.) The Terror Beyond
52.) Metamorphosis
53.) To Another Shore
54.) Wake the Dead
55.) Dark Heart

    Harmless Phosphorescence
    Thunderbolts*

    Harmless Phosphorescence

    Play Episode Listen Later Feb 7, 2026 166:16


    What if depression was the big bad and we hugged it to death? We're watching Thunderbolts* on Harmless Phosphorescence!

Support the show and get early access and exclusive content at https://www.patreon.com/harmlessentertainment
https://www.youtube.com/channel/UCEDmdtUAW_pJYCJfaZV7Unw/live
https://www.reddit.com/r/harmlessentertainment

Buy some Merch! https://www.teepublic.com/stores/attention-hellmart-shoppers

Check out Executive Producer Michael Beckwith's movie website at https://upallnightmovies.com/

Ranked: #23

RANKINGS
1 Endgame
2 Spider-Man No Way Home
3 Infinity War
4 Logan
5 Deadpool & Wolverine
6 Captain America: Civil War
7 The Avengers
8 The Dark Knight
9 THE Suicide Squad
10 Thor Ragnarok
11 Guardians of the Galaxy vol 3
12 Black Panther
13 Iron Man
14 Captain America: The Winter Soldier
15 Guardians of the Galaxy vol 2
16 Guardians of the Galaxy
17 Batman Begins
18 Batman 89
19 Spider-Man 2
20 Spider-Man Homecoming
21 Spider-Man Far From Home
22 Black Panther: Wakanda Forever
23 Thunderbolts*
24 Thor: Love and Thunder
25 Deadpool 2
26 Deadpool
27 The Batman
28 Captain America: The First Avenger
29 Spider-Man
30 X-Men: Days of Future Past
31 Dr Strange in the Multiverse of Madness
32 Shang-Chi
33 Joker
34 Captain Marvel
35 Ant-Man
36 Blue Beetle
37 Black Widow
38 Ant-Man and the Wasp
39 Eternals
40 Avengers: The Age of Ultron
41 Birds Of Prey
42 Wonder Woman 1984
43 Wonder Woman
44 Iron Man 3
45 The Dark Knight Rises
46 Superman 1978
47 The Marvels
48 Dr Strange
49 Thor
50 Kick-Ass
51 X-Men First Class
52 Hellboy
53 X2
54 Darkman
55 Iron Man 2
56 Swamp Thing
57 Hellboy II: The Golden Army
58 Watchmen
59 X-Men 2000
60 Batman Returns
61 Blade
62 Defendor
63 Unbreakable
64 The Crow
65 Batman 66
66 Orgazmo
67 Superman II
68 Ant-Man & The Wasp: Quantumania
69 Shazam!
70 Thor: The Dark World
71 The Wolverine
72 Superman Returns
73 Blade II
74 Mystery Men
75 Super
76 Teenage Mutant Ninja Turtles
77 Venom: The Last Dance
78 Chronicle
79 Ghost Rider: Spirit of Vengeance
80 Man of Steel
81 Venom: Let There Be Carnage
82 The Green Hornet
83 The Incredible Hulk
84 Sky High
85 The Mask
86 Constantine
87 The New Mutants
88 The Rocketeer
89 Superman III
90 Buffy the Vampire Slayer
91 The Return of Swamp Thing
92 The Flash
93 Shazam! Fury of the Gods
94 Superhero Movie
95 Blade Trinity
96 Batman V Superman: Dawn of Justice
97 Venom
98 Aquaman and the Lost Kingdom
99 Captain America: Brave New World
100 Black Adam
101 Fantastic Four: The Rise of Silver Surfer
102 Hancock
103 Fantastic Four
104 Madame Web
105 Blankman
106 Supergirl
107 The Crow 2024
108 Hellboy 2019
109 Power Rangers
110 The Meteor Man
111 Justice League
112 X-Men Last Stand
113 Van Helsing
114 Spiderman 3
115 The Amazing Spider-Man
116 TMNT2
117 Superman and the Mole Men
118 Green Lantern
119 Ghost Rider
120 TMNT3
121 Hero At Large
122 Push
123 Jumper
124 Condorman
125 Howard The Duck
126 Aquaman
127 Punisher: War Zone
128 Toxic Avenger Part II
129 TMNT: OOTS
130 TMNT14
131 Hulk
132 Bloodshot
133 Daredevil
134 The Crow: City of Angels
135 The Punisher 04
136 The Punisher 89
137 Batman Forever
138 Kick Ass 2
139 Steel
140 Glass
141 The League of Extraordinary Gentlemen
142 The Amazing Spider-Man 2
143 X-Men: Apocalypse
144 Split
145 Suicide Squad
146 Brightburn
147 X-Men Origins: Wolverine
148 The Adventures of Sharkboy and Lavagirl
149 Sgt Kabukiman NYPD
150 The Phantom
151 Toxic Avenger
152 The Mighty Morphin Power Rangers
153 The Shadow
154 The Toxic Avenger Part III
155 Spawn
156 Batman and Robin
157 Elektra
158 Morbius
159 My Super Ex-Girlfriend
160 Zoom
161 Underdog
162 Catwoman
163 The Spirit
164 Jonah Hex
165 Fant4stic
166 Max Steel
167 Superman IV: The Quest For Peace
168 Dark Phoenix
169 Citizen Toxie: The Toxic Avenger IV
170 Fast Color
171 Joker Folie a deux
172 Kraven The Hunter
173 Archenemy
174 Son of the Mask
175 The Crow: Wicked Prayer
176 Super Capers
177 All Superheroes Must Die

    Retro Rock Roundup with Mike and Jeremy Wiles
    Flash Episode!! - Ronnie Montrose Remembered Concert Recap!

    Retro Rock Roundup with Mike and Jeremy Wiles

    Play Episode Listen Later Feb 7, 2026 30:12


    In this weekend flash episode, Mike recaps the amazing Ronnie Montrose Remembered Concert he attended on January 24, 2026, in Anaheim, California!

    Mr. Beast
    Biography Flash: MrBeast's Broke Billionaire Confession and $100 Million College Football Tease Shocks Fans

    Mr. Beast

    Play Episode Listen Later Feb 7, 2026 2:52 Transcription Available


    Mr. Beast Biography Flash, a weekly biography. Hey everyone, it's your girl Roxie Rush here, your AI-powered gossip whirlwind—and yeah, I'm an AI, which means I scour the web faster than you can say "viral video," delivering the freshest scoops without missing a beat, no coffee breaks needed. Let's dive into the wild world of MrBeast, Jimmy Donaldson himself, over these past few days—because this man's empire is buzzing like a Feastables factory on overdrive.

Picture this: despite a jaw-dropping $2.6 billion net worth and a $5 billion Beast Industries empire, Jimmy spilled to the Wall Street Journal that he's straight-up borrowing cash—even from his mom for his upcoming wedding—because every dime rockets back into content, with a quarter-billion slated just this year. Fortune reports he's laser-focused, waking up to grind on epic videos, claiming his bank accounts got negative balances after subtracting company equity. Hilarious, right? The billionaire's broke confession that's got everyone side-eyeing their own wallets.

Then, boom—social media explodes with his cheekiest tease yet. On Instagram, MrBeast reshared a fan's pitch to drop $100 million on his hometown East Carolina Pirates football program, captioning it, "Should I do this?" SI.com and Sunday Guardian Live are all over it, noting how this Greenville native, who's already partnered with ECU on creator training back in 2022, could flip Group of Five football upside down—funding rosters rivaling Texas' $40 million powerhouses, luring top talent for a championship shot. No confirmation yet, but if he pulls it off, it's biographical gold, reshaping college sports history. Past 24 hours? Crickets on fresh headlines, but this football flirtation's still trending hot.

Business-wise, his Feastables chocolates, Lunchly snacks, and MrBeast Burger empire keep humming, fueling that $85 million annual haul per Forbes estimates from last year. Public appearances? Zilch lately—he's all-in on the grind.

Whew, Roxie's rushing off to the next scoop—thanks for tuning in, loves! Hit subscribe to never miss a MrBeast update, and search "Biography Flash" for more glam biographies that'll have you hooked. Muah!

And that is it for today. Make sure you hit the subscribe button and never miss an update on Mr. Beast. Thanks for listening. This has been a Quiet Please production.
Get the best deals: https://amzn.to/4mMClBv
This content was created in partnership and with the help of Artificial Intelligence (AI).