Podcasts about Chips

  • 9,239 PODCASTS
  • 17,553 EPISODES
  • 48m AVG DURATION
  • 3 DAILY NEW EPISODES
  • Feb 15, 2026 LATEST

POPULARITY (trend chart, 2019–2026)


    Latest podcast episodes about Chips

    Idle Red Hands
    The Weekly Podcast no.323 – Blades 68, Pathfinder Beginner Box & STLs, Avatar Card Game and Hasbro Loves AI

    Idle Red Hands

    Play Episode Listen Later Feb 15, 2026 55:41


    Blades ’68 is an official 450-page expansion for the TTRPG Blades in the Dark. This supplement advances the timeline 100 years to the “Swinging Sixties” in the city of Doskvol, an age of electroplasmic fusion and “Bluetime” spy games. The expansion introduces new playbooks, crews, and a revamped setting, alongside new rules for Harm, Resistance, Keys, Deadlocks, and an adapted Trouble Engine. The campaign has been overwhelmingly funded, with an expected delivery date of August 2026.

    Paizo, the publisher of Pathfinder, announced the new Pathfinder Beginner Box: Secrets of the Unlit Star, an all-inclusive entry point to Pathfinder Second Edition set for release on May 6th, 2026. The box set features a solo adventure, a 72-page Hero’s Handbook, an 88-page Game Master’s Guide, and updated rules for character options and adventure scenarios. Additionally, Paizo confirmed its commitment to the Pathfinder 2E Remaster with the release of Dark Archive Remastered and the announcement of Season of Ghosts Remastered. The company also partnered with One Page Rules to launch Paizo Printables, a new line of 3D printable wargaming miniature STLs compatible with the Age of Fantasy system, starting in Spring 2026.

    Maestro Media unveiled Avatar: Pandora's Power, a two-player asymmetric lane-battling card game based on the Avatar films. The game pits the resource-extractive RDA against the adaptive, land-rooted Na'vi factions, with the goal of reaching 30 points to decide Pandora’s fate. CEO Javon Frazier emphasized that the core experience is the asymmetry, with each faction playing a distinctly different game. Designed for ages 12 and up, the game plays in approximately 20-45 minutes and includes 170 Faction Cards, 18 Location Cards, and various tokens.

    Hasbro CEO Chris Cocks touted the company’s AI integration as a “clear success” during a recent earnings call, though he primarily referred to its deployment in non-creative, operational workflows such as financial planning, supply chains, and general productivity. Cocks stated that AI, in partnership with platforms like Google Gemini and OpenAI, is expected to free up over 1 million hours of lower-value work within the year. While he maintains a “human-centric creator-led approach,” Wizards of the Coast (WotC) has an explicit policy prohibiting its artists and writers from using generative AI for final D&D products, a stance that aligns with a user survey indicating over 60% of consumers would not buy D&D products made with AI.
#blades68 #pathfinder #paizo #hasbro
Blades ‘68 on Backerkit: https://www.backerkit.com/c/projects/evil-hat/blades-68
40-page Preview on DTRPG: https://www.drivethrurpg.com/ja/product/553040/blades-68-preview?affiliate_id=2081746
Empire of Bones on Kickstarter: https://www.kickstarter.com/projects/thepaintedwastelands/empire-of-bones
Preview: https://www.drivethrurpg.com/en/product/554430?affiliate_id=2081746
Call of Cthulhu Bundle: https://humblebundleinc.sjv.io/Xmz13G
Warmachine on MyMiniFactory: https://mmf.io/upturned
Mantic Companion App: https://companion.manticgames.com/ (use our referral code: MCTXEE)
Support us by shopping at Miniature Market (affiliate link): https://miniature-market.sjv.io/K0yj7n
Support us by shopping on DTRPG (affiliate link): https://www.drivethrurpg.com?affiliate_id=2081746
Matt’s DriveThruRPG publications: https://www.drivethrurpg.com/browse.php?author=Matthew%20Robinson
https://substack.com/@matthewrobinson3
Chris on social media: https://hyvemynd.itch.io/
Jeremy's links: http://www.abusecartoons.com/ and http://www.rcharvey.com
Support us on Patreon: https://www.patreon.com/upturnedtable
Give us a tip on our livestream: https://streamlabs.com/upturnedtabletop/tip
Donate or give us a tip on PayPal: https://www.paypal.com/ncp/payment/2754JZFW2QZU4
Intro song is “Chips” by KokoroNoMe: https://kokoronome.bandcamp.com/

    Harold's Old Time Radio
    33 Half Moon Street [SA] 650520 02 Chips For The Fish Monger_OTRRPG

    Harold's Old Time Radio

    Play Episode Listen Later Feb 15, 2026 24:57 Transcription Available




    9malls
    Rusty's Chips Variety Pack Snack Taste Test Review

    9malls

    Play Episode Listen Later Feb 14, 2026 10:43


    Watch the 9malls review of Rusty's Chips Variety Pack. Watch as I rank Rusty's Chips Sea Salt, Black Pepper, Chili Lime, Corn Chips, and Bacon Fat Fried Chips. Which one did I like best? Watch the hands-on taste test to find out. #cornchips #potatosnacks #potatochips #snacks #snackreview Find Rusty's Chips Variety Pack On Amazon: https://www.amazon.com/dp/B0D9K1FDFR?ref=t_ac_view_request_product_image&campaignId=amzn1.campaign.19UC6KBRFZV19&linkCode=tr1&tag=getpaid4surfcom&linkId=amzn1.campaign.19UC6KBRFZV19_1771044515530 Find As Seen On TV Products & Gadgets at the 9malls Store: https://www.amazon.com/shop/9malls Please support us on Patreon! https://www.patreon.com/9malls Disclaimer: I may also receive compensation if a visitor clicks through to 9malls, or makes a purchase through Amazon or any affiliate link. I test each product on site thoroughly and give high marks to only the best. In the above video I received a free product sample to test. We are independently owned and the opinions expressed here are our own.

    WSJ Minute Briefing
    U.S. Inks Trade Pact With Taiwan Tied to Chips and Security

    WSJ Minute Briefing

    Play Episode Listen Later Feb 13, 2026 2:44


    Plus: Goldman Sachs' top lawyer Kathryn Ruemmler steps down amid the Epstein files fallout. And Coinbase posts a big loss as Bitcoin's fall drags down the wider crypto market. Daniel Bach hosts. Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

    Remnant Finance
    E86 - "Everything They Sold You Is Fake" — He Quit His Job to Prove It | Van Man

    Remnant Finance

    Play Episode Listen Later Feb 13, 2026 68:13


    VanMan: https://vanman.shop/
    Book a call: https://remnantfinance.com/calendar
    Out Print the Fed with 1% per week: https://remnantfinance.com/options
    Email us at info@remnantfinance.com or visit https://remnantfinance.com for more information.

    FOLLOW REMNANT FINANCE
    YouTube: @RemnantFinance (https://www.youtube.com/@RemnantFinance)
    Facebook: @remnantfinance (https://www.facebook.com/profile.php?id=61560694316588)
    Twitter: @remnantfinance (https://x.com/remnantfinance)
    TikTok: @RemnantFinance
    Don't forget to hit LIKE and SUBSCRIBE.

    If you've been in the health-conscious space online, you've seen Van Man products everywhere: tallow balm, eggshell tooth powder, fluoride-free mouthwash. But most people don't know the story behind the brand. In this episode, Jeremy Ogorek sits down with Hans to talk about losing everything in a New York tech startup, moving back in with his mom, buying a van, and accidentally stumbling into a health brand that's now replacing every product in your bathroom, and soon, your pantry too. We also get into the "everything is a lie" awakening, why fluoride was his first red flag, what's actually in the products you put on your skin, and how he's now selling $6 grass-fed smash burgers out of a restaurant in Pacific Beach that keeps selling out. If you've been rethinking what you put on and in your body, this one's for you.

    Chapters:
    00:00 – Opening segment
    01:25 – Van's background: CPA, quitting his first job, joining a NYC tech startup
    05:15 – The startup collapse: $8M raised, celebrity investors, and losing everything
    08:55 – Fluoride as the first red flag and the origin of the eggshell tooth powder
    14:05 – How the tallow balm was born and why it went viral
    19:00 – "Your skin is a mouth": the philosophy behind Van Man products
    21:25 – Product lineup: deodorant, sunscreen, bug balm, soap, shampoo, eye cream
    30:30 – The Van Man restaurant in Pacific Beach: $6 grass-fed burgers
    36:00 – The business model: restaurants, gas stations, and movie theaters as product "stunts"
    43:25 – Other clean brands: Masa Chips, Orum, Rosie's Chips
    53:00 – Vaccines, home birth, and the broader health awakening
    57:00 – What's next: tallow popcorn, clean Snickers bars, cough drops, and an RFK collab
    1:04:15 – Closing segment

    Key Takeaways:
    Tallow isn't a trend; it's a return to what worked for thousands of years. People are reporting cleared rosacea, vanishing acne, and healed scars from a balm made of five ingredients you could eat. Meanwhile, the dermatologist-recommended steroid creams weren't solving the same problems in a decade.
    Your skin is your largest organ, and it absorbs what you put on it. If you wouldn't eat the ingredients in your lotion or deodorant, ask yourself why you're comfortable rubbing them into your skin, especially in high-absorption areas like your armpits.
    Fluoride was the first domino. It's the only non-opt-in medication: it's in your tap water, your toothpaste, and it's free. Once you ask why they care so much about your cavities, the rest of the questioning begins.
    The restaurant isn't really about the restaurant. Van Man Burgers in Pacific Beach sells $6 grass-fed smash burgers at near break-even. The real play is getting clean products in front of new customers. Every "stunt" (restaurant, gas station, movie theater) is a storefront for the mission.
    You don't need permission to start. Van went from credit card debt and a van to building a brand, a restaurant, and a product line, all by following his gut, tweeting his thoughts, and making products he wanted to use himself. The XP comes from doing, not reading.

    Liberty Nation with Tim Donner
    Party Switches and Cheap Chips

    Liberty Nation with Tim Donner

    Play Episode Listen Later Feb 13, 2026 39:51 Transcription Available


    Seg 1 – A Party Switch on Immigration Narratives
    Seg 2 – The Subsidy Trap
    Seg 3 – TrumpRx – Checking the Price Point
    Seg 4 – No Bail for Illegal Immigrants

    Cutting The Distance with Remi Warren
    Ep. 27: High Ground Beef Chips - Always Take the High Ground

    Cutting The Distance with Remi Warren

    Play Episode Listen Later Feb 12, 2026 49:30 Transcription Available


    On this episode of In Pursuit with Rich Froning, we’ve got the guys from High Ground Beef Chips in the studio, Dylan Larson and Josh McCandless. It all started with a problem: finding real, lightweight nutrition on the move, between deployments and long days, with no patience for filler ingredients. Dylan breaks down why they built High Ground as a meat chip: simple label, big protein, whole-food fuel that actually works in the backcountry. From there, the conversation goes into why hunting and training with the boys can be a reset when life gets loud. We also get the meaning behind the name High Ground and their bigger goal: supporting Gold Star families and telling the stories that shouldn’t get forgotten.
    Connect with Rich Froning and MeatEater on Instagram, Facebook, Twitter, YouTube, and YouTube Clips.
    Subscribe to The MeatEater Podcast Network on YouTube.
    See omnystudio.com/listener for privacy information.

    BSD Now
    650: Korn Chips

    BSD Now

    Play Episode Listen Later Feb 12, 2026 57:21


    AT&T's $2,000 shell, ZFS scrubs and data integrity, FFS backups, a FreeBSD home NAS, and more.
    NOTES
    This episode of BSDNow is brought to you by Tarsnap and the BSDNow Patreon.
    Headlines
    One too many words on AT&T's $2,000 Korn shell and other Usenet topics
    Understanding ZFS Scrubs and Data Integrity
    News Roundup
    FFS Backup
    FreeBSD: Home NAS, part 1 – configuring ZFS mirror (RAID1), plus 8 more parts!
    Beastie Bits
    The BSD Proposal
    UNIX Magic Poster
    Haiku OS Pulls In Updated Drivers From FreeBSD 15
    FreeBSD 15.0 VNET Jails
    Call for NetBSD testing
    Tarsnap
    This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
    Feedback/Questions
    Gary - Links
    Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
    Join us and other BSD fans in our BSD Now Telegram channel

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code. Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

    We discuss:

    * Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
    * The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
    * Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
    * Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
    * Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
    * Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
    * TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
    * Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
    * Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
    * Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
    * Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
    * Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
    * Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

    Show Notes:

    * Gemma 3 Paper
    * Gemma 3
    * Gemini 2.5 Report
    * Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
    * Latency Numbers Every Programmer Should Know by Jeff Dean
    * The Jeff Dean Facts
    * Jeff Dean Google Bio
    * Jeff Dean on “Important AI Trends” @Stanford AI Club
    * Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

    Jeff Dean

    * LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
    * X: https://x.com/jeffdean

    Google

    * https://google.com
    * https://deepmind.google

    Full Video Episode

    Timestamps

    00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
    00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
    00:01:31 — Frontier models vs Flash models + role of distillation
    00:03:52 — History of distillation and its original motivation
    00:05:09 — Distillation's role in modern model scaling
    00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
    00:07:46 — Flash model economics & wide deployment
    00:08:10 — Latency importance for complex tasks
    00:09:19 — Saturation of some tasks and future frontier tasks
    00:11:26 — On benchmarks, public vs internal
    00:12:53 — Example long-context benchmarks & limitations
    00:15:01 — Long-context goals: attending to trillions of tokens
    00:16:26 — Realistic use cases beyond pure language
    00:18:04 — Multimodal reasoning and non-text modalities
    00:19:05 — Importance of vision & motion modalities
    00:20:11 — Video understanding example (extracting structured info)
    00:20:47 — Search ranking analogy for LLM retrieval
    00:23:08 — LLM representations vs keyword search
    00:24:06 — Early Google search evolution & in-memory index
    00:26:47 — Design principles for scalable systems
    00:28:55 — Real-time index updates & recrawl strategies
    00:30:06 — Classic “Latency numbers every programmer should know”
    00:32:09 — Cost of memory vs compute and energy emphasis
    00:34:33 — TPUs & hardware trade-offs for serving models
    00:35:57 — TPU design decisions & co-design with ML
    00:38:06 — Adapting model architecture to hardware
    00:39:50 — Alternatives: energy-based models, speculative decoding
    00:42:21 — Open research directions: complex workflows, RL
    00:44:56 — Non-verifiable RL domains & model evaluation
    00:46:13 — Transition away from symbolic systems toward unified LLMs
    00:47:59 — Unified models vs specialized ones
    00:50:38 — Knowledge vs reasoning & retrieval + reasoning
    00:52:24 — Vertical model specialization & modules
    00:55:21 — Token count considerations for vertical domains
    00:56:09 — Low resource languages & contextual learning
    00:59:22 — Origins: Dean's early neural network work
    01:10:07 — AI for coding & human–model interaction styles
    01:15:52 — Importance of crisp specification for coding agents
    01:19:23 — Prediction: personalized models & state retrieval
    01:22:36 — Token-per-second targets (10k+) and reasoning throughput
    01:23:20 — Episode conclusion and thanks

    Transcript

    Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
    Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.
    Jeff Dean: Thanks for having me.
    Shawn Wang: It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto frontier.
    Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.
    Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together like this.
    Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large-model capabilities into much smaller, lighter-weight models that are, you know, much more cost-effective and lower latency, but still, you know, quite capable for their size. Yeah.
    Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially, when you worked on the TPU, you were thinking about, you know, if everybody that used Google were to use the voice model for, like, three minutes a day, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually deploy it if we build it?
    Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader uses. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower-latency use cases; people can use them for agentic coding much more readily. And then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
    And also, you know, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest-size model. Yeah.
    Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with that solution in 2014.
    Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.
    Alessio Fanelli [00:03:30]: A long time ago. But, like, I'm curious how you think about the cycle of these ideas, even, like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? You worked on so many ideas that ended up being influential, but, like, in the moment, they might not feel that way necessarily. Yeah.
    Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. But if you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: can we train all these independent sort of expert models and then squish them into something that actually fits in a form factor you can actually serve? And that's, you know, not that different from what we're doing today. Often today, instead of having an ensemble of 50 models, we're having a much larger-scale model that we then distill into a much smaller-scale model.
    Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution. You can spike models, but it might be lossy in other areas, so it's kind of an uneven technique; but you can probably distill it back. I think the general dream is to be able to advance capabilities without regressing on anything else. And that whole capability merging without loss, I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
    Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model, and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model, behavior that you wouldn't otherwise get with just the hard labels.
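    To make the logits-as-soft-supervision idea concrete, here is a minimal sketch of the distillation loss in the spirit of the 2014 Hinton, Vinyals, and Dean formulation, written in PyTorch-style Python. The temperature, mixing weight, and function names are illustrative choices, not anything specified in the conversation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft teacher targets with the ordinary hard-label loss."""
    # Soften both distributions; the teacher's full logits carry far more
    # signal per example than a one-hot label does.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradient magnitude stays comparable

    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

    Because every example carries a full teacher distribution rather than a single label, the small model can make many passes over the same data and still extract new signal, which is the property Jeff points to above.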
    And so, you know, what I think we've observed is that you can get very close to your largest model's performance with distillation approaches. And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as, or even substantially better than, the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.
    Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that, like, the mother lode?
    Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think, you know, it's an important set of capabilities to have. And also, inference-time scaling can be a useful thing to improve the capabilities of the model.
    Shawn Wang [00:07:35]: And yeah, yeah, cool. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.
    Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.
    Shawn Wang [00:07:50]: No, I mean, economics-wise, because Flash is so economical, you can use it for everything. Like, it's in Gmail now. It's in YouTube. It's in everything.
    Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and AI Overviews.
    Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Yeah, I didn't even think about that.
    Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is that not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that involve generating many more tokens before they actually finish what you asked them to do. Because you're going to ask now not just “write me a for loop” but “write me a whole software package to do X or Y or Z.” And so having low-latency systems that can do that seems really important, and Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs: the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context kinds of attention operations, or having sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make models servable at scale.
    Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about the capability in terms of tasks: the Pro model today saturates some set of tasks, so next generation, that same task will be saturated at the Flash price point.
    And I think, for most of the things that people use models for, at some point the Flash model two generations out will be able to do basically everything. So how do you make it economical to, like, keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
    Jeff Dean [00:09:59]: I mean, I think that's true if the distribution of what people are asking the models to do is stationary, right? But I think what often happens is that as the models become more capable, people ask them to do more. So, I mean, I think this happens in my own usage. I used to try our models a year ago for some sort of coding task, and they were okay at some simpler things but wouldn't work very well for more complicated things. Since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask for much more complicated things. And I think that's true not just of coding but of, you know, now, “can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment,” or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier ahead of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to make the next generation even better?
    Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards? Yeah.
    Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility, where they're introduced and maybe they're quite hard for current models. You know, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability, for whatever it is the benchmark is trying to assess, and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's also the issue of leakage of public data, or very related kinds of data, into your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know the data wasn't represented in the training set at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing, you know, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on, more specialized for this particular kind of task?
    Do we need, um, you know, a bunch of architectural improvements, or some sort of model capability improvements? You know, what would help make that better?
    Shawn Wang [00:12:53]: Is there such an example, where a benchmark inspired an architectural improvement? I'm just jumping on that, because you just mentioned it.
    Jeff Dean [00:13:02]: Uh, I mean, I think some of the long-context capability of the Gemini models that came, I guess, first in 1.5 really was about looking at, okay, we want to have, um, you know...
    Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts, and I was like, how did everyone crack this at the same time? Right. Yeah.
    Jeff Dean [00:13:23]: I mean, I think, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something, and a lot of models don't actually have much larger than 128K contexts these days, while we're trying to push the frontier of 1 million or 2 million of context. Which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, in the context, and then actually being able to make use of that, is useful; the opportunities there are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic “take all this content and produce this kind of answer from a long context” benchmarks that better assess what it is people really want to do with long context. Which is not just, you know, “can you tell me the product number for this particular thing?”
    Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because I think the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where, yeah, you're going to win short term; longer term, I don't know if that's going to scale, and you might have to undo it.
    Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube, and the sort of deeper representations that we can find, not just for a single video but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission.
    So, like, your emails, your photos, your docs, your plane tickets. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens, in a meaningful way?
    Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.
    Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
    Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else is extremely information dense. Yeah.
    Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, to people that sometimes means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it sort of hints to the model that this is a thing.
    Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had the DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality?
    Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, video as opposed to static images, because there's a reason evolution has evolved eyes something like 23 independent ways: it's such a useful capability for sensing the world around you. And that's really what we want these models to be able to do: interpret the things we're seeing, or the things we're paying attention to, and then help us use that information to do things. Yeah.
    Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out: I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
    Jeff Dean [00:19:15]: Yeah. I mean, I think people are kind of not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
    It was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has, like, Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals and things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, when they happened, and a short description? And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of, like turning a video into a SQL-like table.
    Alessio Fanelli [00:20:11]: Has there been any discussion inside Google of, you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search and span, versus the more human one? Yeah.
    Jeff Dean [00:20:47]: I mean, I think even in pre-language-model work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods; you know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with maybe the 30 million interesting tokens. And then how do you go from that to the 117 documents I really should be paying attention to in order to carry out the task the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models; then you have some system that helps you narrow down from the 30,000 to the 117, with maybe a little more sophisticated model or set of models; and then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion: you are searching the internet, but you're finding a very small subset of things that are relevant.
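    A minimal sketch of the multi-stage funnel Jeff describes, with the stage sizes taken from his numbers (30,000 candidates, 117 finalists). The scorer and model arguments are hypothetical callables standing in for real ranking systems, not actual APIs.

```python
def top_k(items, score, k):
    """Keep the k highest-scoring items (a stand-in for a real index)."""
    return sorted(items, key=score, reverse=True)[:k]

def answer_with_funnel(query, corpus, cheap_score, mid_score, frontier_model):
    # Stage 1: massively parallel, very lightweight filtering over the
    # whole corpus ("trillions of tokens" -> ~30,000 candidates).
    candidates = top_k(corpus, lambda doc: cheap_score(query, doc), k=30_000)

    # Stage 2: a somewhat more capable ranker narrows that to the
    # ~117 documents actually worth expensive attention.
    finalists = top_k(candidates, lambda doc: mid_score(query, doc), k=117)

    # Stage 3: only the finalists ever reach the most capable model,
    # giving the illusion of attending to the entire corpus.
    return frontier_model(query, finalists)
```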
    Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that, you know, BERT went basically immediately inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you guys do; that's obviously the most important set of numbers to Google. Yeah.
    Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.
    Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high-traffic systems. Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me at YouTube's size.
    Jeff Dean [00:23:50]: And then most recently Grok as well, for xAI, which is, like, yeah. I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
    Shawn Wang [00:24:06]: So do you have, like, a history of what the progression was?
    Jeff Dean [00:24:09]: Oh yeah. I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, sort of, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or '05, and that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had, you know, a sharded system, where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make it 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and, you know, 20 copies of each shard, we now had 1,200 machines with disks, and one copy of that index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms, like restaurant and restaurants and cafe and, uh, bistro, and all these things.
    And you can suddenly start really getting at the meaning of the word, as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
    Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling, tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this? Yeah.
    Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what the design parameters are that are going to be most important in designing it. So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple; will that system work well? And I think a good design principle is that you're going to want to design a system so that the most important characteristics could scale by, like, factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines actually can hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.
    Shawn Wang [00:28:55]: Yeah.
    Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in, like, sub one minute. Okay.
    Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?
    Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, you know, if you've got last month's news index, it's not actually that useful.
    Shawn Wang [00:29:11]: News is a special beast. Was there any, like, could you have split it onto a separate system?
    Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated too.
    Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify whether the page is, you have to decide which pages should be updated and at what frequency.
    Jeff Dean [00:29:30]: Oh yeah. There's a whole system behind the scenes that's trying to decide update rates and the importance of pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood that they change might be low, but the value of having them updated is high.
    Shawn Wang [00:29:50]: Yeah, yeah. Well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: latency numbers every programmer should know. Was there just a general story behind that? Did you just write it down?
    Jeff Dean [00:30:06]: I mean, this has, like, sort of eight or ten different kinds of metrics that are like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something?
    Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?
    Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, what could I do? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? And you can actually do thought experiments in, you know, 30 seconds or a minute with the basic numbers at your fingertips. And then, as you build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...
    Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your numbers...
    Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.
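    For reference, here are a few of the well-known values from Jeff's “Latency Numbers Every Programmer Should Know” list, wired into the kind of back-of-the-envelope thumbnail estimate he sketches above. The numbers are the classic circa-early-2010s figures, order-of-magnitude only, and the scenario (30 images of roughly 256 KB each) is invented for illustration.

```python
# Classic latency numbers (nanoseconds), order-of-magnitude, circa early 2010s.
L1_CACHE_REF       = 0.5
BRANCH_MISPREDICT  = 5
MAIN_MEMORY_REF    = 100
DATACENTER_RTT     = 500_000
DISK_SEEK          = 10_000_000
READ_1MB_FROM_DISK = 20_000_000
CA_TO_NL_TO_CA     = 150_000_000

# Back-of-envelope: thumbnail a results page of 30 images (~256 KB each)
# read from disk, done sequentially vs. fanned out across 30 disks.
images, size_mb = 30, 0.25
per_image_ns = DISK_SEEK + size_mb * READ_1MB_FROM_DISK
print(f"sequential: {images * per_image_ns / 1e6:.0f} ms")  # ~450 ms
print(f"parallel:   {per_image_ns / 1e6:.0f} ms")           # ~15 ms
```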
    Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Depending on your precision, I think it's like sub one picojoule.
    Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.
    Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be, you know, a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of that thing you moved many, many times. That's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
    Shawn Wang [00:33:40]: Yeah. Right.
    Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.
    Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
    Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.
    Shawn Wang [00:33:56]: The best latency.
    Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
    Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? You know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? To serve at your scale, you probably sort of saw that coming. What hardware innovations or insights were formed because of what you were seeing there?
    Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, sort of regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. So you're now sort of striping your smallish-scale model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
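    The arithmetic behind that batching argument, using only the two numbers Jeff gives (roughly 1 picojoule for a low-precision multiply, roughly 1,000 picojoules to move a parameter across the chip). This is a toy model for intuition, not measured hardware data.

```python
MULTIPLY_PJ    = 1.0     # energy for one multiply, per Jeff's ~1 pJ figure
WEIGHT_MOVE_PJ = 1000.0  # moving that weight from far SRAM to the multiplier

def energy_per_useful_multiply(batch_size):
    # The weight is moved once and then reused `batch_size` times, so the
    # movement cost is amortized across the batch dimension.
    return MULTIPLY_PJ + WEIGHT_MOVE_PJ / batch_size

for b in (1, 8, 64, 256):
    print(f"batch={b:4d}: {energy_per_useful_multiply(b):8.1f} pJ per multiply")
# batch=1 pays ~1001 pJ per useful multiply; batch=256 pays ~4.9 pJ.
```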
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC, which is the most extreme version. How much of it is worth doing in hardware when things change so quickly? What's the internal discussion? Jeff Dean [00:35:57]: I mean, we have a lot of interaction between the TPU chip design and architecture team and the higher-level modeling experts, because you really want to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. As a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to serve out a reasonable lifetime, which takes you another three, four, five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field. And so having people with interesting ML research ideas, things we think will start to work or be more important in that timeframe, really enables us to get interesting hardware features put into, say, TPU N plus two, where TPU N is what we have today. Shawn Wang [00:37:10]: Oh, the cycle time is plus two. Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N plus one, but bigger changes require the chip design to be earlier in its lifetime. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us that this is actually the way we want to go. Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way, because it doesn't quite fit? Jeff Dean [00:38:06]: Yeah. I mean, you definitely have cases where you adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So I think it goes both ways. Sometimes you can take advantage of, say, lower precision that's coming in a future generation; you might train at that lower precision even if the current generation doesn't quite do it. Shawn Wang [00:38:40]: How low can we go in precision? People are saying ternary. Jeff Dean [00:38:43]: Yeah, I'm a big fan of very low precision, because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that; it saves you a tremendous amount. I think people have gotten a lot of mileage out of very low-bit-precision representations, but then having scaling factors that apply to a whole bunch of those weights. Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that.
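A minimal sketch of the "low precision plus scaling factors" idea, in Python with NumPy: quantize each block of weights to a few bits and keep one float scale per block. This is generic block-wise quantization for illustration, not the specific scheme used in any Google model.

import numpy as np

def quantize_blocks(w, block=32, bits=4):
    qmax = 2 ** (bits - 1) - 1                      # 7 for symmetric int4
    w = w.reshape(-1, block)                        # one scale per block of weights
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_blocks(q, scales):
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, scales = quantize_blocks(w)
print("mean abs error:", np.abs(w - dequantize_blocks(q, scales)).mean())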
While we're on this topic: the whole concept of precision is weird when we're sampling. At the end of this, we're going to have all these chips that do very good math, and then we just throw a random number generator at the start. So there's a movement towards energy-based models and processors. Obviously you've thought about it; I'm curious what your commentary is. Jeff Dean [00:39:50]: Yeah. I mean, there's a bunch of interesting trends. Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get the equivalent of a very small... Shawn Wang [00:40:06]: Draft. Jeff Dean [00:40:07]: ...batch factor: you predict eight tokens out, which lets you increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a five-x improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models) and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency. Shawn Wang [00:41:03]: Yeah. I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Jeff Dean [00:41:23]: Yeah. I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in energy efficiency, with much better and specialized hardware for the models we care about. Shawn Wang [00:42:05]: Yeah. Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or things you cannot pursue at Google that you'd like to see researchers take a stab at? I guess you have a lot of researchers. Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? That's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.
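Returning to the speculative decoding arithmetic a few turns up, as a sketch: one verification pass of the big model moves its weights once but yields several accepted tokens. The numbers are illustrative, matching the "predict eight, accept five or six" example in the conversation.

# Amortization from speculative decoding (illustrative numbers).
PJ_WEIGHT_MOVE_PER_PASS = 1000.0  # energy to stream a weight once, as above

def pj_per_token(avg_accepted_per_pass: float) -> float:
    # One verification pass amortizes its weight movement over however many
    # draft tokens that pass actually accepted.
    return PJ_WEIGHT_MOVE_PER_PASS / avg_accepted_per_pass

print(pj_per_token(1.0))  # plain autoregressive decoding: 1000.0 pJ per token
print(pj_per_token(5.5))  # draft 8, accept ~5.5: ~181.8 pJ per token, ~5.5x better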
Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they'd already proved you can do it, with deep research. And you kind of have it with AI Mode; in a way, it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval of JSON, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem? Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models evaluate the results of what a first model did, maybe even the retrieving. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which are the 50 most relevant? Those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that, every year. Exactly as with this RLVR thing, where everyone's asking how we do the next stage, the non-verifiable stuff, and everyone's like: I don't know. LLM judge. Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is that there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and that's why it's super interesting. If you think about two years ago, we were struggling with GSM-8K problems: Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now; you're doing IMO and Erdős problems in pure language. That is a really amazing jump in capabilities in a year and a half or so. For other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better. Shawn Wang [00:46:13]: Yeah. Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. Shawn Wang [00:46:20]: That would be AGI, as far as content creators go. Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but many people do. Shawn Wang [00:46:27]: People do judge books by their covers, as it turns out.
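A minimal sketch of the critic pattern Jeff describes above, where the same model, prompted differently, rates each retrieved item so only the most relevant survive. The llm_score function is a hypothetical stand-in for a real model call.

# Rerank retrieved documents with a model acting as its own critic.
def llm_score(query: str, document: str) -> float:
    # Hypothetical: prompt a model as a critic, e.g. "Rate from 0 to 10 how
    # relevant this document is to the query", and parse the number returned.
    raise NotImplementedError("stand-in for a real model call")

def rerank(query: str, documents: list[str], keep: int = 50) -> list[str]:
    # Score every candidate and keep the top few (e.g. 50 of 2,000 retrieved).
    ranked = sorted(documents, key=lambda d: llm_score(query, d), reverse=True)
    return ranked[:keep]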
Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM. Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation, neural-net-like in some way, of lots of different neurons, and activation patterns firing when we see certain things. And that enables us to reason and plan, to do chains of thought and roll them back: that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one. In a lot of ways, we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about them. Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago. Jeff Dean [00:48:06]: I mean, I do think that IMO progression (translating to Lean and using Lean, plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little more inference budget) is actually quite good, because it shows that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, when people would train separate models for each different problem. I want to recognize street signs, so I train a street-sign recognition model; I want to decode speech, so I have a speech recognition model. I think the era of unified models that do everything is really upon us now, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better. Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know where the IMO competition was held, I don't know the rules of it, I just trained the models. It's kind of interesting that people with this universal machine learning skill set, given data and enough compute, can tackle almost any task. Which is the bitter lesson, I guess. Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases. Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And God knows how big Gemini Pro is; one to ten trillion parameters, we don't know. But take the Gemma models: a lot of people want open, local models, and those have some knowledge that is not necessary. They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning? Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, because having the model devote precious parameter space to remembering obscure facts that could be looked up is not the best use of that parameter space. You might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right? Jeff Dean [00:52:01]: We're not going to train Gemini on my email, probably. We'd rather have a single model that can retrieve from my email as a tool, have the model reason about it, retrieve from my photos or whatever, and then make use of that, with multiple stages of interaction. That makes sense. Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps? Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Or take robotics: we're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. And maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities that may suffer because we didn't expose the model to as much data there, while it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related thing, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things. Shawn Wang [00:54:36]: Installable knowledge. Jeff Dean [00:54:37]: Right. Shawn Wang [00:54:38]: Just download it as a package. Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion or a trillion tokens of health data. Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Alessio Fanelli [00:54:56]: I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? If I need a trillion healthcare tokens, they're probably not out there. Jeff Dean [00:55:21]: Well, I mean, healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, and probably better than a general model trained on public data. Shawn Wang [00:55:58]: I believe, by the way (this is somewhat related to the language conversation), one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text. Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. But you can't put your whole data set in the context, right? Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in the world in those languages, and we're probably not putting all of it into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models. Shawn Wang [00:56:49]: Yeah.

    Investing Experts
    Tech tug of war: fear vs. greed

    Investing Experts

    Play Episode Listen Later Feb 12, 2026 35:43


Tech Contrarians take on tech's tug of war between fear and greed (0:45) Software stock sell-off (2:45) Valuation concerns on Nvidia and others (9:10) Entry point alert example for Credo (17:00) What's going to happen with China? (18:30) Investing timelines (24:30) Show Notes: AI Spending Surge, Contrarian Take On Tech Stocks; Nvidia And The H200 Landscape; Broadcom's Strategic Positioning. Read our transcripts. For full access to analyst ratings, stock and ETF quant scores, and dividend grades, subscribe to Seeking Alpha Premium at seekingalpha.com/subscriptions

    TD Ameritrade Network
    Follow the Money: How Big Tech CapEx Benefits Chips, Networking & MU

    TD Ameritrade Network

    Play Episode Listen Later Feb 12, 2026 5:46


    "What we see is clear," says Cory Johnson, Big Tech will spend big on CapEx. He urges investors to follow the money, pointing to chipmakers, networking, and data centers as some of the biggest beneficiaries. Cory adds that risk comes with hyperscalers like Microsoft (MSFT), Alphabet (GOOGL), and Amazon (AMZN) not knowing the scope of AI's future impact. One corner of the tech trade he sees as a continuing winner: memory chips like Micron (MU). ======== Schwab Network ========Empowering every investor and trader, every market day.Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – / schwabnetwork Follow us on Facebook – / schwabnetwork Follow us on LinkedIn - / schwab-network About Schwab Network - https://schwabnetwork.com/about

    Innovate or cry
    #35 Digitale Souveränität - Wie kritisch steht es um unsere wirtschaftliche Autonomie?

    Innovate or cry

    Play Episode Listen Later Feb 12, 2026 48:00


Digital sovereignty: between power shifts and a reality check. Digital sovereignty was long a political buzzword. Now it is a strategic imperative. In this episode we discuss why the topic is gaining massive relevance right now, and why it is not an IT project but a question of power. What this episode covers: we break the topic into three levels. 1. The individual: dependence on platforms, cloud services, and AI tools, and why opting out is easy in theory but extremely hard in practice (network effects). 2. Companies: cloud-first, hyperscalers, AI integration. What happens when infrastructure is no longer neutral? Why digital sovereignty must become part of AI governance. 3. Europe as an economic area: regulation exists (the AI Act, etc.), but regulation is no substitute for technological substance. Where do we really stand on cloud, chips, energy, and foundation models? Chapters: 00:00 Introduction to digital sovereignty 02:39 Why the topic is escalating right now 05:50 Regulation vs. bureaucratization in Europe 08:40 Network effects, monopolization, and dependence 11:32 European alternatives: wishful thinking or a realistic option? 14:13 Economic sovereignty as a strategic necessity 17:27 Cloud infrastructure as a geopolitical lever 20:21 Concrete implications for companies 23:09 Future prospects for European AI models Hosted on Acast. See acast.com/privacy for more information.

    Economia dia a dia
    Bruxelas inaugura a NanoIC: como funciona esta nova “fábrica experimental” de chips?

    Economia dia a dia

    Play Episode Listen Later Feb 12, 2026 4:13


The European Union has inaugurated an infrastructure in Belgium where companies can test how to manufacture chips before investing at industrial scale. It is called NanoIC, and it is Europe's largest semiconductor pilot line. But how will this new "experimental factory" work, and what does it change, in practice, for European industry? See omnystudio.com/listener for privacy information.

    The CyberWire
    When Windows breaks and chips crack.

    The CyberWire

    Play Episode Listen Later Feb 11, 2026 32:40


    Patch Tuesday. Preliminary findings from the European Commission come down on TikTok. Switzerland's military cancels its contract with Palantir. Social engineering leads to payroll fraud. Google hands over extensive personal data on a British student activist. Researchers unearth a global espionage operation called “The Shadow Campaigns.” Notepad's newest features could lead to remote code execution. Our guest is Hazel Cerra, Resident Agent in Charge of the Atlantic City Office for the United States Secret Service. Ring says it's all about dogs, but critics hear the whistle. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today, we're joined by Hazel Cerra, Resident Agent in Charge of the Atlantic City Office for the United States Secret Service, as she discusses the evolution of the Secret Service's investigative mission—from its early focus on financial crimes such as counterfeit currency and credit card fraud to the growing challenges posed by cryptocurrency-related crime. Selected Reading Microsoft February 2026 Patch Tuesday Fixes 58 Vulnerabilities, Six actively Exploited Flaws (Beyond Machines) Adobe Releases February 2026 Patches for Multiple Products (Beyond Machines) ICS Patch Tuesday: Vulnerabilities Addressed by Siemens, Schneider, Aveva, Phoenix Contact (SecurityWeek) Chipmaker Patch Tuesday: Over 80 Vulnerabilities Addressed by Intel and AMD (SecurityWeek) Commission preliminarily finds TikTok's addictive design in breach of the Digital Services Act (European Commission) Palantir's Swiss Exit Highlights Global Data Sovereignty Challenge (NewsCase) Payroll pirates conned the help desk, stole employee's pay (The Register) Google Fulfilled ICE Subpoena Demanding Student Journalist's Bank and Credit Card Numbers (The Intercept) The Shadow Campaigns: Uncovering Global Espionage (Palo Alto Networks Unit 42) Notepad's new Markdown powers served with a side of RCE (The Register) With Ring, American Consumers Built a Surveillance Dragnet (404 Media) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

    Get Up in the Cool
    Episode 494: Sparrow Smith (Writing in the Old Time Idiom)

    Get Up in the Cool

    Play Episode Listen Later Feb 11, 2026 64:42


Welcome to Get Up in the Cool: Old Time Music with Cameron DeWhitt and Friends. This week's friend is Sparrow Smith! We recorded this on Monday at my home in Portland, Oregon. Tunes in this episode: Katie Morey (0:46) Jewel of the Blue Ridge (Sparrow Smith original) (10:32) Undone in Sorrow (31:47) Young Sally (Sparrow Smith original) (42:34) Chips and Sauce (Ira Bernstein original) (1:00:42) BONUS TRACK: Tend that Flame (Sparrow Smith original) Follow Sparrow Smith on Instagram Buy her newest album Carolina Mountains on Bandcamp: https://sparrowsmith.bandcamp.com Follow Resonant Rogues on Instagram Visit Resonant Rogues' website Support Get Up in the Cool on Patreon Send Tax Deductible Donations to Get Up in the Cool through Fracture Atlas Sign up at Pitchfork Banjo for my clawhammer instructional series! Schedule a banjo lesson with Cameron Visit Tall Poppy String Band's website and follow us on Instagram Follow Sweeten the Third on Instagram

    Marketplace Tech
    TPU? GPU? What's the difference between these two chips used for AI?

    Marketplace Tech

    Play Episode Listen Later Feb 10, 2026 6:13


Graphics processing units (GPUs) have become the most important commodity in the AI boom — and have made Nvidia a multi-trillion dollar company. But the tensor processing unit (TPU) could present itself as competition for the GPU. TPUs are developed by Google specifically for AI workloads. And so far, Anthropic, OpenAI and Meta have reportedly made deals for Google's TPUs. Christopher Miller, historian at Tufts University and author of "Chip War: The Fight for the World's Most Critical Technology," explains what this could mean.

    Marketplace All-in-One
    TPU? GPU? What's the difference between these two chips used for AI?

    Marketplace All-in-One

    Play Episode Listen Later Feb 10, 2026 6:13


Graphics processing units (GPUs) have become the most important commodity in the AI boom — and have made Nvidia a multi-trillion dollar company. But the tensor processing unit (TPU) could present itself as competition for the GPU. TPUs are developed by Google specifically for AI workloads. And so far, Anthropic, OpenAI and Meta have reportedly made deals for Google's TPUs. Christopher Miller, historian at Tufts University and author of "Chip War: The Fight for the World's Most Critical Technology," explains what this could mean.

    WALL STREET COLADA
    Ventas minoristas, IA, chips y tech en foco. Todo lo que mueve el mercado HOY

    WALL STREET COLADA

    Play Episode Listen Later Feb 10, 2026 3:40


    Zen Trading Magazine
    ZTM Ed.105 Negocios Global chips, una industria que arrasa

    Zen Trading Magazine

    Play Episode Listen Later Feb 9, 2026 4:01


Global chip stocks, such as STX, have been the best performers since 2026 began. Another company in the sector drawing attention is ASML. In this semiconductor rally driven by the big manufacturers, the South Korean firms SK Hynix and Samsung Electronics are also protagonists. What explains the industry's positive performance? Here we break it down so you can make well-informed investment decisions.

    @HPCpodcast with Shahin Khan and Doug Black

- Sovereign AI: what is it, and does anyone have it? - Bullish on Eviden: Europe's top system company restores old name - Intel to build server GPUs of its own - MIT Technology Review AI Predictions The post HPC News Bytes – 20260209 appeared first on OrionX.net.

    Golf in Leicht - Der Podcast rund um dein Golfspiel
    #368: Die perfekte Saisonvorbereitung - So sieht sie aus

    Golf in Leicht - Der Podcast rund um dein Golfspiel

    Play Episode Listen Later Feb 8, 2026 24:32


The perfect season preparation: how to start stable, focused, and with a plan. Many golfers start the new season hoping that things will "simply go better." But hope is not a training plan. In this episode, Fabian explains what a truly well-thought-out season preparation looks like, with clear priorities, a sensible structure, and exactly the right content at the right time. You'll learn: why you should start with the long game first, and why stability with the driver matters more at the start of the season than 200 chips; how to sensibly combine the range, the course, and practice at home so you can train flexibly, whatever the weather; why mental strength isn't first built in tournaments, and how to deliberately develop routines, visualization, and focus now; how strategic course management eliminates sources of error, prepares your in-round decisions, and builds confidence; why progress only comes when you don't just play aimlessly but set deliberate priorities and follow through on a training plan; and how technique, strategy, mental training, and physical fitness reinforce one another, for more consistency and better scores. Fabian walks you through the structured build-up of season preparation, from the swing to your thinking to on-course strategy. If you do the work properly now, you'll play with more composure, more consistency, and more enjoyment later.

    Idle Red Hands
    The Weekly Podcast no.322 – Crows by MCDM, Undergoblin Heist, Magic Secret Lair and Orcs & Crafts Zine

    Idle Red Hands

    Play Episode Listen Later Feb 8, 2026 34:24


The official announcement of Crows by MCDM Productions, a new dungeon-crawling RPG led by James Introcaso, appeared on Patreon. The game uses D6s and D10s, features a health system tied to inventory slots similar to Knave, and determines experience points based on the value of loot collected. It differs from a previous project, Draw Steel, by including potential negative results on the "power roll" but allows unlimited circumstantial bonuses for good planning. Crows also includes a base-building component where players develop their headquarters town and is set in a world where Archmages, corrupted by their magic, disappeared after warring as Necromancers. Undergoblin Heist by Hit Point Press is a hardcover adventure available with six accompanying miniatures. This wild sandbox goblin adventure is compatible with both Dungeons & Dragons 2024 Edition and Dungeon Crawl Classics. The players take on the role of the "Grotty Jacknives," expert sneak-goblins tasked by Boss Soot to steal three magical artifacts from a cabbage goblin village. The adventure includes pre-generated characters, new species/class rules, a new spell, and a goblin Critical Hits table, and is written by Thom Denick with art and design by Bee Ho. Magic: The Gathering's newest Secret Lair drop, titled "Prints Charming," was released on February 2 as a "Chaos Vault" experiment by Wizards of the Coast. The collection is a quartet of green spells with different illustration styles, but its unusual aspect is the variable pricing tiers. Instead of a fixed price, the drop was offered at multiple points, ranging from a low of $9.99 up to $39.99 (and a foil version up to $49.99), ostensibly to test how much fans value the product. Despite the secondary market value of the cards being around $6, the experiment yielded bizarre sales results, with the most expensive $49.99 foil edition selling out earlier than the cheaper $29.99 foil edition, indicating some buyers were voluntarily paying more. The article speculates this could be due to hopes for a secret bonus card, automated bots, or individuals opting for a higher price to leave cheaper options for others. Orcs & Crafts Zine is a Kickstarter project focused on miniature terrain building rather than a game itself. The zine aims to teach the basics of creating miniature worlds, with the first 60-page issue focusing on cardboard foundations and requiring only basic tools like scissors and a hobby knife. The project emphasizes accessibility, creative control, and using everyday materials. It is supported by a team of "Orc assistants" like Crathar (Chief Crafting Officer for advanced tips), Painscar (focused on safety), and Bob (providing basic tips). The zine includes planning guides, technique breakdowns (e.g., wood planks, hills), step-by-step building instructions (e.g., Simple Cardboard Tree, Ruined Gnomish Distillery), and even unique movie recommendations from Martin for background crafting entertainment.
#crowsrpg #undergoblin #eeldenring #orcsandcrafts Undergoblin Heist – 5e D&D (Hardcover) – Hit Point Press https://hitpointpress.com/products/under-goblin-heist-5e-d-d-hardcover Orcs and Crafts Zine: https://www.backerkit.com/c/projects/game-machinery/orcs-crafts-zine Call of Cthulhu Bundle: https://humblebundleinc.sjv.io/Xmz13G Free Guild Ball Starter Set: https://steamforged.com/products/guild-ball-starter-kit Warmachine on MyMiniFactory: https://mmf.io/upturned Mantic Companion App: https://companion.manticgames.com/ Use our Referral code: MCTXEE Support us by Shopping at Miniature Market (affiliate link): https://miniature-market.sjv.io/K0yj7n Support Us by Shopping on DTRPG (affiliate link): https://www.drivethrurpg.com?affiliate_id=2081746 Matt's DriveThruRPG Publications: https://www.drivethrurpg.com/browse.php?author=Matthew%20Robinson https://substack.com/@matthewrobinson3 Chris on social media: https://hyvemynd.itch.io/ Jeremy's Links: http://www.abusecartoons.com/ http://www.rcharvey.com Support Us on Patreon: https://www.patreon.com/upturnedtable Give us a tip on our livestream: https://streamlabs.com/upturnedtabletop/tip Donate or give us a tip on Paypal: https://www.paypal.com/ncp/payment/2754JZFW2QZU4 Intro song is "Chips" by KokoroNoMe https://kokoronome.bandcamp.com/

    The Focus Group
    Patagonia Drags a Queen and Chips Quit the Cape

    The Focus Group

    Play Episode Listen Later Feb 7, 2026 47:36


    On the agenda, politics, Apple products, and phantom energy. Shop Talk Focus Group reminds you to unplug at night. Caught My Eye announces the closure of the Cape Cod Potato Chip Factory in Hyannis and outdoor retailer Patagonia takes issue with Drag Queen Pattie Gonia. Mark Dawson, of DRZ Entertainment, is the Business Birthday this week. We're all business. Except when we're not. Apple Podcasts: apple.co/1WwDBrC Spotify: spoti.fi/2pC19B1 iHeart Radio: bit.ly/4aza5LW Tunein: bit.ly/1SE3NMb YouTube Music: bit.ly/43T8Y81 Pandora: pdora.co/2pEfctj YouTube: bit.ly/1spAF5a Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Key Wealth Matters
    Jobs Cool, Chips Rule and Positioning While the Dollar Drifts

    Key Wealth Matters

    Play Episode Listen Later Feb 6, 2026 25:59


Markets absorbed a brief U.S. government shutdown, ongoing fourth-quarter earnings, and fresh readings from the Institute for Supply Management: Services stayed in expansion while Manufacturing showed a tentative uptick. While the Bureau of Labor Statistics' payroll report was delayed, other labor signals softened: job openings slipped to 6.5 million, weekly claims rose to 231,000, and the ADP private payrolls tally was only 22,000. Equity leadership shifted as AI pressure hit software stocks while investors favored tangible, cash-flowing businesses and added non-U.S. exposure. Credit stayed orderly (investment-grade spreads widened slightly and high-yield widened a bit more) while the riskiest tier gained a little over 1% year-to-date. Treasury yields eased; the European Central Bank and Bank of England held policy rates steady. Speakers: Brian Pietrangelo, Managing Director of Investment Strategy; George Mateyo, Chief Investment Officer; Rajeev Sharma, Head of Fixed Income; Stephen Hoedt, Head of Equities. 00:01:35 Week setup: shutdown ends, Q4 earnings, Services up, Manufacturing perks up. 00:03:10 Jobs picture softens; big payrolls report pushed to next week; thoughts on the U.S. Dollar. 00:08:36 AI tool sparks global software selloff; chips seen as enablers. 00:15:29 Credit mostly calm; risk appetite cools a bit this week. 00:21:07 Super Bowl picks and quick Ohio note to close. Additional Resources: Read: Key Questions: Who Is Kevin Warsh and What Does His Appointment Mean for the Fed's Next Chapter? Read: Comprehensive Key Numbers | Key Questions | Weekly Investment Brief | Subscribe to our Key Wealth Insights newsletter | Follow us on LinkedIn

    Squawk on the Street
    Alphabet's Blockbuster AI Spending Plans, Chips On The Move, & LIVE: Arm CEO Talks Results 2/5/26

    Squawk on the Street

    Play Episode Listen Later Feb 5, 2026 45:08


Carl Quintanilla, Jim Cramer and David Faber kicked off the hour with a look at new numbers out of Google parent Alphabet - as it projects huge AI spending in 2026 that could top $185B. Plus: hear what part of tech Cramer's calling a "winner take all" market - and the stocks he's calling a buy here... Along with a deep dive on the chips, as memory shortage concerns hit shares of Qualcomm, and Arm CEO Rene Haas joins the team to break down new numbers from his company. Around the edges: the anchors broke down new comments from the President around possible rate cuts ahead, Estée Lauder's tariff warning sending shares slumping, and Bitcoin's fresh move lower. Squawk on the Street Disclaimer Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    City Cast Portland
    The Unexpected History of Our Local Exports, From Les Schwab to Juanita's Chips

    City Cast Portland

    Play Episode Listen Later Feb 5, 2026 37:11


    Everyone knows Nike and Tillamook, but countless other popular products and brands got their start in our region —  and many of them have unexpected stories. Today on City Cast Portland, we're sharing a fresh round of our favorite city and state exports: things that got their start here, but have become household names well beyond our fair city. Joining host Claudia Meza on the show are our very own producers, John Notarianni and Giulia Fiaoni. Become a member of City Cast Portland today! Get all the details and sign up here.  Who would you like to hear on City Cast Portland? Shoot us an email at portland@citycast.fm, or leave us a voicemail at 503-208-5448. Want more Portland news? Then make sure to sign up for our morning newsletter and be sure to follow us on Instagram.  Looking to advertise on City Cast Portland? Check out our options for podcast and newsletter ads at citycast.fm/advertise. Learn more about the sponsors of this February 5th episode: Oregon Ballet Neo Home Loans Pivot Portland

    Leaders In Payments
    Special Series: The Future of Modern Payments with Pat Antonacci, Chief Product Officer at The Clearing House | Episode 464

    Leaders In Payments

    Play Episode Listen Later Feb 5, 2026 25:43 Transcription Available


Money keeps moving while the world sleeps, and the rails behind it are evolving fast. We sit down with Pat Antonacci, Chief Product Officer at The Clearing House, to break down how CHIPS, ACH (EPN), and RTP each power a different promise (liquidity, scale, and always-on finality) and why that mix is reshaping how businesses and consumers move funds. Pat explains why CHIPS dominates high-value cross-border flows and how its netting algorithm delivers 30:1 liquidity savings that matter on volatile days. We trace ACH's steady rise, including same-day and intraday growth, and dig into record holiday peaks that reveal the hidden rhythms of settlement. Then we go deep on RTP: eight years in, 98% of U.S. real-time traffic, rising daily volumes, a $10 million limit, and use cases spanning account-to-account moves, brokerage funding, wallet top-ups, gig payouts, loan disbursements, and tuition deadlines that can't wait until Monday. The conversation tackles big questions: Are rails competing or complementing? Where are checks being displaced? How do Request for Payment and ISO 20022 unlock cleaner data and fewer exceptions? We explore the 2026 landscape (APIs, cloud, AI-driven fraud controls, open banking momentum) and why the smart strategy is matching the rail to the job: ACH for routine batches, RTP for precise timing and finality, and CHIPS for high-value, cross-border certainty. Pat also previews The Clearing House roadmap, from broader RTP ubiquity and fraud tools to extended CHIPS hours that bring wires closer to continuous availability. If you care about how money actually moves, and how that movement shapes cash flow, customer trust, and the broader economy, this conversation is your field guide.

    Business daily
    TSMC announces production of advanced AI chips in Japan

    Business daily

    Play Episode Listen Later Feb 5, 2026 6:39


Taiwanese chipmaker TSMC, Asia's most valuable company, has announced plans to produce cutting-edge 3-nanometre chips in Japan. Japan's Prime Minister Sanae Takaichi lauded the announcement as "the missing piece for the country," as she has looked to bolster domestic chipmaking ahead of lower house elections on February 8. Also in this edition: Cuba is on the edge of a "humanitarian collapse" as it faces an oil siege imposed by the United States.

    Squawk on the Street
    More Software Selling, AMD CEO Live, & Cramer's Earnings Takeaways 2/4/26

    Squawk on the Street

    Play Episode Listen Later Feb 4, 2026 48:02


Another day of early software stock declines: Carl Quintanilla, Jim Cramer and David Faber broke down the biggest moves - and the names Jim thinks are worth buying on the dip. Chips in focus after a wide-ranging interview with Nvidia CEO Jensen Huang on Mad Money overnight - who shot down rumors of strain in the OpenAI partnership - along with AMD earnings... Hear AMD CEO Lisa Su herself break down the numbers and where she sees growth ahead. Plus: the team also discussed how to trade some of the morning's biggest earnings reports - from Chipotle to Eli Lilly to Uber. Squawk on the Street Disclaimer Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Kramer & Jess On Demand Podcast
    TO BAY OR NOT TO BAY: Jalapeno Chips

    Kramer & Jess On Demand Podcast

    Play Episode Listen Later Feb 4, 2026 5:27


    We're doing spicy foods for February!

    TechCrunch Startups – Spoken Edition
    Exclusive: Positron raises $230M Series B to take on Nvidia's AI chips; plus, Apeiron Labs gets $29M to flood the oceans with autonomous underwater robots

    TechCrunch Startups – Spoken Edition

    Play Episode Listen Later Feb 4, 2026 7:17


The investment comes from backers including the Qatar Investment Authority, as demand for chips beyond Nvidia soars and as Qatar aims to build out its AI infrastructure. Also: to build and sell more of its autonomous underwater vehicles (AUVs), Apeiron Labs recently closed a $9.5 million Series A round led by Dyne Ventures, RA Capital Management Planetary Health and S2G Investments, the company exclusively told TechCrunch. Assembly Ventures, Bay Bridge Ventures, and TFX Capital participated. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Pantsuit Politics
    WHAT ARE WE DOING HERE?!

    Pantsuit Politics

    Play Episode Listen Later Feb 3, 2026 75:43


Today we're discussing rampant corruption and speculation in the Trump administration: UAE interests investing in Trump companies days before his second inauguration, apparent payment for pardons, and Amazon's outsized investment in and promotion of the Melania "documentary." We also discuss the ways that federal courts are pushing back on ICE enforcement throughout the US. Tickets for our Minneapolis live show and first-ever Spice Conference go on sale soon! Details here Topics Discussed UAE Corruption Scandal Bitcoin, Chips, and Pardons Federal Courts vs. ICE Outside of Politics: The Grammys Ready to go deeper? Visit our website for complete show notes, exclusive premium content, chats, and more. See omnystudio.com/listener for privacy information.

    The John Batchelor Show
S8 Ep412: Guest: David Shedd. Shedd warns against selling advanced chips to China, describing Beijing's "capture, cage, and kill" economic strategy and criticizing the U.S. administration's transactional approach. With Thaddeus McCotter, co-host.

    The John Batchelor Show

    Play Episode Listen Later Feb 3, 2026 8:12


Guest: David Shedd. Shedd warns against selling advanced chips to China, describing Beijing's "capture, cage, and kill" economic strategy and criticizing the U.S. administration's transactional approach. With Thaddeus McCotter, co-host. 1955

    Thinner Peace in Menopause
    Ep. 513: Why You Can't Stop Eating Chips at Restaurants

    Thinner Peace in Menopause

    Play Episode Listen Later Feb 3, 2026 21:36


    I'm answering a listener's question about feeling compelled to overeat chips and bread at restaurants—and showing you how to take back control. Get the full show notes and information here: https://drdebbutler.com/513  

    TD Ameritrade Network
    Pete Najarian: Memory Chips, Silver & Gold Surges to Continue

    TD Ameritrade Network

    Play Episode Listen Later Feb 3, 2026 8:17


    Earnings so far "hit it out of the park," says @MarketRebellion's Pete Najarian. He points to companies like SanDisk (SNDK) and Western Digital (WDC) showing promise for memory chip stocks and believes they have more room to run. When it comes to the metal trade, he expects the same even after silver and gold's recent downturn. Pete's big factors behind his bull case include silver's tie to AI and EVs, along with options momentum reaccelerating for ETFs like SLV and GLD. ======== Schwab Network ========Empowering every investor and trader, every market day.Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6DSubscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – https://twitter.com/schwabnetworkFollow us on Facebook – https://www.facebook.com/schwabnetworkFollow us on LinkedIn - https://www.linkedin.com/company/schwab-network/About Schwab Network - https://schwabnetwork.com/about

    Talks from the Hoover Institution
    Insights From The 2025 US-China Economic And Security Review Commission Report: Findings And Recommendations

    Talks from the Hoover Institution

    Play Episode Listen Later Feb 3, 2026 88:01 Transcription Available


    The Hoover Institution Program on the US, China, and the World hosted, Insights from the 2025 US-China Economic and Security Review Commission Report: Findings and Recommendations, on Thursday, January 29, 2026.  This event features leading experts from the Hoover Institution and the US-China Economic and Security Review Commission for a discussion analyzing the key bilateral economic and security challenges faced by the US and China and their impacts on the broader international landscape. Congress created the US-China Economic and Security Review Commission to monitor, investigate, and report on the national security implications of the bilateral trade and economic relationship between the United States and the People's Republic of China. Its annual reports to Congress address and make recommendations about pressing issues such as trade practices, technological competition, military strategy, and human rights concerns, with far-reaching implications for policymakers and stakeholders around the world. The Commission's 2025 Annual Report was released in November 2025. To view the report, click the following link: https://www.uscc.gov/annual-reports FEATURING Erin Baggott Carter is a Hoover Fellow at the Hoover Institution at Stanford University. She is also an associate professor in the Department of Political Science and International Relations at the University of Southern California, a faculty affiliate at the Center on Democracy, Development and the Rule of Law (CDDRL) at Stanford University's Freeman Spogli Institute, and a nonresident scholar at the 21st Century China Center at UC San Diego. She has previously held fellowships at the CDDRL and Stanford's Center for International Security and Cooperation. She received a PhD in political science from Harvard University.  Drew Endy is a science fellow and senior fellow (courtesy) at the Hoover Institution. He leads Hoover's Bio-Strategy and Leadership effort, which focuses on keeping increasingly biotic futures secure, flourishing, and democratic. Professor Endy also researches and teaches bioengineering at Stanford University, where he is the Martin Family University Fellow in Undergraduate Education, senior fellow (courtesy) of the Freeman Spogli Institute for International Studies, and faculty codirector of degree programs for the Hasso Plattner Institute of Design.  Mike Kuiken is a Distinguished Visiting Fellow at Stanford University's Hoover Institution and serves as a Commissioner on the U.S.-China Economic and Security Review Commission. He is an advisor to the Special Competitive Studies Project (SCSP) and a member of Anthropic's National Security and Public Sector Advisory Council. He also consults with CEOs, boards, and senior leaders across investment, AI, defense, technology, and multinational firms globally.  The Honorable Randall G. Schriver is Chairman of the Board at The Institute for Indo-Pacific Security. In addition, Mr. Schriver is currently a partner at Pacific Solutions LLC. Most recently, Mr. Schriver served as the Assistant Secretary of Defense for Indo-Pacific Security Affairs from 8 January 2018 to 31 December 2019. Prior to his confirmation as Assistant Secretary, Mr. Schriver was a founding partner of Armitage International LLC, a consulting firm that specializes in international business development and strategies. He was also a founder of the Project 2049 Institute and served as President and CEO. Previously, Mr. Schriver served as Deputy Assistant Secretary of State for East Asian and Pacific Affairs. 
 MODERATOR  Glenn Tiffert is a distinguished research fellow at the Hoover Institution and a historian of modern China. He co-chairs Hoover's program on the  US, China, and the World, and also leads Stanford's participation in the National Science Foundation's SECURE program, a $67 million effort authorized by the CHIPS and Science Act of 2022 to enhance the security and integrity of the US research enterprise. He works extensively on the security and integrity of ecosystems of knowledge, particularly academic, corporate, and government research; science and technology policy; and malign foreign interference.  

    Alcoholics Alive!

S13E8 Tate W. tells his story. In Chip Shrapnel, Shank and Wayne discuss "Chips aren't rewards or awards but they'll keep you out of the psych ward", "Now it's time for the heavy metal", and "The garnet chip for 30 days and what feels like 3000 nights." If you have a question, comment or suggestion, you can email Shank and Wayne at freedom@alcoholicsalive.com

    EUVC
    E689 | Simon Thomas, Founder of Paragraf: Graphene Chips, AI Energy, and the Hard-Tech Road from Lab to Fab

    EUVC

    Play Episode Listen Later Feb 3, 2026 53:35


Welcome back! In this episode, Andreas Munk Holm sits down with Simon Thomas, CEO of Paragraf, one of Europe's rare hard-tech success stories, taking graphene from scientific breakthrough to industrial-scale electronics. Graphene has been called the "wonder material" for two decades. The promise has always been clear: faster, better, and dramatically more energy-efficient electronics. The missing piece has been execution at scale. Simon and the Paragraf team are building that missing bridge, with the world's first graphene electronics foundry in the UK, a growing portfolio of real commercial products, and a deep conviction that the next era of computing will require new materials, not just bigger data centers. This is a conversation about what it truly takes to build venture-backed hardware in Europe: how you fund capex-heavy deep tech, how you keep investors aligned when timelines are long, how you keep teams motivated through delays and national security reviews, and why AI may accelerate materials discovery but won't replace the brutal, necessary work of turning atoms into real manufacturing. What's covered: 01:27 What Paragraf is building and why graphene matters now 03:50 Graphene wafers and the world's first graphene electronics foundry 04:23 What graphene changes for power consumption and device life 05:01 Why graphene isn't already inside data centers 06:13 The future of "2D electronics" beyond graphene 08:02 Foundry versus product company: why Paragraf does both 09:40 Graphene's 20-year journey from papers to real-world scale 13:15 When venture investors first showed up and what they needed to see 16:58 Sovereignty, British Patient Capital, and why "national backing" matters 24:08 The product-to-foundry loop and how you hook customers early 27:36 Capex, equity limits, and the painful mechanics of deep-tech financing 30:22 Surviving hard moments: people, pivots, and the NSI Act review 38:10 How to structure boards over time, from tactical to strategic 42:23 Keeping teams committed through uncertainty 46:10 Where Paragraf is today: headcount, geographies, and commercialization 49:16 AI in materials discovery and why manufacturing is still the bottleneck

    DailyQuarks – Dein täglicher Wissenspodcast
    Part-Time Work - Good for Us but Bad for Society?

    DailyQuarks – Dein täglicher Wissenspodcast

    Play Episode Listen Later Feb 3, 2026 22:57


    Also: Chips and the like - can they really be addictive? (12:19) // More exciting topics, scientifically contextualized, can be found here: www.quarks.de // Do you have feedback, suggestions, or questions you would like us to assess scientifically? Then reach us via WhatsApp or Signal at 0162 344 86 48 or by email: quarksdaily@wdr.de. By Sebastian Sonntag.

    A Way with Words — language, linguistics, and callers from all over
    All That and a Bag of Chips (Rebroadcast) - 2 February 2026

    A Way with Words — language, linguistics, and callers from all over

    Play Episode Listen Later Feb 2, 2026 53:45


    We tend to take the index of a book for granted, but centuries ago, these helpful lists were viewed with suspicion. Some even worried that indexes would harm reading comprehension! A witty new book tells the story. Plus, the Latin term bona fides [BOHN-ah FYDZ] was adopted into English to mean "good faith" or "authentic credentials." But there's more than one way to pronounce it. And: say you're off at summer camp, and there's a container in the dining hall labeled ort bucket. What will you find if you look inside? Also: crisp, with one foot in the milk bucket, a brain teaser about nicknames, French gestures, Dutchman, million-dollar family, dungarees, scared water, and nuking food. Hear hundreds of free episodes and learn more on the A Way with Words website: https://waywordradio.org. Be a part of the show: call or text 1 (877) 929-9673 toll-free in the United States and Canada; elsewhere in the world, call or text +1 619 800 4443. Send voice notes or messages via WhatsApp 16198004443. Email words@waywordradio.org. Copyright Wayword, Inc., a 501(c)(3) corporation. Learn more about your ad choices. Visit megaphone.fm/adchoices

    CEO Spotlight
    And just like that, TI is in the Chips

    CEO Spotlight

    Play Episode Listen Later Feb 2, 2026 7:31


    KRLD's David Johnson talks with Rafael Lizardi, CFO of Texas Instruments (Nasdaq: TXN).

    Big Squid with Justin Hamilton
    Wil Anderson - 8am Catch Up - Crunchy Chips and Stadium Ice Cream

    Big Squid with Justin Hamilton

    Play Episode Listen Later Feb 2, 2026 80:50


    Wil Anderson checks in from the Perth Fringe to discuss hotel hacks, crunchy cinema experiences, and what it means when it's time to get an ice-cream at an Ed Sheeran concert. Hosted on Acast. See acast.com/privacy for more information.

    DailyQuarks – Dein täglicher Wissenspodcast
    Irrational Behavior - Is That Self-Sabotage?

    DailyQuarks – Dein täglicher Wissenspodcast

    Play Episode Listen Later Feb 2, 2026 18:51


    Also: Too much salt - where could we cut back? // More exciting topics, scientifically contextualized, can be found here: www.quarks.de // Do you have feedback, suggestions, or questions you would like us to assess scientifically? Then reach us via WhatsApp or Signal at 0162 344 86 48 or by email: quarksdaily@wdr.de. By Sebastian Sonntag.

    @HPCpodcast with Shahin Khan and Doug Black

    - Microsoft Maia 200 for Inference
    - Nvidia H200 (not H20) for China
    - Corning-Meta Data Center
    - Nvidia-CoreWeave AI Deal
    - Neoclouds' role in Chip vs Cloud competition
    The post HPC News Bytes – 20260202 appeared first on OrionX.net.

    YA HAM RIGHT PODCAST
    Is getting cheated on the worst pain ever?? And having chips on your shoulders!!!

    YA HAM RIGHT PODCAST

    Play Episode Listen Later Feb 1, 2026 23:25


    In this week's episode we have a discussion on getting cheated on: is it the worst pain ever??? Also, is having chips on your shoulders a good or bad thing… along with the craziest things we saw on the internet this week!!

    Squawk on the Street
    S&P 500 Hits 7,000, AI Trade-Chips Rally, Fed Decision Day 1/28/26

    Squawk on the Street

    Play Episode Listen Later Jan 28, 2026 42:32


    Carl Quintanilla, Jim Cramer and David Faber discussed what to make of the S&P 500 hitting the 7,000 level for the first time. The broader market index was propelled past that milestone by the AI trade, as results from ASML helped to spark a jump in semiconductor stocks and tech overall.  The rally sets the stage for after-the-bell earnings from three of mega-tech's "Magnificent 7" — Microsoft, Meta and Tesla. The anchors also explored what to expect from Wednesday's Fed decision on rates and the path ahead for monetary policy. Also in focus: Amazon to cut 16,000 additional jobs, Starbucks among the earnings winners, sources tell David that SoftBank is close to investing an additional $30 billion in OpenAI, President Donald Trump's weaker dollar message, First Lady Melania Trump rings the opening bell at the NYSE.   Squawk on the Street Disclaimer Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Packet Pushers - Full Podcast Feed
    NB559: Cisco Builds Nexus Switch for Intel AI Chips; TeraWave Promises 6Tbps from Space

    Packet Pushers - Full Podcast Feed

    Play Episode Listen Later Jan 26, 2026 44:59


    Take a Network Break! We start with a Red Alert in Oracle's WebLogic Server Proxy Plugin for Apache or IIS, which has a severity score of 10. In the news, Fortinet warns that attackers have found a new exploit path against previously patched vulnerabilities, Microsoft 365 services suffered an outage, and ServiceNow inks a deal with...

    Shawn Ryan Show
    #271 Ro Khanna - The Internal Failures Undermining America's Institutions

    Shawn Ryan Show

    Play Episode Listen Later Jan 15, 2026 112:37


    Ro Khanna has been the U.S. Representative for California's 17th Congressional District (Silicon Valley) since 2017 and is serving his fifth term as a Democrat. Born to Indian immigrant parents, Khanna graduated Phi Beta Kappa with a B.A. in economics from the University of Chicago and earned a J.D. from Yale Law School. He taught economics at Stanford, worked in the Obama administration on commerce and manufacturing, and authored key provisions of the CHIPS and Science Act to boost U.S. tech manufacturing. A leader on climate, labor rights (supporting the PRO Act), and digital privacy, Khanna refuses PAC and lobbyist contributions and has championed bipartisan efforts like the Epstein Files Transparency Act (2025) to release sealed documents. In late 2025, he faced Silicon Valley backlash for supporting a proposed wealth tax on billionaires to fund healthcare amid Medicaid cuts. Khanna advocates for progressive economic patriotism, reducing inequality, and ethical tech governance while working across the aisle on national security and innovation. Married to Ritu Ahuja Khanna, with two children, he resides in Fremont.

    Shawn Ryan Show Sponsors:
    Ready to give your liver the support it deserves? Head to https://dosedaily.co/SRS or enter SRS to get 35% off your first subscription.
    Receive 30% off your first subscription order at https://armra.com/SRS or enter code SRS at checkout.
    Head to https://factormeals.com/srs50off and use code srs50off to get 50% off your first Factor box plus free breakfast for 1 year (new customers only, with qualifying subscription purchase).
    Take care of your skin like you take care of your gear: visit https://CalderaLab.com/SRS and use code SRS for 20% off your first order.
    If you're serious about selling to the Department of War, go to https://SBIRAdvisors.com and mention Shawn Ryan for your first month free.

    Ro Khanna Links:
    Website - https://khanna.house.gov
    Campaign Site - https://www.rokhanna.com
    X - https://x.com/RoKhanna
    FB - https://www.facebook.com/RepRoKhanna
    IG - https://www.instagram.com/rokhannausa
    Roblox Petition - https://act.rokhanna.com/a/save-roblox-petition

    Learn more about your ad choices. Visit podcastchoices.com/adchoices