Smart Agency Masterclass with Jason Swenk: Podcast for Digital Marketing Agencies
Would you like access to our advanced agency training for FREE? https://www.agencymastery360.com/training

How do you turn a $99 course, launched before it was even fully built, into a 7-figure coaching business? Today's guest did just that, and he's here to share why scrappier beats slick every time. Today we kick off a two-parter with Brent Weaver, the founder of UGURUS, who went from building websites in high school to launching one of the most successful coaching programs for digital agency owners. Brent talks about his start with UGURUS, the valuable learning that can come from starting before everything's in place, and why what came after selling his business wasn't exactly what he had expected. If you've ever second-guessed your "build it as you go" approach, or wondered whether selling $99 courses online could ever turn into something real, this episode will feel like a shot of validation.

In this episode, we'll discuss:
Launching and selling without a net.
The real reason Brent Weaver sold UGURUS.
The unexpected, gut-punch part of selling.

Subscribe: Apple | Spotify | iHeart Radio

Sponsors and Resources
Wix: Today's episode of the Smart Agency Masterclass is sponsored by Wix Studio, the all-in-one platform designed to help agencies scale without the headaches. With intuitive tools, robust native business solutions, and low maintenance, Wix Studio lets your team focus on what matters most: delivering exceptional value to your clients. Ready to take your agency to the next level? Visit wix.com/studio and discover how Wix Studio can transform your workflow, boost profits, and strengthen client relationships.

Building Something Before It's Built
In 2012, Brent's agency was building on a tool called Business Catalyst, which led to a side project called BC Gurus, a blog for Business Catalyst users that eventually turned into a full-fledged business.
That little blog became a membership site where his team posted business content on how to grow a Business Catalyst agency and, after he sold his agency, became the seed for what eventually grew into UGURUS, a platform offering training and coaching to help agency owners close more deals and scale their businesses. Just as they were preparing to move the site forward without the Business Catalyst element (the tool had been discontinued), Brent found that the name UGURUS had just gone up for auction. It all seemed serendipitous: they easily won the auction, and a new stage of the business began.

Lessons in Launching (and Selling) Without a Net
Throughout their journey, Brent and his team learned something every agency owner needs to hear: you don't need everything figured out before you start. In fact, if you try to, you'll likely never launch at all. The early success of their $200 self-paced course helped them build an audience, but it wasn't until they started offering deeper, high-ticket coaching that things clicked into place. Selling a few $2,000 seats was way more scalable than chasing thousands of low-ticket customers. They did all of this without the luxury of a huge marketing budget or slick automation: just hustle, relationships, partnerships, and a whole lot of belief in what they were doing. This is something Brent and Jason have both experienced. They agree it's better to go out, execute with what you have, and get feedback than to wait for the perfect moment.

Brent Weaver on Building, Selling, and What Came Next
Brent and his team didn't start with a fully polished product. In fact, when they first launched their flagship 10K Bootcamp, they spent all their time selling it before creating it. In their view, if they couldn't sell it, they wouldn't build it. But they sold it: about 30 seats at $2,000 a pop. Of course, it helped that they weren't starting from scratch.
They had a list of about 10,000 emails from their time running BC Gurus, which helped immensely. And then they had one week to create the first session. What followed was a whirlwind of late nights and Adobe Connect calls (for those who remember what that was) as Brent stayed one step ahead of each week's live session. It was clunky. It was imperfect. But it worked. Why? Because Brent was committed. He responded immediately to the slightest client dissatisfaction. He personally handled delivery. And he overdelivered wherever possible. That scrappy MVP became the foundation for a business that helped thousands of agencies get out of the feast-and-famine cycle. This kind of growth doesn't happen when you wait for the stars to align. It happens when you ship early, listen hard, and iterate fast.

The $22,850 Lead Magnet That Took 6 Minutes to Create
Let's talk about lead magnets that actually convert. The first product Brent ever sold was gloriously titled "The $22,850 Website Proposal." That wasn't a gimmick. It was a real client proposal that closed a big deal, with cross-sells, recurring revenue, and multi-location projects all baked in. Instead of building something fancy, he stripped out the client details, dropped it into a Google Doc, and gave it away. Six minutes of work. Hundreds of thousands of downloads. The lesson? Your most valuable assets are often sitting in a dusty folder, not in your imagination. Proof beats polish every time.

The Real Reason Brent Sold UGURUS
So why sell a successful business? For Brent, it wasn't burnout; it was the pull toward a bigger vision. After buying out his co-founder and riding the COVID rollercoaster, things just weren't lighting him up anymore. Then came Cloudways, and more importantly, a series of conversations with their CMO, Santi.
In a way, he was no longer getting what he wanted from the business, and the more he spoke with Santi and saw what they were doing with their platform, the more he dreamed about turning it into an agency growth community. What started as co-branded webinars and strategy calls evolved into shared vision sessions. Eventually, Cloudways pitched an acquisition. The appeal? A chance to bring agency coaching to a massive platform with 13,000+ agency users. Brent saw an opportunity to merge purpose with scale and went all in.

When the Buyer Gets Bought
Here's the plot twist: just ten months after the acquisition, Cloudways was acquired by DigitalOcean, and suddenly UGURUS was a small fish in a billion-dollar pond. DigitalOcean was focused on AI, GPUs, and hardcore infrastructure, not coaching communities, so eventually Brent's team and vision were sidelined. He stayed on. He fought for his team. But as he says, when you sell, it's no longer yours. And if the buyer shifts priorities, you've got to live with it. That's the tradeoff.

Don't Sell Unless You Know What's Next
The hard truth: don't sell unless you know what you're waking up to the next day. Brent thought he had his next chapter lined up. He had a six-month transition plan. A roadmap. But then came the cultural disconnect. Engineering talk at happy hours. Roadmaps that had nothing to do with agency growth. The adventure he signed up for didn't look like what it became. That's the gut-punch part of selling. You can have a clean exit and still feel like you lost something. That's why clarity before the exit is non-negotiable.

Next Time on Part Two
What really happens after the exit? Brent pulls back the curtain on post-sale culture shock, why some big opportunities fizzled, and how his next move with E2M caught even him by surprise. You won't want to miss this.

Want to Build an Exclusive, Scalable Agency That Clients Line Up For?
Our Agency Blueprint helps you identify growth bottlenecks, build community-driven strategies, and position your agency as a category of one.
A handheld Xbox that's really an ROG Ally with a new Ryzen processor?? An LCD that actually NEEDS bright sunlight, like a Game Boy Color?? (Oh, and Josh's legendary food segment.) There's some sad EVGA news mixed in there, along with a cool new GOG feature and too many security stories.

Timestamps:
00:00 Intro
00:39 Patreon
01:20 Food with Josh
03:30 ASUS ROG Xbox Ally handhelds have new AMD Ryzen Z2 processors
06:51 Nintendo sold a record number of Switch 2 consoles
08:37 NVIDIA N1X competitive with high-end mobile CPUs?
12:38 Samsung now selling 3GB GDDR7 modules
16:27 Apple uses car model years now, and Tahoe is their last OS supporting Intel
22:01 EVGA motherboards have issues with RTX 50 GPUs?
27:48 Josh talks about a new PNY flash drive
30:01 (in)Security Corner
54:07 Gaming Quick Hits
1:00:46 Eazeye Monitor 2.0 - an RLCD monitor review
1:11:53 Picks of the Week
1:33:21 Outro

★ Support this podcast on Patreon ★
Hey folks, this is Alex, finally back home! This week was full of crazy AI news, both model-related and big shifts in the AI landscape and big companies, with Zuck going all in on Scale AI and execu-hiring Alex Wang for a crazy $14B. OpenAI, meanwhile, maybe received a new shipment of GPUs? Otherwise, it's hard to explain how they dropped the o3 price by 80% while also shipping o3-pro (in chat and API). Apple was also featured in today's episode, but more for the lack of AI news, completely delaying the "very personalized private Siri powered by Apple Intelligence" during WWDC25 this week. We had two guests on the show this week, Stefania Druga and Eric Provencher (who builds RepoPrompt). Stefania helped me cover the AI Engineer conference we all went to last week and shared some cool Science CoPilot stuff she's working on, while Eric, the go-to guy for o3-pro, helped us understand what this model is great for! As always, TL;DR and show notes at the bottom; video for those who prefer watching is attached below. Let's dive in!

Big Companies LLMs & APIs
Let's start with the big companies, because the landscape has shifted: new top reasoner models dropped, and some huge companies didn't deliver this week!

Zuck goes all in on SuperIntelligence - Meta's $14B stake in Scale AI and Alex Wang
This may be the most consequential piece of AI news today. Fresh from the disappointing results of Llama 4 and reports of top researchers leaving the Llama team, many had decided to exclude Meta from the AI race. We have a saying at ThursdAI: don't bet against Zuck! Zuck decided to spend a lot of money (nearly 20% of their reported $65B investment in AI infrastructure) to get a 49% stake in Scale AI and bring Alex Wang, its (now former) CEO, to lead the new Superintelligence team at Meta. For folks who are not familiar with Scale, it's a massive company providing human-annotated data services to all the big AI labs: Google, OpenAI, Microsoft, Anthropic... all of them, really.
Alex Wang is the youngest self-made billionaire because of it, and now Zuck not only has access to all that expertise but also to a very impressive AI persona who could help revive the excitement about Meta's AI efforts, help recruit the best researchers, and lead the way inside Meta. Wang is also an outspoken China hawk who spends as much time in congressional hearings as in Slack, so the geopolitics here are... spicy. Meta just stapled itself to the biggest annotation funnel on Earth, hired away Google's Jack Rae (who was on the pod just last week, shipping for Google!) for brainy model alignment, and started waving seven-to-nine-figure comp packages at every researcher with "Transformer" in their citation list. Whatever disappointment you felt over Llama 4's muted debut, Zuck clearly felt it too, and responded like a founder who still controls every voting share.

OpenAI's Game-Changer: o3 Price Slash & o3-pro Launches to Top the Intelligence Leaderboards!
Meanwhile, OpenAI dropped not one but two mind-blowing updates. First, they've slashed the price of o3, their premium reasoning model, by a staggering 80%. We're talking from $40/$10 per million tokens down to just $8/$2. That's right, folks: it's now in the same league as Claude Sonnet cost-wise, making top-tier intelligence dirt cheap. I remember when a price drop of 80% after a year got us excited; now it's 80% in just four months with zero quality loss. They've confirmed it's the full o3 model, no distillation or quantization here. How are they pulling this off? I'm guessing someone got a shipment of shiny new H200s from Jensen!

And just when you thought it couldn't get better, OpenAI rolled out o3-pro, their highest-intelligence offering yet. Available for Pro and Team accounts, and via API (87% cheaper than o1-pro, by the way), this model, or consortium of models, is a beast. It's topping the charts on Artificial Analysis, barely edging out Gemini 2.5 as the new king.
Benchmarks are insane: 93% on AIME 2024 (state-of-the-art territory), 84% on GPQA Diamond, and nearing a 3000 Elo score on competition coding. Human preference tests show 64-66% of folks prefer o3-pro for clarity and comprehensiveness across tasks like scientific analysis and personal writing.

I've been playing with it myself, and the way o3-pro handles long context and tough problems is unreal. As my friend Eric Provencher (creator of RepoPrompt) shared on the show, it's surgical: perfect for big refactors and bug diagnosis in coding. It's got all the tools o3 has (web search, image analysis, memory personalization), and you can run it in background mode via API for async tasks. Sure, it's slower due to deep reasoning (no streaming thought tokens), but the consistency and depth? Worth it. Oh, and funny story: I was prepping a talk for Hamel Husain's evals course, with a slide saying "don't use large reasoning models if budget's tight." The day before, this price drop hits, and I'm scrambling to update everything. That's AI pace for ya!

Apple WWDC: Where's the Smarter Siri?
Oh Apple. Sweet, sweet Apple. Remember all those Bella Ramsey ads promising a personalized Siri that knows everything about you? Well, Craig Federighi opened WWDC by basically saying "Yeah, about that smart Siri... she's not coming. Don't wait up." Instead, we got:
* AI that can combine emojis (revolutionary!
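For the curious, the 80% figure checks out against the quoted per-million-token prices (a quick sketch; the example call size is a made-up illustration, not from the episode):

```python
# Quick check of the quoted o3 price cut: $40 -> $8 per 1M input tokens,
# $10 -> $2 per 1M output tokens. Both work out to the advertised 80%.
old_in, new_in = 40.0, 8.0     # $/1M input tokens
old_out, new_out = 10.0, 2.0   # $/1M output tokens

input_cut = (old_in - new_in) / old_in
output_cut = (old_out - new_out) / old_out
print(f"input cut: {input_cut:.0%}, output cut: {output_cut:.0%}")

# Cost of a hypothetical call (5k input + 1k output tokens), before vs after:
before = 5_000 / 1e6 * old_in + 1_000 / 1e6 * old_out
after = 5_000 / 1e6 * new_in + 1_000 / 1e6 * new_out
print(f"example call: ${before:.3f} -> ${after:.3f}")
```

Same percentage on both sides of the token meter, which is why the "same league as Claude Sonnet" comparison holds for mixed workloads too.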
In this episode of the Data Center Frontier Show, we sit down with Kevin Cochrane, Chief Marketing Officer of Vultr, to explore how the company is positioning itself at the forefront of AI-native cloud infrastructure, and why they're all-in on AMD's GPUs, open-source software, and a globally distributed strategy for the future of inference. Cochrane begins by outlining the evolution of the GPU market, moving from a scarcity-driven, centralized training era to a new chapter focused on global inference workloads. With enterprises now seeking to embed AI across every application and workflow, Vultr is preparing for what Cochrane calls a “10-year rebuild cycle” of enterprise infrastructure—one that will layer GPUs alongside CPUs across every corner of the cloud. Vultr's recent partnership with AMD plays a critical role in that strategy. The company is deploying both the MI300X and MI325X GPUs across its 32 data center regions, offering customers optimized options for inference workloads. Cochrane explains the advantages of AMD's chips, such as higher VRAM and power efficiency, which allow large models to run with fewer GPUs—boosting both performance and cost-effectiveness. These deployments are backed by Vultr's close integration with Supermicro, which delivers the rack-scale servers needed to bring new GPU capacity online quickly and reliably. Another key focus of the episode is ROCm (Radeon Open Compute), AMD's open-source software ecosystem for AI and HPC workloads. Cochrane emphasizes that Vultr is not just deploying AMD hardware; it's fully aligned with the open-source movement underpinning it. He highlights Vultr's ongoing global ROCm hackathons and points to zero-day ROCm support on platforms like Hugging Face as proof of how open standards can catalyze rapid innovation and developer adoption. “Open source and open standards always win in the long run,” Cochrane says. 
“The future of AI infrastructure depends on a global, community-driven ecosystem, just like the early days of cloud.” The conversation wraps with a look at Vultr's growth strategy following its $3.5 billion valuation and recent funding round. Cochrane envisions a world where inference workloads become ubiquitous and deeply embedded into everyday life—from transportation to customer service to enterprise operations. That, he says, will require a global fabric of low-latency, GPU-powered infrastructure. “The world is going to become one giant inference engine,” Cochrane concludes. “And we're building the foundation for that today.” Tune in to hear how Vultr's bold moves in open-source AI infrastructure and its partnership with AMD may shape the next decade of cloud computing, one GPU cluster at a time.
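To make the VRAM argument from the episode concrete, here's a rough back-of-the-envelope sketch. The model size and per-GPU memory figures below are illustrative assumptions, not numbers Cochrane quoted:

```python
import math

# How many GPUs are needed just to hold a model's weights in FP16?
# (2 bytes per parameter; ignores KV cache, activations, and framework
# overhead, which add real-world headroom on top of this lower bound.)
def gpus_needed(params_billions: float, vram_gb_per_gpu: float) -> int:
    weight_gb = params_billions * 2  # 1B params * 2 bytes ~= 2 GB
    return math.ceil(weight_gb / vram_gb_per_gpu)

# A hypothetical 70B-parameter model on a high-VRAM vs a lower-VRAM card:
print(gpus_needed(70, 192))  # higher-VRAM card: 1 GPU
print(gpus_needed(70, 80))   # lower-VRAM card: 2 GPUs
```

Fewer GPUs per model means less inter-GPU communication per inference request, which is where the performance and cost-effectiveness claims come from.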
Nvidia: many people associate the company mainly with graphics processors for video games. But today, the US company is one of the most important players in the field of artificial intelligence. Nvidia GPUs deliver the enormous computing power used, among other things, to train AI models. Finanz Informatik has also worked with Nvidia as part of S-KIPilot, an agentic AI for Sparkasse employees. The AI application runs in Finanz Informatik's data centers, which are equipped with Nvidia chips, in other words, 100% on-prem. Why does that matter? What exactly does agentic AI even mean, and what does the future hold in this area? And what else is Nvidia up to? Host Jonas Ross puts these and other questions to Dr. Jochen Papenbrock and Markus Hacker of Nvidia in this episode of "Alles Digital?!", the Finanz Informatik podcast on innovations in the financial world.
Talk Python To Me - Python conversations for passionate developers
If you're looking to leverage the insane power of modern GPUs for data science and ML, you might think you'll need to use some low-level programming language such as C++. But the folks over at NVIDIA have been hard at work building Python SDKs which provide a nearly native level of performance when doing Pythonic GPU programming. Bryce Adelstein Lelbach is here to tell us about programming your GPU in pure Python.

Episode sponsors: Posit, Agntcy, Talk Python Courses

Links from the show:
Bryce Adelstein Lelbach on Twitter: @blelbach
Episode Deep Dive write up: talkpython.fm/blog
NVIDIA CUDA Python API: github.com
Numba (JIT Compiler for Python): numba.pydata.org
Applied Data Science Podcast: adspthepodcast.com
NVIDIA Accelerated Computing Hub: github.com
NVIDIA CUDA Python Math API Documentation: docs.nvidia.com
CUDA Cooperative Groups (CCCL): nvidia.github.io
Numba CUDA User Guide: nvidia.github.io
CUDA Python Core API: nvidia.github.io
NVIDIA's First Desktop AI PC ($3,000): arstechnica.com
Google Colab: colab.research.google.com
Compiler Explorer ("Godbolt"): godbolt.org
CuPy: github.com
RAPIDS User Guide: docs.rapids.ai
Watch this episode on YouTube: youtube.com
Episode #509 deep-dive: talkpython.fm/509
Episode transcripts: talkpython.fm

Stay in touch with us:
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy
Craig Dunham is the CEO of Voltron Data, a company specializing in GPU-accelerated data infrastructure for large-scale analytics, AI, and machine learning workloads. Before joining Voltron Data, he served as CEO of Lumar, a SaaS technical SEO platform, and held executive roles at Guild Education and Seismic, where he led the integration of Seismic's acquisition of The Savo Group and drove go-to-market strategies in the financial services sector. Craig began his career in investment banking with Citi and Lehman Brothers before transitioning into technology leadership roles. He holds an MBA from Northwestern University and a BS from Hampton University.

In this episode…
In a world where efficiency and speed are paramount, how can companies quickly process massive amounts of data without breaking the bank on infrastructure and energy costs? With the rise of AI and increasing data volumes from everyday activities, organizations face a daunting challenge: achieving fast and cost-effective data processing. Is there a solution that can transform how businesses handle data and unlock new possibilities?

Craig Dunham, a B2B SaaS leader with expertise in go-to-market strategy and enterprise data systems, tackles these challenges head-on by leveraging GPU-accelerated computing. Unlike traditional CPU-based systems, Voltron Data's technology uses GPUs to greatly enhance data processing speed and efficiency. Craig shares how their solution helps enterprises reduce processing times from hours to minutes, enabling organizations to run complex analytics faster and more cost-effectively. He emphasizes that Voltron Data's approach doesn't require a complete overhaul of existing systems, making it a more accessible option for businesses seeking to enhance their computing capabilities.

In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz interviews Craig Dunham, CEO at Voltron Data, about building high-performance data systems.
Craig delves into the challenges and solutions in today's data-driven business landscape, how Voltron Data's innovative solutions are revolutionizing data analytics, and the advantages of using GPU over CPU for data processing. He also shares valuable lessons on leading high-performing teams and adapting to market demands.
In this new edition of the Tertul-IA, Lu, Javi, and Frankie analyze some of the most relevant topics on today's tech scene. From the real costs of scaling AI solutions to the inflated narratives around certain startups, the conversation focuses on what is often hidden behind the innovation.
Fastfetch and LibreOffice mint new releases, KDE teases Karton for VM management, and KDE is looking to capture Windows 10 exiles. Bcachefs broke filesystems and then fixed them, AMD releases a couple of new GPUs, and there's weird drama in X11 and kernel land. For tips, we have PipeWire node management, notes from the Kubuntu beta, and a quick primer on the difference between git fetch and git pull. You can find the show notes at https://bit.ly/4jEM36i Have fun! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Episode 496 of Reversim (רברס עם פלטפורמה) - Bumpers number 86: Ran, Dotan, and Alon in the virtual studio (via Riverside.fm - thanks!) with a series of short items that caught our attention recently (this time a bit more than usual): interesting blogs, things from GitHub or Twitter, and all sorts of things we've seen, before everything fills up with AI.
Welcome to Top of the Morning by Mint. I'm Nelson John, and here are today's top stories.

1. RBI in the Spotlight: Will Growth Trump Caution?
All eyes are on RBI Governor Sanjay Malhotra today as the central bank gears up to announce its second bi-monthly policy decision of FY26. A 25 basis point cut is widely expected, bringing the repo rate down to 5.75% and marking the third straight rate reduction this year. With inflation easing within the 2-4% comfort zone, the focus shifts to fuelling India's growth engine. The bigger test lies in Malhotra's tone on the inflation outlook, GDP expectations, and global headwinds. This policy pivot could shape the economic narrative for the rest of 2025.

2. Tesla Tanks as Trump-Musk War Escalates
Tesla shares nosedived over 14%, wiping out $150 billion in value, after Elon Musk and Donald Trump clashed publicly over federal subsidies and EV policies. The tipping point? Trump's threat to cut all government contracts with Musk's firms, including NASA-linked SpaceX, following Musk's claim that Trump "owes him" for the 2024 win. Investors fear political blowback could derail Tesla's robotaxi rollout. Analyst Dan Ives warned, "If Trump hits pause on autonomy, it could delay Tesla's next big bet."

3. Musk Calls for Trump's Impeachment
In a dramatic twist, Musk endorsed a social media post calling for Trump's impeachment and backed JD Vance as a replacement. Then came another bombshell: Musk claimed Trump's name is in the unreleased Epstein files, suggesting political motives for withholding them. Responding to Trump's threat of terminating contracts, Musk declared SpaceX would begin decommissioning its Dragon spacecraft, NASA's key ride to the ISS. As political theatrics spiral, the tech-White House feud is now playing out on the world's biggest stage.

4. OpenAI Academy Launches in India
OpenAI has chosen India for the global rollout of its first-ever OpenAI Academy in collaboration with the IndiaAI Mission. Training content will be delivered in English, Hindi, and four regional languages, alongside the IndiaAI FutureSkills portal. With India now hosting data residency for enterprise tools and 34,000 affordable GPUs available, the stage is set for inclusive AI innovation. Eleven nonprofits in India will also receive $150,000 in API credits under OpenAI's AI for Impact Accelerator, empowering AI for social good.

5. Cognizant's Billion-Dollar Win Signals Momentum
Amid a tough deal-making climate, Cognizant has quietly clinched a $1 billion contract, likely with UnitedHealth Group, spanning renewal, expansion, and new AI-led work. This marks its second mega-deal in two months, with CEO Ravi Kumar's Infosys-era ties to UHG's Sandeep Dadlani possibly playing a role. With healthcare forming nearly a third of its revenue, the deal offers fresh momentum and showcases Cognizant's edge in closing deals without formal RFPs. While rivals struggle, Cognizant may be scripting a quiet comeback.

Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of The Culture Palette Show, Kelsey, Justin, and Jaleel kick things off with a recap of their week and weekend before diving into stories from some of their past trips. The conversation takes a sharp turn into the world of tech and privacy as the cast talks about how governments might be using GPUs and tax data to track your movements. Then the guys unpack the growing tension between automakers' infotainment systems and the integration of Android Auto and Apple CarPlay. Is Intel finally stepping up in the GPU game? And could YouTube's rumored likeness finder tool be a game-changer, or a privacy nightmare? Closing out the show with a wild legal story, music, and more!

Socials:
Instagram: @culturepaletteshow
Twitter: @culture_palette
YouTube: @CulturePaletteShow
Episode 61: What will the next generation of AI-powered PCs mean for your everyday computing—and how will features like on-device AI, privacy controls, and new processors transform our digital lives? Matt Wolfe (https://x.com/mreflow) is joined by Pavan Davuluri (https://x.com/pavandavuluri), Corporate Vice President of Windows and Devices at Microsoft, who's leading the charge on bringing AI to mainstream computers. In this episode of The Next Wave, Matt dives deep with Pavan into the world of AI PCs, exploring how specialized hardware like NPUs (Neural Processing Units) make AI more accessible and affordable. They break down the difference between CPUs, GPUs, and NPUs, and discuss game-changing Windows features like Recall—digging into the privacy safeguards and how AI can now run locally on your device. Plus, you'll hear Satya Nadella (https://x.com/satyanadella), Microsoft's CEO, share his vision for how agentic AI could revolutionize healthcare and what the future holds for AI-powered Windows experiences. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) NPUs: The Third Processor Revolution (05:41) NPU Efficiency in AI Devices (09:31) Windows Empowering Users Faster (13:00) Evolving Windows Ecosystem Opportunities (13:49) AI Enhancing M365 Copilot Research (15:43) Satya Nadella On AI and Healthcare — Mentions: Want the ultimate guide to Advanced Prompt Engineering? 
Get it here: https://clickhubspot.com/wbv Pavan Davuluri: https://www.linkedin.com/in/pavand/ Satya Nadella: https://www.linkedin.com/in/satyanadella/ Microsoft: https://www.microsoft.com/ Microsoft 365: https://www.microsoft365.com/ Microsoft Recall https://learn.microsoft.com/en-us/windows/ai/recall/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Building a cloud for AI is more than just stacking GPUs together: it requires deep software expertise and hardware innovation working in sync. At NVIDIA GTC, I had the chance to sit down with Gleb K. and Oleg Fedorov from Nebius to break down what it really takes to build a high-performance AI cloud from scratch.

Gleb, leading the software side, shared insights into the complexities of AI cloud infrastructure, while Oleg, focusing on hardware, talked about the compute power needed to support today's AI demands.

We covered:
✅ What it means to be a "serial cloud builder" and how Gleb transitioned from managing data centers to architecting AI-first cloud solutions
✅ Why an AI cloud is much more than just a collection of GPUs and how Nebius optimized for speed and efficiency
✅ The key ingredients of "good compute" and what it takes to deliver top-tier performance in AI workloads
✅ How Nebius and NVIDIA are working together to push the boundaries of AI infrastructure

This interview is about the software-hardware synergy that powers AI innovation and the future of AI cloud computing.

#data #ai #nebius #nvidiagtc2025 #theravitshow
Hive Digital Technologies Executive Chairman Frank Holmes joined Steve Darling from Proactive to share significant operational milestones for HIVE Digital Technologies, including surpassing 10 Exahash per second (EH/s) in global Bitcoin mining hashrate and achieving key benchmarks in its high-performance computing (HPC) expansion. Holmes reported that HIVE has added 1 EH/s of mining capacity per week over the last month, bringing its current total to over 10 EH/s. The company is now ahead of schedule in reaching its Phase 1 objective of 11.5 EH/s by the end of June 2025. With a fleet efficiency of 20 Joules per Terahash, HIVE is demonstrating both speed and sustainability in its mining operations. Importantly, Holmes emphasized that this rapid growth is fully funded, giving the company confidence as it continues to scale toward its long-term target of 25 EH/s by the fourth quarter of 2025. This expansion positions HIVE as a serious contender in the global Bitcoin mining sector.

Beyond mining, HIVE's HPC subsidiary, Buzz HPC, is also exceeding expectations. The company announced that Buzz HPC has reached a $20 million annualized revenue run-rate from its GPU cloud services, a full month ahead of its original June 2025 target. This milestone reflects the growing demand for GPU-based infrastructure and the effectiveness of HIVE's strategy in building a complementary business beyond crypto. Buzz HPC now operates with a fleet of over 5,000 high-performance GPUs, enabling the company to serve clients in AI, machine learning, and cloud rendering. The success of Buzz HPC further validates HIVE's diversified growth model and positions it as a rising player in the cloud infrastructure and AI computing space.

#proactiveinvestors #hivedigitaltechnologies #tsxv #hive #nasdaq #CryptoMining #GreenEnergy #BitcoinMining #ParaguayMining #DataCenter #Exahash #S21Miners #DigitalAssets #ProactiveInvestors
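As a back-of-the-envelope aside, the quoted hashrate and efficiency together imply a fleet power draw (a rough sketch; the unit conversion is standard, but the resulting wattage is my estimate, not a figure from the company):

```python
# Implied fleet power: hashrate (TH/s) * efficiency (J/TH) = watts (J/s).
hashrate_ehs = 10          # quoted global hashrate, EH/s
eff_j_per_th = 20          # quoted fleet efficiency, J/TH

hashrate_ths = hashrate_ehs * 1e6       # 1 EH/s = 1e6 TH/s
power_watts = hashrate_ths * eff_j_per_th
print(f"~{power_watts / 1e6:.0f} MW")   # implied continuous draw, in MW
```

That order of magnitude is why fleet efficiency (J/TH) is the number miners compete on: at a fixed power budget, lower J/TH translates directly into more hashrate.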
How does a top AI company scale massive clusters and build AI for the enterprise? In this episode of The Liftoff with Keith, we talk to Ted Shelton, COO of Inflection AI, from the AI Infra Summit 2025. Ted shares how their team pivoted from consumer to enterprise after their Microsoft deal, why seamless infrastructure is key, and what it takes to build AI models that run on NVIDIA, AMD, and Intel.

Learn why “getting to the no” is the smartest move for founders, how enterprises can embrace sovereign AI, and how Inflection's approach to model customization unlocks massive business value.
Everyone Counts by Dr. Jürgen Weimann - Der Podcast über Transformation mit Begeisterung
In this episode I speak with Henrik Klages, Managing Partner at TNG Technology Consulting, about the fascinating and rapid development of large language models (LLMs) and what it means for all of us. Henrik explains in accessible terms how LLMs work, why GPUs matter more than CPUs, and why the myth of merely predicting the "next word" underestimates the true power of these systems. He also clears up common misconceptions about AI and uses concrete examples from practice and research to show how companies need to act today to avoid falling behind.
This week, we're diving into the world of PlayerUnknown's Battlegrounds (PUBG) and the highly anticipated new game from PUBG Studios! Is it a sequel, a spin-off, or something entirely unexpected? We break down the rumors, official teasers, and what fans can hope for in this next evolution of battle royale. Plus, we discuss PUBG's enduring legacy and how it stacks up against today's competition. Then, we shift gears to Computex 2025 Day 2, where the biggest names in tech unveiled cutting-edge hardware, from next-gen GPUs to AI-powered peripherals. We recap the most exciting announcements, surprise reveals, and what they mean for gamers and PC enthusiasts. Whether you're a PUBG diehard or a tech junkie, this episode has something for you. Drop in, loot up, and stay tuned for the latest! Our Patreon https://www.patreon.com/TechPrimeMedia Talking Gaming & Tech is Produced by Tech Prime Media and is part of the Dorkening Podcast Network and is brought to you by Deadly Grounds Coffee! https://youtu.be/7Y2rL7v75X4?si=iHKeO4TWA4njqmJF https://deadlygroundscoffee.com/ Send us your feedback online: https://pinecast.com/feedback/talking-gaming-tech/9037b3f8-99e0-4bdb-affb-1b3d3fc9020d This podcast is powered by Pinecast.
Dylan bonds with Nvidia's CFO and I try to keep the GPUs in actual democracies. Outtro Music: FaceTime, Karencici, 2018. https://open.spotify.com/track/2PNDZp0ultOJrQL4AVENPO?si=46cdf72cdffb40a3 Learn more about your ad choices. Visit megaphone.fm/adchoices
Andrew Lindsey, CEO of FLEXNODE, joined us on JSA TV at Metro Connect USA to share how the company is supporting the growing demands of AI and the rapid evolution of digital infrastructure. From AI-driven GPUs to the need for liquid cooling, Flexnode is paving the way for a future-proof, scalable infrastructure with its modular, adaptable designs.
Build and run your AI apps and agents at scale with Azure. Orchestrate multi-agent apps and high-scale inference solutions using open-source and proprietary models, no infrastructure management needed. With Azure, connect frameworks like Semantic Kernel to models from DeepSeek, Llama, OpenAI's GPT-4o, and Sora, without provisioning GPUs or writing complex scheduling logic. Just submit your prompt and assets, and the models do the rest. Using Azure's Model as a Service, access cutting-edge models, including brand-new releases like DeepSeek R1 and Sora, as managed APIs with autoscaling and built-in security. Whether you're handling bursts of demand, fine-tuning models, or provisioning compute, Azure provides the capacity, efficiency, and flexibility you need. With industry-leading AI silicon, including H100s, GB200s, and advanced cooling, your solutions can run with the same power and scale behind ChatGPT. Mark Russinovich, Azure CTO, Deputy CISO, and Microsoft Technical Fellow, joins Jeremy Chapman to share how Azure's latest AI advancements and orchestration capabilities unlock new possibilities for developers. ► QUICK LINKS: 00:00 - Build and run AI apps and agents in Azure 00:26 - Narrated video generation example with multi-agentic, multi-model app 03:17 - Model as a Service in Azure 04:02 - Scale and performance 04:55 - Enterprise grade security 05:17 - Latest AI silicon available on Azure 06:29 - Inference at scale 07:27 - Everyday AI and agentic solutions 08:36 - Provisioned Throughput 10:55 - Fractional GPU Allocation 12:13 - What's next for Azure? 12:44 - Wrap up ► Link References For more information, check out https://aka.ms/AzureAI ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Nvidia, as you probably know, makes chips — more specifically, GPUs, which are needed to power artificial intelligence systems. But as AI adoption ramps up, why does it feel like Nvidia's still the only chipmaker in the game? In this episode, why the California-based firm is, for now, peerless, and which companies may be angling to compete. Plus: Dwindling tourists worry American retailers, Dick's Sporting Goods sticks to its partly-sunny forecast and the share of single women as first-time homebuyers grows.

Every story has an economic angle. Want some in your inbox? Subscribe to our daily or weekly newsletter. Marketplace is more than a radio show. Check out our original reporting and financial literacy content at marketplace.org — and consider making an investment in our future.
Dell Technologies has announced Dell AI Factory advancements, including powerful and energy-efficient AI infrastructure, integrated partner ecosystem solutions and professional services to drive simpler and faster AI deployments.

Why it matters
AI is now essential for businesses, with 75% of organisations saying AI is key to their strategy and 65% successfully moving AI projects into production. However, challenges like data quality, security concerns and high costs can slow progress. The Dell AI Factory approach can be up to 62% more cost effective for inferencing LLMs on-premises than the public cloud and helps organizations securely and easily deploy enterprise AI workloads at any scale. Dell offers the industry's most comprehensive AI portfolio designed for deployments across client devices, data centres, edge locations and clouds. More than 3,000 global customers across industries are accelerating their AI initiatives with the Dell AI Factory.

Dell infrastructure advancements help organizations deploy and manage AI at any scale
Dell introduces end-to-end AI infrastructure to support everything from edge inferencing on an AI PC to managing massive enterprise AI workloads in the data center.

Dell Pro Max AI PC delivers industry's first enterprise-grade discrete NPU in a mobile form factor
The Dell Pro Max Plus laptop with Qualcomm AI 100 PC Inference Card is the world's first mobile workstation with an enterprise-grade discrete NPU. It offers fast and secure on-device inferencing at the edge for large AI models typically run in the cloud, such as today's 109-billion-parameter model. The Qualcomm AI 100 PC Inference Card features 32 AI-cores and 64 GB memory, providing power to meet the needs of AI engineers and data scientists deploying large models for edge inferencing.
Dell redefines AI cooling with innovations that reduce cooling energy costs by up to 60%
The industry-first Dell PowerCool Enclosed Rear Door Heat Exchanger (eRDHx) is a Dell-engineered alternative to standard rear door heat exchangers. Designed to capture 100% of IT heat generated with its self-contained airflow system, the eRDHx can reduce cooling energy costs by up to 60% compared to currently available solutions. With Dell's factory-integrated IR7000 racks equipped with future-ready eRDHx technology, organizations can:
Significantly cut costs and eliminate reliance on expensive chillers, given the eRDHx operates with water temperatures warmer than traditional solutions (between 32 and 36 degrees Celsius).
Maximise data center capacity by deploying up to 16% more racks of dense compute, without increasing power consumption.
Enable air cooling capacity up to 80 kW per rack for dense AI and HPC deployments.
Minimise risk with advanced leak detection, real-time thermal monitoring, and unified management of all rack-level components with the Dell Integrated Rack Controller.

Dell PowerEdge servers with AMD GPUs maximize performance and efficiency
Dell PowerEdge XE9785 and XE9785L servers will support AMD Instinct MI350 series GPUs, which offer 288 GB of HBM3E memory per GPU and deliver up to 35 times greater inferencing performance. Available in liquid-cooled and air-cooled configurations, the servers will reduce facility cooling energy costs.

Dell advancements power efficient and secure AI deployments and workflows
Because AI is only as powerful as the data that fuels it, organizations need a platform designed for performance and scalability. The Dell AI Data Platform updates improve access to high-quality structured, semi-structured and unstructured data across the AI lifecycle. Per new testing, Dell Project Lightning is the world's fastest parallel file system, delivering up to two times greater throughput than competing parallel file systems.
Project Lightning will accelerate training time for large-scale and complex AI workflows. Dell Data Lakehouse enhancements simplify AI workflows and accelerate use cases - such as recommendation engines, semantic...
As enterprises roll out production applications using AI model inferencing, they are finding that they are limited by the amount of memory that can be addressed by a GPU. This episode of Utilizing Tech features Steen Graham, founder of Metrum AI, discussing modern RAG and agentic AI applications with Ace Stryker and Stephen Foskett. Achieving the promise of AI requires access to data, and the memory required to deliver this is increasingly a focus of AI infrastructure providers. Technologies like DiskANN allow workloads to be offloaded to solid-state drives rather than system memory, and this surprisingly results in better performance. Another idea is to offload a large AI model to SSDs and deploy larger models on lower-cost GPUs, and this is showing a great deal of promise. Agentic AI in particular can be run in an asynchronous model, enabling it to take advantage of lower-spec hardware, including older GPUs and accelerators, reduced RAM capacity and performance, and even all-CPU infrastructure. All of this suggests that AI can be run with less money and power than generally assumed.

Guest: Steen Graham is the Founder and CEO of Metrum AI. You can connect with Steen on LinkedIn and learn more about Metrum AI on their website.

Guest Host: Ace Stryker is the Director of Product Marketing at Solidigm. You can connect with Ace on LinkedIn and learn more about Solidigm and their AI efforts on their dedicated AI landing page or watch their AI Field Day presentations from the recent event.

Hosts:
Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series
Jeniece Wnorowski, Head of Influencer Marketing at Solidigm
Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm

Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events.
For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
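The SSD-offload idea from the Utilizing Tech episode above can be illustrated in miniature. This is a hedged, illustrative sketch only (not DiskANN or Metrum AI code): it memory-maps an embedding store from a file, so the operating system pages vectors in from disk on demand and resident RAM stays small even for stores far larger than memory.

```python
# Illustrative sketch: serving an embedding store from disk via mmap,
# rather than loading it all into RAM (the idea behind SSD-offloaded search).
import os
import tempfile
import numpy as np

rng = np.random.default_rng(0)
dim = 64
vectors = rng.standard_normal((1000, dim)).astype(np.float32)

# Persist the vectors to a file, standing in for an SSD-resident store.
path = os.path.join(tempfile.mkdtemp(), "vectors.f32")
vectors.tofile(path)

# Memory-map the file: pages are read from disk on demand.
store = np.memmap(path, dtype=np.float32, mode="r").reshape(-1, dim)

def nearest(query: np.ndarray, store: np.ndarray) -> int:
    """Brute-force L2 nearest neighbor over the mapped store."""
    distances = np.linalg.norm(store - query, axis=1)
    return int(np.argmin(distances))

# A query very close to vector 123 should map back to index 123.
print(nearest(vectors[123] + 0.001, store))  # → 123
```

Real systems such as DiskANN layer a disk-friendly graph index on top so they avoid scanning every vector, but the memory economics are the same: RAM stays small while the SSD holds the data.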
In this episode of the Additive Snack Podcast, host Fabian Alefeld explores the critical role of mathematics in additive manufacturing with guest Harshil Goel, founder and CEO of Dyndrite. Harshil shares his unconventional entry into the additive manufacturing industry, driven by his deep background in mathematics and mechanical engineering. The conversation delves into how Dyndrite's software provides solutions for complex additive manufacturing challenges, from data preparation to materials and process development. Harshil also discusses various customer success stories and how their software helps streamline qualification processes, ultimately enhancing productivity. Additionally, they discuss the upcoming Dyndrite roadshow aimed at educating users on advanced additive manufacturing techniques, featuring hands-on sessions and practical demonstrations.

Have comments about the show, or wish to share your AM journey? Contact us at additive.snack@eos-na.com. The Additive Snack Podcast is brought to you by EOS. For more information about Dyndrite's innovative solutions, visit their website and connect with Harshil Goel on LinkedIn.

01:17 Meet Harshil Goel: From Mathematics to Additive Manufacturing
02:33 The Birth of Dyndrite: Solving Software Challenges
05:22 Understanding Dyndrite's Core Offerings
09:53 Dyndrite's Unique Approach to Build Preparation
15:07 Customer Success Stories and Real-World Applications
19:47 Empowering Engineers with Python Integration
23:04 Learning and Adapting to Dyndrite's Tools
26:51 Multilingual Proficiency and Family Background
27:36 Transitioning to Coding and Tool Integration
28:17 Optimizing Production with Dyndrite
30:05 Challenges and Innovations in Qualification
33:20 Deep Dive into Aviation Qualification
39:17 Additive Manufacturing Industry Trends
43:09 The Role of GPUs and AI in Additive Manufacturing
45:59 Dyndrite Roadshow and Conclusion
Martin Mao is the co-founder and CEO of Chronosphere, an observability platform built for the modern containerized world. Prior to Chronosphere, Martin led the observability team at Uber, tackling the unique challenges of large-scale distributed systems. With a background as a technical lead at AWS, Martin brings unique experience in building scalable and reliable infrastructure. In this episode, he shares the story behind Chronosphere, its approach to cost-efficient observability, and the future of monitoring in the age of AI.

What you'll learn:
The specific observability challenges that arise when transitioning to containerized environments and microservices architectures, including increased data volume and new problem sources.
How Chronosphere addresses the issue of wasteful data storage by providing features that identify and optimize useful data, ensuring customers only pay for valuable insights.
Chronosphere's strategy for competing with observability solutions offered by major cloud providers like AWS, Azure, and Google Cloud, focusing on a specialized end-to-end product.
The innovative ways in which Chronosphere's products, including their observability platform and telemetry pipeline, improve the process of detecting and resolving problems.
How Chronosphere is leveraging AI and knowledge graphs to normalize unstructured data, enhance its analytics engine, and provide more effective insights to customers.
Why targeting early adopters and tech-forward companies is beneficial for product innovation, providing valuable feedback for further improvements and new features.
How observability requirements are changing with the rise of AI and LLM-based applications, and the unique data collection and evaluation criteria needed for GPUs.

Takeaways:
Chronosphere originated from the observability challenges faced at Uber, where existing solutions couldn't handle the scale and complexity of a containerized environment.
Cost efficiency is a major differentiator for Chronosphere, offering significantly better cost-benefit ratios compared to other solutions, making it attractive for companies operating at scale.
The company's telemetry pipeline product can be used with existing observability solutions like Splunk and Elastic to reduce costs without requiring a full platform migration.
Chronosphere's architecture is purposely single-tenanted to minimize coupled infrastructure, ensuring reliability and continuous monitoring even when core components go down.
AI-driven insights for observability may not benefit from LLMs that are trained on private business data, which can be diverse and may cause models to overfit to a specific case.
Many tech-forward companies are using the platform to monitor model training, which involves GPU clusters and evaluation criteria unlike those for general CPU workloads.
The company found huge potential in scrubbing the diverse data and building knowledge graphs to serve as a source of useful information when problems are recognized.

Subscribe to Startup Project for more engaging conversations with leading entrepreneurs!
→ Email updates: https://startupproject.substack.com/

#StartupProject #Chronosphere #Observability #Containers #Microservices #Uber #AWS #Monitoring #CloudNative #CostOptimization #AI #ArtificialIntelligence #LLM #MLOps #Entrepreneurship #Podcast #YouTube #Tech #Innovation
An airhacks.fm conversation with Juan Fumero (@snatverk) about: tornadovm as a Java parallel framework for accelerating data parallelization on GPUs and other hardware, first GPU experiences with ELSA Winner and Voodoo cards, explanation of TornadoVM as a plugin to existing JDKs that uses Graal as a library, TornadoVM's programming model with @parallel and @reduce annotations for parallelizable code, introduction of kernel API for lower-level GPU programming, TornadoVM's ability to dynamically reconfigure and select the best hardware for workloads, implementation of LLM inference acceleration with TornadoVM, challenges in accelerating Llama models on GPUs, introduction of tensor types in TornadoVM to support FP8 and FP16 operations, shared buffer capabilities for GPU memory management, comparison of Java Vector API performance versus GPU acceleration, discussion of model quantization as a potential use case for TornadoVM, exploration of Deep Java Library (DJL) and its ND array implementation, potential standardization of tensor types in Java, integration possibilities with Project Babylon and its Code Reflection capabilities, TornadoVM's execution plans and task graphs for defining accelerated workloads, ability to run on multiple GPUs with different backends simultaneously, potential enterprise applications for LLMs in Java including model distillation for domain-specific models, discussion of Foreign Function & Memory API integration in TornadoVM, performance comparison between different GPU backends like OpenCL and CUDA, collaboration with Intel Level Zero oneAPI and integrated graphics support, future plans for RISC-V support in TornadoVM Juan Fumero on twitter: @snatverk
We're just a pair with a shorter show this week as we chat a bit more about Doom: The Dark Ages and the nature of PC system requirements (and the state of PC hardware and GPUs right now), plus Crashlands 2, Final Destination Bloodlines, then a bunch of your emails. This week's music: Finishing Move Inc. - Unchained Predator
Timestamps: 0:00 See ya on Wed, May 21 0:09 Epic's plan for Apple to block Fortnite 3:29 Intel Arc Pro B60, RX 9060 XT 4:27 OpenAI Codex, Grok's breakdown 5:50 MSI! 6:41 QUICK BITS INTRO 6:47 Spotify podcast play counts 7:14 The Steam data breach that wasn't 7:41 Australian rocket top fell off 8:07 BREAKING: Vader is bad guy NEWS SOURCES: https://lmg.gg/oRJxT Learn more about your ad choices. Visit megaphone.fm/adchoices
Robert Hallock, VP and GM at Intel, joins us for a deep dive into the rise of AI PCs and why they're more than just a buzzword. We unpack how new hardware accelerators are making smarter, faster, and more private computing possible, and why local, offline AI is about to become as essential as graphics in tomorrow's laptops. Robert explains Intel's ecosystem strategy, the real differences between CPUs, GPUs, and NPUs, and what it will take for AI features to reach everyone, not just creative pros but everyday users. Support the show on Patreon! http://patreon.com/aiinsideshow Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 0:00:00 - Podcast begins 0:01:41 - Defining the AI PC: What Makes It Different and Why Now? 0:03:47 - Architectural Shifts: How AI PCs Differ from Traditional PCs 0:05:29 - Intel's Role in the AI Ecosystem: Hardware, Software, and Industry Enablement 0:08:20 - Lessons from the Past: The Intel Web Tablet and Driving Industry Change 0:09:32 - Hardware Evolution: What Needs to Change for AI PCs? 0:11:02 - Real-World AI PC Use Cases: Enterprise, Creative, and Consumer Adoption Waves 0:13:51 - Local vs. Cloud AI: Privacy, Personalization, and the Value of On-Device AI 0:16:50 - Trust and Branding: The Meaning of “AI Inside” for Intel 0:19:26 - Accessibility and User Personas: Who Benefits from AI PCs Today? 0:22:30 - The Graphics-AI Connection: Why GPUs Became Essential for AI Workloads 0:25:10 - The Evolution of GPUs: From Graphics to AI Powerhouses 0:26:56 - Gaming's Role in Driving AI Adoption 0:28:00 - Historical Tech Drivers: Media, Typography, and Early AI Tools 0:29:37 - The Local AI Movement: Are We at an Inflection Point? 
0:30:44 - AI Hardware Breakdown: CPUs, GPUs, and NPUs Explained 0:33:49 - Internal Challenges: Education and Customer Awareness at Intel 0:36:06 - Robert Hallock's Role at Intel and Closing Thoughts 0:37:17 - Thank you to Robert Hallock and episode wrap-up Learn more about your ad choices. Visit megaphone.fm/adchoices
The AI revolution is underway, and the U.S. and China are racing to the top. At the heart of this competition are semiconductors—especially advanced GPUs that power everything from natural language processing to autonomous weapons. The U.S. is betting that export controls can help check China's technological ambitions. But will this containment strategy work—or could it inadvertently accelerate China's drive for self-sufficiency? Those who think chip controls will work argue that restricting China's access gives the U.S. critical breathing room to advance AI safely, set global norms, and maintain dominance. Those who believe chip controls are inadequate, or could backfire, warn that domestic chipmakers, like Nvidia and Intel, also rely on sales from China. Cutting off access could harm U.S. competitiveness in the long run, especially if other countries don't fully align with U.S. policy. As the race for AI supremacy intensifies, we debate the question: Can the U.S. Outpace China in AI Through Chip Controls? Arguing Yes: Lindsay Gorman, Managing Director and Senior Fellow of the German Marshall Fund's Technology Program; Venture Scientist at Deep Science Ventures Will Hurd, Former U.S. Representative and CIA Officer Arguing No: Paul Triolo, Senior Vice President and Partner at DGA-Albright Stonebridge Group Susan Thornton, Former Diplomat; Visiting Lecturer in Law and Senior Fellow at the Yale Law School Paul Tsai China Center Emmy award-winning journalist John Donvan moderates This debate was produced in partnership with Johns Hopkins University. This debate was recorded on May 14, 2025 at 6 PM at Shriver Hall, 3400 N Charles St Ste 14, in Baltimore, Maryland. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Episode 71: We chat about potential upcoming Intel Arc GPUs including the B770 and B580 24GB, discuss the RX 9060 XT and what AMD might do with pricing, and round out the chat with some discoveries about laptop GPUs.

CHAPTERS
00:00 - Intro
05:13 - Intel Arc B770, Is It Real?
22:10 - New Arc B580 Configurations?
26:18 - Lead-Up to the RX 9060 XT
39:04 - Price Considerations and Concerns with 9060 XT
1:04:30 - The RTX 5090 Laptop is a Joke
1:14:25 - Updates From Our Boring Lives

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Hosted on Acast. See acast.com/privacy for more information.
In Episode 6 of The Bitcoin Policy Hour, the Bitcoin Policy Institute team unpacks the emerging “Riyadh Accord,” a sweeping geopolitical realignment where the United States, Saudi Arabia, and other Gulf nations are bundling AI, Bitcoin, and techno-industrial leverage into a new framework of global influence.

As Blackwell chips begin to replace F-35s as diplomatic bargaining tools, and sovereign wealth funds quietly accumulate Bitcoin, Riyadh is fast becoming the epicenter of digital energy, intelligence infrastructure, and monetary power. The conversation explores how U.S. foreign policy is shifting from military entanglements toward high-tech trade agreements and capital co-investments — with AI and Bitcoin at the core.

PLUS, they explore the recent slew of BTC treasury companies amping up activity and how they fit into the picture of jurisdictional and memetic arbitrage.

Chapters:
00:00 - Introduction
01:45 - What Is the “Riyadh Accord”?
07:30 - Blackwell Chips Replace F-35s in Middle East Bargains
13:30 - AGI Infrastructure: Will the AI Run Happen in Riyadh?
17:00 - A New Multipolar World Centered on Energy and Compute
21:00 - Samourai Wallet, Legal Overreach, and Bitcoin's Core Ethos
26:30 - Policy Hypocrisy: Bitcoin Freedom vs. Surveillance State
31:00 - AI Feudalism and the Fight for Decentralized Money
36:00 - The Open Source AI vs. Corporate Subscription Future
41:00 - Bearer Assets: Bitcoin, GPUs, and Energy as Sovereign Tools
46:00 - Global Reflexivity and Bitcoin Treasury Companies
51:00 - Metaplanet, Nakamoto, and the Meme Wrapped in Arbitrage
56:00 - Bitcoin's Geopolitical Moment: What Comes Next?
01:01:00 - Closing Thoughts: Bitcoin Banks, Policy Risks, and Meme Economics

⭐ Join top policymakers, technologists, and Bitcoin industry leaders at the 2025 Bitcoin Policy Summit, June 25–26 in Washington, D.C.
Unlock the secrets behind the rapid evolution of robotics with Anshuman Kumar, head of hardware at Matic Robots, as we dissect what makes a robot more than just a machine. Discover how modern marvels, from everyday tools to cutting-edge autonomous vehicles, are reshaping our lives. Anshuman shares the technological breakthroughs that are fueling this transformation, revealing the vital roles that GPUs, AI, and a blend of mechanics, electronics, and algorithms play in creating robots capable of perceiving and interacting with their surroundings like never before. Anshuman Kumar is the Head of Hardware at Matic Robots, where he pioneered the mechanical design for Matic, the world's first truly autonomous, private, and perceptive floor-cleaning robot. Previously, he was a key engineer at Tesla Motors, resolving critical reliability and scaling challenges for the Model S and Model 3 traction inverters. With a Master's in Product Design from Carnegie Mellon University and a Bachelor's in Mechanical Engineering from IIT Delhi, Anshuman also founded and led the Carnegie Mellon Hyperloop team, which was awarded in the SpaceX Hyperloop competition.

In this episode, you'll hear about:
Exploration of the robotics spectrum from simple tools to complex autonomous vehicles.
Technological breakthroughs in AI, GPUs, and algorithms driving robotic advancements.
The role of cameras and computer vision in enhancing home robotics and ensuring privacy.
Matic Robots' innovative on-device processing to address privacy concerns in consumer robotics.
Cultural and market dynamics explored through a roti-making appliance's success in the US.
Importance of curiosity and tackling unglamorous problems in the startup and tech industry.

Follow and Review: We'd love for you to follow us if you haven't yet. Click that purple '+' in the top right corner of your Apple Podcasts app. We'd love it even more if you could drop a review or 5-star rating over on Apple Podcasts.
Simply select “Ratings and Reviews” and “Write a Review” then a quick line with your favorite part of the episode. It only takes a second and it helps spread the word about the podcast. Supporting Resources: Linkedin - https://www.linkedin.com/in/anshuman-kumar/ Website - https://maticrobots.com/ Contact: anshuman@maticrobots.com ; anshumankumar.iitd@gmail.com Matic Website : https://maticrobots.com/ Hardware Nation Episode : https://www.youtube.com/watch?v=AoUnXZg0Wb0&t=249s&pp=ygUVaGFyZHdhcmUgbmF0aW9uIG1hdGlj Matic Privacy : https://maticrobots.com/blog/why-matic-is-the-most-private-and-secure-robot-vacuum/ Matic Mopping : https://maticrobots.com/blog/the-magic-behind-matics-mopping/ Matic Sweeping : https://maticrobots.com/blog/why-matic-brushroll-is-different/ Alcorn Immigration Law: Subscribe to the monthly Alcorn newsletter Sophie Alcorn Podcast: Episode 16: E-2 Visa for Founders and Employees Episode 19: Australian Visas Including E-3 Episode 20: TN Visas and Status for Canadian and Mexican Citizens Immigration Options for Talent, Investors, and Founders Immigration Law for Tech Startups eBook
At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs.

Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud:
Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure
A2A, MCP, Kafka and Flink: The New Stack for AI Agents

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
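As a quick sanity check on the figures quoted above, the per-chip throughput implied by 42.5 exaflops across a 9,216-chip pod can be worked out directly (the numbers come from the episode summary; the arithmetic is ours):

```python
# Back-of-envelope arithmetic from the stated Ironwood pod figures.
chips_per_pod = 9216
pod_exaflops = 42.5

# 1 exaflop = 1,000 petaflops
per_chip_petaflops = pod_exaflops * 1000 / chips_per_pod
print(round(per_chip_petaflops, 2))  # → 4.61
```

So each chip contributes roughly 4.6 petaflops toward the pod total, which is the scale at which the thermal and power trade-offs Czop describes come into play.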
In 2014, when Lisa Su took over as CEO of Advanced Micro Devices, AMD was on the verge of bankruptcy. Su bet hard on hardware and not only pulled the semiconductor company back from the brink but also led it to surpass its historical rival, Intel, in market cap. Since the launch of ChatGPT made high-powered chips like AMD's “sexy” again, demand for chips has intensified exponentially, but so has the public spotlight on the industry — including from the federal government. In a live conversation at the Johns Hopkins University Bloomberg Center, as part of its inaugural Discovery Series, Kara talks to Su about her strategy in the face of the Trump administration's tariff and export-control threats, how to safeguard the US in the global AI race, and what she says when male tech leaders brag about the size of their GPUs. Listen to more from On with Kara Swisher here. Learn more about your ad choices. Visit podcastchoices.com/adchoices
This week on “Waking Up With AI,” Anna Gressel looks at how decentralized AI training could revolutionize the field by allowing for the collaborative use of advanced GPUs worldwide, expanding access to model development while raising interesting questions about export controls and regulatory frameworks. Learn more about Paul, Weiss's Artificial Intelligence Practice: https://www.paulweiss.com/practices/litigation/artificial-intelligence
At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE's foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google's continued investment in open source and scalable architecture.
Learn more from The New Stack about the latest insights with Google Cloud:
Google Kubernetes Engine Customized for Faster AI Work
KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era
Apache Ray Finds a Home on the Google Kubernetes Engine
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Gary Marcus is a cognitive scientist, author, and longtime AI skeptic. Marcus joins Big Technology to discuss whether large-language-model scaling is running into a wall. Tune in to hear a frank debate on the limits of “just add GPUs” and what that means for the next wave of AI. We also cover data-privacy fallout from ad-driven assistants, open-source bio-risk fears, and the quest for interpretability. Hit play for a reality check on AI's future — and the insight you need to follow where the industry heads next. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Today we're in conversation with Siddhant Mehta, Project Manager at Skanska, to explore how AI is transforming construction. From choosing the right tools to critiquing SaaS pricing models, Sid shares insights on tech adoption, AI coding, and the future of project management.
00:46 – Sid's Journey Abroad: Sid Mehta shares his story from Mumbai to the U.S., managing multimillion-dollar projects and finding his place in construction management.
02:03 – Building Tech Networks: How Skanska leverages emerging tech groups, vendor evaluations, and peer networks to spread innovation across teams.
03:55 – Tech Adoption Realities: Sid challenges perceptions of slow adoption in construction, highlighting why pilot projects need time to show results.
05:14 – The Feedback Gap: Why construction tech tools often miss the mark, and how missing field feedback hurts tool development.
06:43 – Choosing the Right Tool: Sid explains why not every tech solution fits every project, stressing the importance of aligning tools with project type and phase.
09:06 – SaaS Pricing Rant: A frank critique of SaaS pricing in construction, questioning project-based fees versus simpler subscriptions.
12:00 – Naming Names (Kinda): A playful yet pointed critique of familiar industry pricing models—without naming names (but we all know who).
17:05 – Rise of AI Coding: Exploring tools like Replit, Claude, and Cursor, and the rise of “vibe coding” in construction tech and software development.
23:02 – AI's Development Impact: How AI coding shifts the role of developers, and why front-end engineering faces more disruption than back-end.
28:00 – Data Centers & Demand: How AI's growth drives demand for data centers, reshaping infrastructure needs for GPUs, power, and cooling.
35:00 – Environmental Impacts: A look at the ecological consequences of data center expansion, from water usage to energy demands.
40:48 – AI Saves the Day: Real-world examples of AI replacing executive assistants, saving hours on email, scheduling, and admin tasks in construction.
45:00 – Skanska's Internal AI: How Skanska built internal chatbots to automate project schedules, saving schedulers hours every week.
47:26 – Ripple Effect of AI: Sid reflects on how AI's time savings can scale across thousands of employees, transforming workflows organization-wide.
50:00 – Marketing's AI Shift: Why SEO strategies are changing in an AI world, and how creative content is being reshaped by generative tools.
54:00 – AI's Rapid Acceleration: Closing thoughts on how quickly AI is evolving, and why getting on board now is key for construction leaders.
Go build something awesome!
CHECK OUT THE PARTNERS THAT MAKE OUR SHOW POSSIBLE: https://www.brospodcast.com/partners
FIND US ONLINE:
- Our website: https://www.brospodcast.com
- LinkedIn: /constructionbrospodcast
- Instagram: /constructionbrospodcast
- TikTok: https://www.tiktok.com/@constructionbrothers?lang=en
- Eddie on LinkedIn: /eddie-c-057b3b11
- Tyler on LinkedIn: /tylerscottcampbell
If you enjoy the podcast, please rate us on Apple Podcasts or wherever you listen to us! Thanks for listening!
Carl Peterson, CEO of Thunder Compute, uncovers how Thunder Compute is redefining GPU utilization by enabling network-attached virtual GPUs—dramatically slashing costs and democratizing access. Carl shares the startup's Y Combinator origin story, the impact of DeepSeek, and how virtualization is transforming AI development for individuals and enterprises alike. We also unpack GPU security, job disruption from AI, and the accelerating arms race in model development. A must-listen for anyone navigating AI, compute efficiency, and data protection.
When adopting Artificial Intelligence systems for one or more areas of the business, companies should start each project in a structured, well-planned way — thinking smaller, if need be — and then, guided by data, scale their use of AI more deliberately, avoiding the hype of investing resources and time in such an innovative technology without much planning, just to ride the wave of the moment. To discuss this topic, along with initiatives to democratize AI for large, mid-sized, and small businesses through processing on CPUs, which are cheaper and more energy-efficient than large GPU-based systems and are gaining ground in the market, and the marriage of Artificial Intelligence with Open Source development, which encourages collaboration and the integration of ecosystems with different partners, Start Eldorado welcomes Sandra Vaz, country manager of Red Hat for Brazil, who talked about these and other topics with host Daniel Gonzales. The program airs every Wednesday at 9 p.m. on FM 107.3 across Greater São Paulo, as well as on the site, app, digital channels, and voice assistants. See omnystudio.com/listener for privacy information.
In this episode of Gradient Dissent, host Lukas Biewald talks with Sualeh Asif, the CPO and co-founder of Cursor, one of the fastest-growing and most loved AI-powered coding platforms. Sualeh shares the story behind Cursor's creation, the technical and design decisions that set it apart, and how AI models are changing the way we build software. They dive deep into infrastructure challenges, the importance of speed and user experience, and how emerging trends in agents and reasoning models are reshaping the developer workflow. Sualeh also discusses scaling AI inference to support hundreds of millions of requests per day, building trust through product quality, and his vision for how programming will evolve in the next few years.
⏳ Timestamps:
00:00 How Cursor got started and why it took off
04:50 Switching from Vim to VS Code and the rise of CoPilot
08:10 Why Cursor won among competitors: product philosophy and execution
10:30 How user data and feedback loops drive Cursor's improvements
12:20 Iterating on AI agents: what made Cursor hold back and wait
13:30 Competitive coding background: advantage or challenge?
16:30 Making coding fun again: latency, flow, and model choices
19:10 Building Cursor's infrastructure: from GPUs to indexing billions of files
26:00 How Cursor prioritizes compute allocation for indexing
30:00 Running massive ML infrastructure: surprises and scaling lessons
34:50 Why Cursor chose DeepSeek models early
36:00 Where AI agents are heading next
40:07 Debugging and evaluating complex AI agents
42:00 How coding workflows will change over the next 2–3 years
46:20 Dream future projects: AI for reading codebases and papers
Episode 69: The GeForce RTX 5060 Ti 8GB is really bad; there are many problems with it (especially at the price). So is the upcoming AMD Radeon RX 9060 XT 8GB in trouble? We discuss all of that in today's episode, and yes, we're getting into VRAM yet again.
CHAPTERS
00:00 - Intro
00:33 - 8GB GPUs are Dead on Arrival
13:54 - The Main Problem is the Name
34:56 - Can it Use the Advertised Features?
41:18 - AMD Radeon RX 9060 XT Rumor Talk
59:16 - Updates From Our Boring Lives
SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw
SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed
LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social
Hosted on Acast. See acast.com/privacy for more information.
Everyone's chasing bigger AI. The real opportunity? Smarter scaling. Distributed computing is quietly rewriting the rules of what's possible—not just for tech giants, but for everyone building with AI. We're talking cost. We're talking scale. And we're definitely talking disruption. Tom Curry, CEO and Co-Founder of DistributeAI, joins us as we dig into the future of distributed power and practical AI performance.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Distributed Computing for Affordable AI
Open Source vs. Proprietary AI Models
GPU Demand and Compute Limitations
Edge Computing and Privacy Concerns
Small Business AI Compute Solutions
Future Trends in AI Model Sizes
Impact of Open Source AI Dominance
Timestamps:
00:00 Rising Importance of AI Compute
06:21 AI Model Resource Constraints
09:24 AI Models' Efficiency vs. Complexity
12:24 Edge Compute for Daily Tasks
16:00 Compute Cost Drives AI Market
16:58 AI Models: Balancing Cost and Innovation
20:43 Adaptability in Rapidly Changing Business
Keywords: Distributed computing, compute, GPUs, generative AI, ChatGPT, large language models, open source models, proprietary models, affordable AI, scale, Distribute AI, spare compute, Tom Curry, mid-level businesses, accessible AI ecosystem, API access, power grid, NVIDIA, OpenAI, tokens, chain of thought, model size, reasoning models, edge computing, cell phones analogy, data privacy, DeepSeek, Google Gemini 3, Elo scores, open models, hybrid models, centralized model, OpenAI strategy, Anthropic, Claude tokens, commoditization, applications, government contracts, integration, UX and UI, technology advancements, private source AI, business leaders, AI deployment strategy, flexibility in AI.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
A daily non-partisan, conversational breakdown of today's top news and breaking news stories.
Headlines:
– Trump Says He's ‘Very Angry' at Putin; Says He Doesn't Care If Foreign Car Prices Rise (03:40)
– Americans' Economic Outlook A Bit More Pessimistic, Despite Egg Prices Plummeting (08:50)
– Will Anyone Be Fired In The Aftermath Of Signalgate? (11:30)
– Trump Says There Are “Methods” For Him To Pursue A Third Term (17:30)
– Myanmar, Thailand Quake Death Toll Rises Above 1,600 (19:50)
– RFK Jr. Forces Out Peter Marks, FDA's Top Vaccine Scientist (22:45)
– Leader Of Violent MS-13 Gang Arrested In Virginia, Feds Say (26:00)
– Columbia President Is Replaced as Trump Threatens University's Funding (27:30)
– Sam Altman: ChatGPT's Viral Image-Generation AI is ‘Melting' OpenAI's GPUs (32:20)
– NCAA Final Four Is Set On Men's Side (34:20)
– On This Day In History (36:00)
Thanks To Our Sponsors:
– Vanta – Get $1,000 off
– Shopify – $1 per-month trial Code: monews
– Industrious – Coworking office. 30% off day pass
– LMNT – Free Sample Pack with any LMNT drink mix purchase
– Athletic Greens – AG1 Powder + 1 year of free Vitamin D & 5 free travel packs
– BetterHelp – 10% off your first month
In this episode, we do a Studio Ghibli-like rendition of The Vergecast. First, Nilay and David discuss some big news in the gadget world, from the mysteriously viral midrange Canon camera to the upgrades we're expecting out of Apple in the next few months. Plus, is it over for Amazon's Echo brand? After all that, The Verge's Kylie Robison joins the show to discuss everything happening at OpenAI: the company launched a new image generator inside of ChatGPT, and it immediately became both a huge hit and a big mess. (Par for the course with OpenAI, really.) Kylie also explains why Perplexity is probably not buying TikTok, no matter how much it might want to. Finally, in the lightning round, it's time for everyone's favorite segment, Brendan Carr Is a Dummy, followed by the latest on the Signal attack-planning chaos in the government, some news about Elon Musk pressuring Reddit CEO Steve Huffman, and what's next for the car industry with huge tariffs looming. Oh, and a little bit of exciting e-bike news.
Further reading:
From Meta: Bringing the Magic of Friends Back to Facebook
Apple's AirPods Max with USB-C will soon support lossless audio
The Apple Watch may get cameras and Apple Intelligence
Apple's WWDC 2025 event starts June 9th
Don't expect an overhauled Messages app in iOS 19
Amazon tests renaming Echo smart speakers and smart displays to just ‘Alexa'
OpenAI reshuffles leadership as Sam Altman pivots to technical focus
OpenAI upgrades image generation and rolls it out in ChatGPT and Sora
ChatGPT's new image generator is delayed for free users
ChatGPT is turning everything into Studio Ghibli art
OpenAI says ‘our GPUs are melting' as it limits ChatGPT image generation requests
OpenAI expects to earn $12.7 billion in revenue this year
Nvidia Infinite Creative
Microsoft adds ‘deep reasoning' Copilot AI for research and data analysis
Google says its new ‘reasoning' Gemini AI models are the best ones yet
Google is rolling out Gemini's real-time AI video features
Perplexity's bid for TikTok continues
Trump's FCC says it will start investigating Disney, too
From Status: Sounding the Carr Alarm
Trump officials leaked a military strike in a Signal group chat
The Atlantic releases strike group chat messages
And the Most Tortured Signal-Gate Backronym Award goes to… | The Verge
Elon Musk pressured Reddit's CEO on content moderation | The Verge
Trump's plans to save TikTok may fail to keep it online, Democrats warn
Rivian spins out secret e-bike lab into a new company called Also
BYD beats Tesla
Trump says he will impose a 25 percent tariff on imported vehicles
Email us at vergecast@theverge.com or call us at 866-VERGE11, we love hearing from you. Learn more about your ad choices. Visit podcastchoices.com/adchoices