Episode 75: Jarrod from Jarrod's Tech joins us to talk about gaming laptops. We discuss the situation with horrible laptop GPU names, poor VRAM configurations, absurd pricing for higher-tier models, and whether gaming laptops actually make sense to begin with.

JARROD'S TECH
Check out Jarrod's channel: https://www.youtube.com/@jarrodstech
Check out Jarrod's website: https://gaminglaptop.deals/

CHAPTERS
00:00 - Intro
01:32 - VRAM on Laptops
06:25 - RTX 5090 Laptop vs Desktop
18:22 - Absurd Laptop Pricing
30:37 - Do Gaming Laptops Make Sense?
38:43 - Laptop Displays

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Hosted on Acast. See acast.com/privacy for more information.
Welcome to the first episode of the Virtually Speaking Podcast's VMware Cloud Foundation 9 series! In this inaugural installment, hosts Pete Flecha and John Nicholson sit down with Paul Turner, Broadcom's leader of VCF product management, to kick off an in-depth exploration of VMware Cloud Foundation 9. This series will dive deep into the platform's latest innovations, capabilities, and transformative potential for private cloud computing.

Key Highlights:
The rise of private cloud and its cost-efficiency compared to public cloud services
Sovereign cloud capabilities and regulatory compliance
Innovations in GPU as a service
Native support for VMs and Kubernetes in a single platform
Advances in storage (vSAN) with enhanced performance and resilience
Networking improvements with native VPC setup in vCenter
VMware's partnerships with hyperscalers and 500+ cloud service providers

Join us as we unpack the exciting developments in VCF 9 across multiple episodes.
From energy bottlenecks to proprietary GPU ecosystems, TensorWave CEO Darrick Horton explains why today's AI scale is unsustainable—and how open-source hardware, smarter networking, and nuclear power could be the fix.

QUOTES
Darrick Horton: "The energy crisis is getting worse every day. It's very hard to find data center capacity—especially capacity that can scale. Five years ago, 10 or 20 megawatts was considered state-of-the-art. Now, 20 is nothing. The real hyperscale AI players are looking at 100 megawatts minimum, going into the gigawatt territory. That's more than many cities combined just to power one cluster."
Charna Parkey: "We're still training models in a very brute-force way—throwing the biggest datasets possible at the problem and hoping something useful emerges. That's not sustainable. At some point, we have to shift toward smarter, more intentional training methods. We can't afford to be wasteful at this scale."

TIMESTAMPS
[00:00:00] Introduction
[00:01:00] Founding TensorWave
[00:04:00] AMD as a Viable Alternative
[00:08:00] Open Source as a Startup Enabler
[00:09:30] Launching ScalarLM
[00:12:00] ScalarLM Impact and Reception
[00:14:30] Roadmap for 2025
[00:16:00] Technical Advantages of AMD
[00:18:00] Emerging Open Source Infrastructure
[00:20:00] Broader Societal Issues AI Must Address
[00:22:00] AI's Impact on Global Energy
[00:26:00] Fundamental Hardware vs. Human Efficiency
[00:30:00] Data Center Density Evolution
[00:34:00] Advice to Founders and Tech Trends
[00:38:00] AI Energy Challenges
[00:44:00] AI's Rapid Impact vs. Internet
[00:46:00] Monopoly vs. Democratization in AI
[00:50:00] Close to Season Wrap Discussion and Predictions
Cisco's AI Channel Playbook: Cassie Roach on Partner Enablement and Infrastructure Innovation (Podcast)

"AI is a once-in-a-generation opportunity — and Cisco is making it real for partners." — Cassie Roach, Global VP, Cloud and AI Infrastructure Partner Sales, Cisco

At Cisco Live 2025 in San Diego, Technology Reseller News publisher Doug Green spoke with Cassie Roach, Cisco's Global Vice President of Cloud and AI Infrastructure Partner Sales, about the company's bold steps to transform AI hype into tangible partner opportunity. With major announcements around AI infrastructure, including AI Pods, Nexus HyperFabric, and GPU-intensive servers, Cisco is positioning itself not just as a networking leader — but as the channel's go-to platform for AI-ready data centers.

Key Cisco AI Updates for Partners:
AI-Ready Infrastructure Specialization: A new certification that helps partners align with customer POCs, scale faster, and prove ROI.
Black Belt Training & Partner Tools: Designed to educate, equip, and incentivize partner sellers with co-selling platforms, growth planning, and layered rewards.
Marketing Velocity Central: Cisco-branded campaign kits and industry-specific go-to-market resources for partners.
AI Pods: Modular infrastructure for training, fine-tuning, and inferencing workloads — with "small, medium, and large" sizing for pilot-to-production journeys.

"We're creating an easy button for partners — even in a complex AI environment," Roach explained. Cisco's approach focuses on frictionless engagement — empowering partners with everything from vertical use-case blueprints to hands-on support for opportunity identification through PXP Growth Finder.

Roach emphasized that success depends on enabling partners at every level — not just executives or system integrators, but also frontline sellers, who now have access to tools that simplify the AI value proposition and drive sales. She also highlighted how AI is being securely embedded across Cisco's portfolio — from infrastructure to Webex Collaboration and end-to-end security, allowing customers to move from pilots to production with confidence.

"This isn't just about AI," Roach said. "It's about unlocking the entire Cisco portfolio — in a way that creates real stickiness, real customer outcomes, and real partner growth."

To explore Cisco's partner programs and AI infrastructure resources, visit cisco.com, or log into the partner portal via Sales Connect.
"At IBM, we really work on two emerging technologies: hybrid cloud and AI for enterprise. These two are deeply connected. Hybrid cloud for us means that regardless of where the data sits whether the compute is on-premise, off-premise, or across multiple clouds. We believe the client should have the control and flexibility to choose where to run and place their data. If you look at the facts, a very high percentage of client data is still on-premise. It hasn't moved to the cloud for obvious reasons. So, how can you scale AI if you don't have proper access to that data? AI is all about the data. That's why we believe in a strategy that redefines and rethinks everything. We call it the Great Technology Reset." - Hans Dekkers Fresh out of the studio, Hans Dekkers, General Manager of IBM Asia Pacific, joins us to explore how enterprise AI is reshaping business across the region. He shares his journey with IBM after business school, reflecting on the evolution of personal computers to AI today. Hans explains IBM's unique approach combining hybrid cloud infrastructure with AI for Enterprise, emphasizing how their granite models and data fabric enable businesses and governments to maintain control over their data while scaling AI capabilities. He highlights customer stories from Indonesian telecoms company to internal IBM transformations, showcasing how companies are re-engineering everything from HR to supply chains using domain-specific AI models. Addressing the challenges of AI implementation, he emphasizes the importance of foundational infrastructure and governance, while advocating for smaller, cost-effective models over GPU-heavy approaches. Closing the conversation, Hans shares his vision for IBM's growing presence in Asia as the key to enterprise AI success. Episode Highlights: [00:00] Quote of the Day by Hans Dekkers [01:00] Introduction: Hans Dekkers from IBM [05:00] Key career lesson from Hans Dekker [06:51] IBM focuses on two emerging technologies: hybrid cloud and AI for Enterprise, deeply connected [09:27] "Your data needs to remain your data" - IBM's fundamental AI principle for enterprise clients [10:00] IBM's approach: Small, nimble, cost-effective AI models that can be owned and governed by clients [13:59] "The cost of AI is still too high. It's about a hundred times too high" - IBM CEO's perspective on AI costs [14:44] Small domain-specific models example: Banking AI trained for financial analysis, not Russian poetry [18:00] IBM's internal transformation: HR, supply chain, and consulting completely re-engineered with AI [21:18] Major partnership announcement: Indonesian telecom embracing IBM's watsonx platform [22:23] AI agents demo: Multiple agents (HR, finance, legal) debating and constructing narratives [25:00] "Everyone talks about AI equals GPU" - Hans wishes clients understood that inferencing is more important [27:00] IBM's Asia Pacific vision: Reestablishing growing presence and differentiated technology approach [28:00] Closing Profile: Hans Dekkers, General Manager IBM Asia Pacific and China: https://www.linkedin.com/in/hans-a-t-dekkers/ Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast. 
Analyse Asia Main Site: https://analyse.asia
Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl
Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245
Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia
Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/
Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia
Analyse Asia Threads: https://www.threads.net/@analyseasia
Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup
Subscribe Newsletter on LinkedIn: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
Carmen Li spent decades in financial services across trading floors and data companies before spotting a massive inefficiency in the AI/compute economy. After managing global data partnerships at Bloomberg, she witnessed AI startups struggling with unpredictable compute costs that could swing their margins from healthy profits to devastating losses overnight. Drawing parallels to how airlines hedge oil prices through futures markets, Carmen realized that compute—despite being one of the fastest-growing commodities—lacked basic risk management tools. Within months of leaving Bloomberg, she built Silicon Data into the world's first GPU compute risk management platform, raising $5.7M without ever creating a pitch deck and publishing the industry's first GPU compute index on the Bloomberg Terminal.

Topics Discussed:
The systemic problem of compute cost volatility destroying AI company margins
Why compute lacks the risk management tools available in every other commodity market
Building the world's first GPU compute index and benchmarking service
Raising venture capital without pitch decks through product-first demonstrations
Operating as a solo non-technical founder leading a team of engineers
The unique buyer dynamics when selling to CTOs, portfolio managers, and AI researchers simultaneously

GTM Lessons For B2B Founders:
Price on value, not cost, and let customer conversations reshape your understanding: Carmen admits that every client conversation changes her valuation of the product's impact, typically making it bigger than initially thought. She prices based on the value delivered rather than cost structure. B2B founders should remain flexible in their value proposition and pricing as they learn more about customer impact through direct engagement.
Product demonstrations beat pitch decks for technical buyers: Carmen raised $5.7M without ever creating a pitch deck, instead letting prospects interact directly with her product and writing a simple memo. For technical products solving complex problems, demonstrating actual capabilities often proves more effective than polished presentations. B2B founders should prioritize building working products over perfecting sales materials.
Embrace being the "dumbest person in the room" for learning velocity: Carmen describes consistently being the least technical person in rooms full of CTOs, AI researchers, and GPU experts, but leverages this as a learning advantage. She asks hard questions and co-creates products on the fly based on these conversations. B2B founders should view knowledge gaps as opportunities for rapid learning rather than weaknesses to hide.
Target systemic problems that span multiple sophisticated buyer types: Silicon Data serves everyone from chip designers to hedge funds to AI companies, requiring Carmen to handle technical GPU questions, financial modeling queries, and AI workflow concerns in single meetings. This breadth creates natural expansion opportunities and defensibility. B2B founders should look for problems that affect multiple stakeholder types within their target market.
Leverage unique background intersections to spot obvious-but-overlooked opportunities: Carmen's combination of financial services expertise and data company experience let her quickly identify that compute needed the same risk management tools available in every other commodity market. The solution was "extremely intuitive" to her but invisible to others.
B2B founders should examine how their unique background combinations reveal opportunities others might miss.

Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Google has done it again: it introduced the mobile version of NotebookLM, which not only reads for you but now also manufactures something for you to listen to. You upload a document and, presto, within ten minutes two synthetic voices are chatting about it as if they actually understood it. Come along and listen as humans try to be funny in an age when even Google is attempting that, just with less sweat and more GPUs.
Interview with Stephen Witt
Altman's Gentle Singularity
Sutskever video: start at 5:50-6:40
Paris on Apple Glass
OpenAI slams court order to save all ChatGPT logs, including deleted chats
Disney and Universal Sue A.I. Firm for Copyright Infringement
Apple paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Futurism on the paper
Could AI make a Scorsese movie? Demis Hassabis and Darren Aronofsky discuss
YouTube Loosens Rules Guiding the Moderation of Videos
Meta Is Creating a New A.I. Lab to Pursue 'Superintelligence'
Meta and Yandex are de-anonymizing Android users' web browsing identifiers
Amazon 'testing humanoid robots to deliver packages'
Google battling 'fox infestation' on roof of £1bn London office
23andMe's Former CEO Pushes Purchase Price Nearly $50 Million Higher
Code to control vocal production with hands
Warner Bros. Discovery to split into two public companies by next year
Social media creators to overtake traditional media in ad revenue this year

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Stephen Witt

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: agntcy.org smarty.com/twit monarchmoney.com with code TWIT spaceship.com/twit
In this episode of the 'AI in Business' podcast, Principal Group Product Manager Will Guyman of Microsoft and Senior Director Lyndi Wu of NVIDIA explore how healthcare providers are deploying AI at scale to drive clinical and operational transformation. With host Matthew DeMello, they discuss the combined power of NVIDIA's GPU acceleration and Microsoft's Azure cloud to enable seamless AI integration—from the exam room to the data center. Will explains how AI is alleviating administrative burdens, improving imaging diagnostics, and reshaping physician workflows through ambient documentation and agentic systems. Lyndi expands on the infrastructure demands of scaling these systems, highlighting NVIDIA's full-stack ecosystem approach and the role of healthcare startups in fast-tracking AI deployment across hospitals and life sciences organizations. This episode is sponsored by Microsoft and NVIDIA. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast!
In this episode of the Data Center Frontier Show, we sit down with Kevin Cochrane, Chief Marketing Officer of Vultr, to explore how the company is positioning itself at the forefront of AI-native cloud infrastructure, and why they're all-in on AMD's GPUs, open-source software, and a globally distributed strategy for the future of inference. Cochrane begins by outlining the evolution of the GPU market, moving from a scarcity-driven, centralized training era to a new chapter focused on global inference workloads. With enterprises now seeking to embed AI across every application and workflow, Vultr is preparing for what Cochrane calls a “10-year rebuild cycle” of enterprise infrastructure—one that will layer GPUs alongside CPUs across every corner of the cloud. Vultr's recent partnership with AMD plays a critical role in that strategy. The company is deploying both the MI300X and MI325X GPUs across its 32 data center regions, offering customers optimized options for inference workloads. Cochrane explains the advantages of AMD's chips, such as higher VRAM and power efficiency, which allow large models to run with fewer GPUs—boosting both performance and cost-effectiveness. These deployments are backed by Vultr's close integration with Supermicro, which delivers the rack-scale servers needed to bring new GPU capacity online quickly and reliably. Another key focus of the episode is ROCm (Radeon Open Compute), AMD's open-source software ecosystem for AI and HPC workloads. Cochrane emphasizes that Vultr is not just deploying AMD hardware; it's fully aligned with the open-source movement underpinning it. He highlights Vultr's ongoing global ROCm hackathons and points to zero-day ROCm support on platforms like Hugging Face as proof of how open standards can catalyze rapid innovation and developer adoption. “Open source and open standards always win in the long run,” Cochrane says. “The future of AI infrastructure depends on a global, community-driven ecosystem, just like the early days of cloud.” The conversation wraps with a look at Vultr's growth strategy following its $3.5 billion valuation and recent funding round. Cochrane envisions a world where inference workloads become ubiquitous and deeply embedded into everyday life—from transportation to customer service to enterprise operations. That, he says, will require a global fabric of low-latency, GPU-powered infrastructure. “The world is going to become one giant inference engine,” Cochrane concludes. “And we're building the foundation for that today.” Tune in to hear how Vultr's bold moves in open-source AI infrastructure and its partnership with AMD may shape the next decade of cloud computing, one GPU cluster at a time.
Predictive AI Quarterly is our new format on the Data Science Deep Dive. Every three months we talk about developments in predictive AI - compact, critical, and practical. We start with an overview of current news and trends, then it gets hands-on: we report on what we tried ourselves, what worked well, and what didn't.

**Summary**
TabPFN is a foundation model built specifically for tabular data that can solve prediction and classification tasks without fine-tuning
Fine-tuning options: alongside the paid offering from Prior Labs, there is an open-source repo for fine-tuning TabPFN that is being actively developed
With TabICL there is another foundation model for tabular data; it is trained on synthetic data, focuses on classification, and promises fast inference even on large datasets (up to 500k rows)
Foundation models for time series: companies such as IBM, Google, and Salesforce are developing their own foundation models for time-series forecasting (e.g., TTMs, TimesFM, Moirai); so far these are trained on real time series
The GIFT benchmark serves as the standard for comparing time-series models; it shows that an adapted TabPFN is surprisingly capable on time series as well
Hands-on: TabPFN can be used much like scikit-learn and is especially practical when a GPU is available; the barrier to entry is very low (see the sketch after the links below)
Going forward, multimodal extensions (e.g., images), quantized variants, and further alternatives to TabPFN are expected; the field of foundation models for structured data is evolving rapidly

**Links**
Podcast episode #72: TabPFN: Die KI-Revolution für tabulare Daten mit Noah Hollmann
TabPFN: fine-tuning offering from Prior Labs
GitHub repo: Finetune TabPFN v2
GitHub repo: Zero-Shot Time Series Forecasting with TabPFNv2
TabICL: GitHub repo: TabICL – Tabular In-Context Learning
Workshop @ ICML 2025: Foundation Models for Structured Data (July 18, 2025 in Vancouver)
Blog articles & studies: Tiny Time Mixers (TTMs) from IBM Research, Moirai: A Time Series Foundation Model by Salesforce, inwt blog article "TabPFN: Die KI-Revolution für tabulare Daten"
Hugging Face Spaces & models: TimesFM foundation model for time series from Google Research, GIFT-Eval Forecasting Leaderboard
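To make the hands-on point above concrete, here is a minimal sketch of the scikit-learn-style workflow, assuming the open-source tabpfn package is installed; the dataset is just an illustration and constructor arguments may differ between package versions.

```python
# Minimal sketch: TabPFN used like a scikit-learn classifier.
# Assumes the open-source `tabpfn` package is installed; a GPU speeds this up
# but is not strictly required for small datasets.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()       # no task-specific training or fine-tuning needed
clf.fit(X_train, y_train)      # fit mostly stores the data for in-context prediction
pred = clf.predict(X_test)
print(accuracy_score(y_test, pred))
```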
Nvidia: many people associate the company above all with graphics processors for video games. Yet the US company is now one of the most important players in artificial intelligence. Nvidia GPUs deliver the enormous compute power used, among other things, to train AI models. Finanz Informatik also worked with Nvidia on the S-KIPilot, an agentic AI for Sparkasse employees, because the AI application runs in Finanz Informatik's data centers equipped with Nvidia chips, i.e. 100% on-prem. Why does that matter? What exactly does agentic AI mean, and what does the future of this field look like? And what else is Nvidia up to? Host Jonas Ross puts these and other questions to Dr. Jochen Papenbrock and Markus Hacker of Nvidia in this episode of "Alles Digital?!", the Finanz Informatik podcast on innovation in the financial world.
Talk Python To Me - Python conversations for passionate developers
If you're looking to leverage the insane power of modern GPUs for data science and ML, you might think you'll need to use some low-level programming language such as C++. But the folks over at NVIDIA have been hard at work building Python SDKs which provide a nearly native level of performance when doing Pythonic GPU programming. Bryce Adelstein Lelbach is here to tell us about programming your GPU in pure Python. (A short illustrative sketch follows the show links below.)

Episode sponsors
Posit
Agntcy
Talk Python Courses

Links from the show
Bryce Adelstein Lelbach on Twitter: @blelbach
Episode Deep Dive write up: talkpython.fm/blog
NVIDIA CUDA Python API: github.com
Numba (JIT Compiler for Python): numba.pydata.org
Applied Data Science Podcast: adspthepodcast.com
NVIDIA Accelerated Computing Hub: github.com
NVIDIA CUDA Python Math API Documentation: docs.nvidia.com
CUDA Cooperative Groups (CCCL): nvidia.github.io
Numba CUDA User Guide: nvidia.github.io
CUDA Python Core API: nvidia.github.io
NVIDIA's First Desktop AI PC ($3,000): arstechnica.com
Google Colab: colab.research.google.com
Compiler Explorer ("Godbolt"): godbolt.org
CuPy: github.com
RAPIDS User Guide: docs.rapids.ai
Watch this episode on YouTube: youtube.com
Episode #509 deep-dive: talkpython.fm/509
Episode transcripts: talkpython.fm

--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy
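For a flavor of what "pure Python" GPU programming looks like with Numba's CUDA support (one of the libraries linked above), here is a minimal sketch; it assumes a CUDA-capable GPU and the numba package, and it is an illustration rather than anything shown in the episode.

```python
# Minimal sketch of a pure-Python CUDA kernel via Numba (assumes a CUDA-capable
# GPU and the numba package; production code would move data explicitly with
# cuda.to_device instead of relying on implicit host-to-device transfers).
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)              # global thread index
    if i < x.size:                # guard against out-of-range threads
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.zeros_like(x)

threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks_per_grid, threads_per_block](x, y, out)
print(out[:5])                    # [ 0.  3.  6.  9. 12.]
```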
Dr Randy McDermott takes us behind the scenes of fire science's most critical software tool in this conversation about the Fire Dynamics Simulator (FDS) developed at NIST. As one of the developers, Randy offers valuable insights into how this essential modelling tool is maintained, improved, and adapted to meet the evolving challenges of the fire safety community.

The conversation begins with a look at the development process itself, based on a greater-picture roadmap while also addressing practical issues reported by users. This balance between vision and responsiveness has helped FDS maintain its position as the gold standard in fire modelling. Randy unpacks the massive validation guide (over 1,200 pages) and explains how users should approach it to understand model capabilities and uncertainties. The guide, along with all the validation cases, is available in the GitHub repository here: https://github.com/firemodels/fds

Rather than blindly applying FDS to any problem, he emphasises the importance of identifying similar validated cases and understanding the limitations of the software for specific applications. The discussion tackles emerging challenges like battery fires and mass timber construction – areas where traditional fire modelling approaches face significant hurdles. Randy addresses the limitations of current models while outlining pathways for future development, including potential integration with external specialised models and improvements in chemistry modelling.

Finally, we also get to talk about computational costs and efficiency. As Randy explains the implementation of GPU acceleration and the challenges of incorporating detailed chemistry, listeners gain a deeper appreciation of the trade-offs involved in advanced fire modelling.

Whether you're an FDS user, fire safety engineer, or simply curious about computational modelling, this episode offers valuable perspectives on the past, present and future of the tool that underpins modern fire safety science.

Oh, and Randy is not just an FDS developer - he is also a prolific researcher. You can find more about his scientific works here: https://www.nist.gov/people/randall-j-mcdermott

As always, MASSIVE THANKS TO THE NIST GROUP AND THEIR COLLABORATORS FOR BUILDING AND MAINTAINING THE SINGLE MOST IMPORTANT PIECE OF SOFTWARE WE HAVE!!! You guys are not thanked enough!

The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.
Craig Dunham is the CEO of Voltron Data, a company specializing in GPU-accelerated data infrastructure for large-scale analytics, AI, and machine learning workloads. Before joining Voltron Data, he served as CEO of Lumar, a SaaS technical SEO platform, and held executive roles at Guild Education and Seismic, where he led the integration of Seismic's acquisition of The Savo Group and drove go-to-market strategies in the financial services sector. Craig began his career in investment banking with Citi and Lehman Brothers before transitioning into technology leadership roles. He holds an MBA from Northwestern University and a BS from Hampton University.

In this episode…
In a world where efficiency and speed are paramount, how can companies quickly process massive amounts of data without breaking the bank on infrastructure and energy costs? With the rise of AI and increasing data volumes from everyday activities, organizations face a daunting challenge: achieving fast and cost-effective data processing. Is there a solution that can transform how businesses handle data and unlock new possibilities?

Craig Dunham, a B2B SaaS leader with expertise in go-to-market strategy and enterprise data systems, tackles these challenges head-on by leveraging GPU-accelerated computing. Unlike traditional CPU-based systems, Voltron Data's technology uses GPUs to greatly enhance data processing speed and efficiency. Craig shares how their solution helps enterprises reduce processing times from hours to minutes, enabling organizations to run complex analytics faster and more cost-effectively. He emphasizes that Voltron Data's approach doesn't require a complete overhaul of existing systems, making it a more accessible option for businesses seeking to enhance their computing capabilities.

In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz interviews Craig Dunham, CEO at Voltron Data, about building high-performance data systems. Craig delves into the challenges and solutions in today's data-driven business landscape, how Voltron Data's innovative solutions are revolutionizing data analytics, and the advantages of using GPU over CPU for data processing. He also shares valuable lessons on leading high-performing teams and adapting to market demands.
Every once in a while at Where to Stick It headquarters, a recording falls through the cracks. It gets glanced over for weeks and forgotten about, like a mustard bottle you find in the back of the refrigerator. However, today we're shining a light on one such episode that expired in 2021. Dan blows out his back while bribing people for video cards at Micro Center, Pete Lobo and Prospect talk about their upcoming vacations, Dan shares his knowledge of otters, and the boys discuss who would win a fight between a shark and a wolf.

Support the show

Catch new episodes of the Where to Stick It Podcast every Tuesday and Thursday. If you like the show, please consider supporting us on Patreon, where we upload exclusive content each month for only $3 a month.
Packaging MLOps Tech Neatly for Engineers and Non-engineers // MLOps Podcast #322 with Jukka Remes, Senior Lecturer (SW dev & AI), AI Architect at Haaga-Helia UAS, Founder & CTO at 8wave AI.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
AI is already complex—adding the need for deep engineering expertise to use MLOps tools only makes it harder, especially for SMEs and research teams with limited resources. Yet good MLOps is essential for managing experiments, sharing GPU compute, tracking models, and meeting AI regulations. While cloud providers offer MLOps tools, many organizations need flexible, open-source setups that work anywhere—from laptops to supercomputers. Shared setups can boost collaboration, productivity, and compute efficiency.

In this session, Jukka introduces an open-source MLOps platform from Silo AI, now packaged for easy deployment across environments. With Git-based workflows and CI/CD automation, users can focus on building models while the platform handles the MLOps.

// Bio
Founder & CTO, 8wave AI | Senior Lecturer, Haaga-Helia University of Applied Sciences
Jukka Remes has 28+ years of experience in software, machine learning, and infrastructure. Starting with SW dev in the late 1990s and analytics pipelines for fMRI research in the early 2000s, he's worked across deep learning (Nokia Technologies), GPU and cloud infrastructure (IBM), and AI consulting (Silo AI), where he also led MLOps platform development. Now a senior lecturer at Haaga-Helia, Jukka continues evolving that open-source MLOps platform with partners like the University of Helsinki. He leads R&D on GenAI and AI-enabled software, and is the founder of 8wave AI, which develops AI Business Operations software for next-gen AI enablement, including regulatory compliance of AI.

// Related Links
Open source-based MLOps k8s platform setup originally developed by Jukka's team at Silo AI - free for any use and installable in any environment from laptops to supercomputing: https://github.com/OSS-MLOPS-PLATFORM/oss-mlops-platform
Jukka's new company: https://8wave.ai

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jukka on LinkedIn: /jukka-remes

Timestamps:
[00:00] Jukka's preferred coffee
[00:39] Open-Source Platform Benefits
[01:56] Silo MLOps Platform Explanation
[05:18] AI Model Production Processes
[10:42] AI Platform Use Cases
[16:54] Reproducibility in Research Models
[26:51] Pipeline setup automation
[33:26] MLOps Adoption Journey
[38:31] EU AI Act and Open Source
[41:38] MLOps and 8wave AI
[45:46] Optimizing Cross-Stakeholder Collaboration
[52:15] Open Source ML Platform
[55:06] Wrap up
ICICI Prudential AMC is eyeing a massive ₹10,000 crore IPO with a record 17 banks on board, while India considers opening its EV policy to Chinese firms despite geopolitical tensions. In the food delivery wars, Rapido is challenging Zomato-Swiggy with a zero-commission model. Meanwhile, Andhra Pradesh's push for 10-hour workdays is raising worker rights alarms, and Group Captain Shubhanshu Shukla is set to make history as India's next man in space. Also making headlines: AMD's GPU talks, surging defence stocks, and brands cashing in on the Jagannath Rath Yatra.
Folks, on this week's episode we hear about scientists creating the world's smallest violin, North Dakota finally installing flush toilets at historic sites, a 200-year-old condom displayed in a museum, a shipping scam involving $90k GPUs, and a man who got stuck in a store's massage chair.

Become a patron for weekly bonus eps and more stuff!: www.patreon.com/whatatimepod
Check out our YouTube channel: https://www.youtube.com/c/whatatimetobealive
Get one of our t-shirts, or other merch, using this link! https://whatatimepod.bigcartel.com/
whatatimepod.com
Join our Discord chat here: discord.gg/jx7rB7J
Theme music by Naughty Professor: https://www.naughtyprofessormusic.com/
@pattymo // @kathbarbadoro // @eliyudin // @whatatimepod
©2025 What A Time LLC
Join host Shruthi to discover how organizations use GPU-accelerated computing on AWS. Container Specialist Re Alvarez Parmar shows how Rivian optimizes GPU usage for autonomous vehicles with Amazon EKS. AWS Financial Services expert Sudhir Kalidindi explains real-time fraud detection processing 100B+ events annually (a short illustrative sketch follows the links below). Learn architectural patterns and tools to maximize performance while controlling costs for AI workloads and next-gen applications.

Learn More:
AWS News Blog: New Amazon EC2 P6-B200 instances powered by NVIDIA Blackwell GPUs to accelerate AI innovations: https://aws.amazon.com/blogs/aws/new-amazon-ec2-p6-b200-instances-powered-by-nvidia-blackwell-gpus-to-accelerate-ai-innovations/
Accelerating Fraud Detection in Financial Services with NVIDIA RAPIDS on AWS: https://github.com/aws-samples/ai-credit-fraud-workflow
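The RAPIDS fraud-detection sample linked above is built around GPU dataframes. As a rough illustration only (not code from the episode or the sample repo, with made-up column names and thresholds), cuDF lets you express a feature-engineering step in pandas-like code that runs on the GPU:

```python
# Illustrative sketch of GPU dataframe work in the RAPIDS style.
# Assumes RAPIDS cuDF is installed and a CUDA GPU is available;
# the toy data and the "suspicious" rule are purely hypothetical.
import cudf

# Toy transaction table living in GPU memory.
tx = cudf.DataFrame({
    "account_id": [1, 1, 2, 2, 2, 3],
    "amount":     [25.0, 9100.0, 40.0, 35.0, 8750.0, 12.0],
})

# Flag accounts whose largest single transaction dwarfs their average spend,
# a crude stand-in for the feature-engineering stage of a fraud pipeline.
stats = tx.groupby("account_id")["amount"].agg(["mean", "max"]).reset_index()
stats["suspicious"] = stats["max"] > 50 * stats["mean"]
print(stats.to_pandas())
```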
SoundCore earbuds: https://amzn.to/3FAcZGI
MAKE MONEY WITH YOUR GRAPHICS CARD:
Episode 496 of Reversim ("רברס עם פלטפורמה") - Bumpers No. 86: Ran, Dotan, and Alon in the virtual studio (via Riverside.fm - thanks!) with a series of short items that caught our attention recently (and this time a bit more than usual) - interesting blogs, things from GitHub or Twitter, and all sorts of other things we've seen, before everything fills up with AI.
Join hosts Mike and Mark for a riveting new episode of the Moonshots Podcast, where they delve into the extraordinary leadership journey of Jensen Huang, the visionary co-founder and CEO of NVIDIA. Discover how Huang's innovative thinking and resilience have propelled NVIDIA to the forefront of the technology industry, shaping the future of AI, high-performance computing, and autonomous driving.

Read Short Biography: https://www.apolloadvisor.com/nvidia-ceo-jensen-huang-lessons-for-entrepreneurs/

Episode Highlights:
INTRO: The episode starts with a segment from 60 Minutes, showcasing the incredible power of NVIDIA and its influence on the future of AI. Clip: The future of AI (2m41)
FOUNDING NVIDIA: Travel back to 2009, when Jensen recalls NVIDIA's early days. Learn how the three founding members gave the company its legs and gain valuable insights into securing venture capital funding. Clip: The first six months (2m31)
LEADERSHIP INSIGHTS: Jensen Huang shares a profound perspective on the importance of suffering and resilience, hitting us with some hard truths about leadership and perseverance. Clip: Expectations versus resilience (1m40)
OUTRO: The episode concludes with Jensen offering his wisdom on our perception of time and how we can always make room for what truly matters. Clip: There's always time (2m48)

About Jensen Huang:
Jensen Huang, born on February 17, 1963, in Taiwan, moved to the U.S. at age ten and pursued engineering, earning degrees from Oregon State University and Stanford University. Huang co-founded NVIDIA in 1993, and under his leadership, the first GPU was introduced in 1999, transforming NVIDIA into a leader in AI and high-performance computing. His philanthropic efforts and recognition, including a $50 million donation to Oregon State University and being named to the TIME 100 list, reflect his profound impact on technology and society.

About Moonshots Podcast:
Moonshots Podcast helps entrepreneurs become the best versions of themselves by overcoming self-doubt and shooting for the moon. We learn out loud, deconstructing the success of the world's greatest thinkers and entrepreneurs to apply their insights to our lives.

Thanks to our monthly supporters: Joanne Carbone, Emily Rose Banks, Malcolm Magee, Natalie Triman Kaur, Ryan N., Marco-Ken Möller, Mohammad, Lars Bjørge, Edward Rehfeldt III, 孤鸿 月影, Fabian, Jasper Verkaart, Andy Pilara, ola, Austin Hammatt, Zachary Phillips, Mike Leigh Cooper, Gayla Schiff, Laura KE, Krzysztof, Roar Nikolay Ytre-Eide, Stef, Roger von Holdt, Jette Haswell, venkata reddy, Ingram Casey, Ola, rahul grover, Ravi Govender, Craig Lindsay, Steve Woollard, Lasse Brurok, Deborah Spahr, Barbara, Samoela, Jo Hatchard, Kalman Cseh, Berg De Bleecker, Paul Acquaah, MrBonjour, Sid, Liza Goetz, Konnor Ah kuoi, Marjan Modara, Dietmar Baur, Bob Nolley

★ Support this podcast on Patreon ★
“When you talk about inferencing, you have a real application. The application has data. The application has compute. The application needs to interface with other third-party applications. So, you need a full general-purpose cloud to coexist with a GPU cloud to power inferencing application at scale,” DigitalOcean CEO Paddy Srinivasan tells Bloomberg Intelligence senior technology analyst Anurag Rana. In this episode of Tech Disruptors, the two discuss DigitalOcean's edge as a digitally-native cloud service provider in a market served largely by the three hyperscalers. In this conversation, Srinivasan also touches upon the company's differentiated approach to capital spending, AI inferencing vs. training and which pockets still have a good growth runway.
“We're part of the governance mechanism telcos need for AI to be safe, secure, and performant,” said Stephen Douglas, Head of Market Strategy at Spirent. In this Technology Reseller News podcast, Publisher Doug Green interviews Stephen Douglas of Spirent, one of the world's leading test, measurement, and assurance providers for the ICT sector. The conversation takes a deep dive into how telecom service providers are uniquely positioned to lead in the era of sovereign AI — and why rigorous testing and assurance will be critical to their success. As AI moves from training clusters into live consumer and enterprise applications, the demands on network infrastructure are shifting dramatically. From unpredictable traffic bursts to strict data sovereignty regulations, Douglas explains how these changes represent both a threat and an opportunity for telcos. Central to the discussion is the concept of Sovereign AI — the idea that governments and mission-critical industries will require localized, compliant, and secure AI infrastructures. Far from being a constraint, Douglas argues this trend offers telcos a new chance to move beyond the “fat dumb pipe” role and become national AI enablers. By leveraging their deep, distributed infrastructure, trusted relationships, and regulatory experience, service providers can offer GPU-as-a-service, AI model orchestration, and privacy-compliant platforms. But as Douglas warns, with opportunity comes accountability. Telcos will need to validate AI model behavior, prevent hallucinations, ensure security doesn't degrade performance, and continuously test compliance. Spirent, Douglas notes, plays a crucial role in enabling this transformation: “From performance validation to continuous compliance audits, we're helping service providers become the trusted AI platforms of the future.” To learn more about Spirent: https://www.spirent.com
Is HTML5 gaming finally ready for prime time? After multiple failed attempts over the past decade, something fundamental has changed.

In this episode, we sit down with Dmitry Kachmar, CEO of Playgama, who recently raised $3M betting that HTML5 gaming is about to explode. Thanks to GPU advances from the AI boom, 5G networks, and platforms desperate to break free from app store monopolies, browser-based games can now rival native mobile experiences.

We dive deep into why Discord, Telegram, and every platform with a web browser are racing to add games, how developers can leverage this distribution revolution, and what it means for the future of mobile gaming. Dmitry shares candid insights on platform-specific design strategies, the challenges of retention without app store lock-in, and why he believes over 50% of new games will be HTML5-based within five years.

Whether you're a game developer, studio executive, or industry observer, this conversation reveals why the next chapter of gaming might not be in app stores—but everywhere else.

Key topics covered:
The technology convergence enabling AAA-quality browser games
How to monetize without the 30% app store tax
Platform-specific strategies for Discord vs Telegram vs web portals
Why viral mechanics are no longer optional
The distribution-as-a-service model reshaping game publishing

About the guest: Dmitry Kachmar is CEO of Playgama, former Yandex executive with multiple exits, Harvard grad, and adtech veteran now focused on democratizing game distribution.
In today's Tech3 from Moneycontrol, PhonePe brings on banking veteran Zarin Daruwala ahead of its IPO, while Sarvam AI secures India's largest GPU subsidy under the IndiaAI Mission. Historian Vikram Sampath launches his AI startup NAAV, and Blinkit is fast catching up with Zomato in user numbers. Plus, Apple deepens its India play with Tata, BIS cracks down on Amazon and Flipkart, and tragedy strikes after RCB's IPL victory parade.
Our 211th episode with a summary and discussion of last week's big AI news! Recorded on 05/31/2025.

Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. Join our Discord here! https://discord.gg/nTyezGSKwP

In this episode:
Recent AI podcast covers significant AI news: startups, new tools, applications, investments in hardware, and research advancements.
Discussions include the introduction of various new tools and applications such as Flux's new image generating models and Perplexity's new spreadsheet and dashboard functionalities.
A notable segment focuses on OpenAI's partnership with the UAE and discussions on potential legislation aiming to prevent states from regulating AI for a decade.
Concerns around model behaviors and safety are discussed, highlighting incidents like Claude Opus 4's blackmail attempt and Palisade Research's tests showing AI models bypassing shutdown commands.

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:39) News Preview
(00:02:50) Response to Listener Comments

Tools & Apps
(00:07:10) Anthropic launches a voice mode for Claude
(00:10:35) Black Forest Labs' Kontext AI models can edit pics as well as generate them
(00:15:30) Perplexity's new tool can generate spreadsheets, dashboards, and more
(00:18:43) xAI to pay Telegram $300M to integrate Grok into the chat app
(00:22:42) Opera's new AI browser promises to write code while you sleep
(00:24:17) Google Photos debuts redesigned editor with new AI tools

Applications & Business
(00:25:13) Top Chinese memory maker expected to abandon DDR4 manufacturing at the behest of Beijing
(00:30:04) Oracle to Buy $40 Billion Worth of Nvidia Chips for First Stargate Data Center
(00:31:47) UAE makes ChatGPT Plus subscription free for all residents as part of deal with OpenAI
(00:35:34) NVIDIA Corporation (NVDA) to Launch Cheaper Blackwell AI Chip for China, Says Report
(00:38:39) The New York Times and Amazon ink AI licensing deal

Projects & Open Source
(00:41:11) DeepSeek's distilled new R1 AI model can run on a single GPU
(00:45:19) Google Unveils SignGemma, an AI Model That Can Translate Sign Language Into Spoken Text
(00:47:08) Open-sourcing circuit tracing tools
(00:49:42) Hugging Face unveils two new humanoid robots

Research & Advancements
(00:52:33) PANGU PRO MOE: MIXTURE OF GROUPED EXPERTS FOR EFFICIENT SPARSITY
(00:58:55) DataRater: Meta-Learned Dataset Curation
(01:05:05) Incorrect Baseline Evaluations Call into Question Recent LLM-RL Claims
(01:10:17) Maximizing Confidence Alone Improves Reasoning
(01:11:00) Guided by Gut: Efficient Test-Time Scaling with Reinforced Intrinsic Confidence
(01:11:44) One RL to See Them All
(01:15:05) Efficient Reinforcement Finetuning via Adaptive Curriculum Learning

Policy & Safety
(01:17:58) Trump's 'Big Beautiful Bill' could ban states from regulating AI for a decade
(01:24:31) Researchers claim ChatGPT o3 bypassed shutdown in controlled test
(01:30:10) Anthropic's new AI model turns to blackmail when engineers try to take it offline
(01:31:09) Anthropic Faces Backlash As Claude 4 Opus Can Autonomously Alert Authorities
(01:35:37) Claude helps users make bioweapons
(01:35:49) The Claude 4 System Card is a Wild Read
Nvidia (NVDA) shares rose, then fell, after the company posted earnings that beat expectations but included an $8 billion adjustment due to export uncertainty with China. Despite this, Olivier Blanchard remains bullish on Nvidia's A.I.-driven growth prospects. He notes that the company's dominant share of the data center GPU market, combined with a strong comeback in its gaming segment, positions it well for continued success.

======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X - https://twitter.com/schwabnetwork
Follow us on Facebook - https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about
In this interview, Jack Backes, Principal Strategist at Provident Data Centers, dives deep into the latest trends shaping the data center industry and explores how developers are reimagining the future to meet the demands of high-density computing, artificial intelligence (AI), and GPU-driven workloads.
This week, Federico drops a big gaming surprise on John after the two of them cover the latest Switch 2, MSI, and Anbernic handheld news.

Also available on YouTube here.

Links and Show Notes

The Latest Portable Gaming News
Nintendo
Samsung making its chips at potentially higher volumes than expected
Nintendo Turns to Samsung to Make Chips, Ramp Up Switch 2 Production
Mario Kart World originally planned for OG Switch
Ask the Developer Vol. 18: Mario Kart World — Part 1

SteamOS
Valve's huge Steam Deck update is now ready for everyone, including rival AMD handhelds
How To Install SteamOS On The ROG Ally And ROG Ally X Handhelds

Anbernic RG34XXSP First Impressions
Big Forehead, Bigger Potential
Screen Shenanigans? UPDATE: Anbernic Responds
Anbernic RG34XXSP's 'Five-Head' Bezel Masks a True 4:3 Screen

MSI Claw A8
MSI Claw A8 handheld gaming PC has a Ryzen Z2 Extreme chip with 16-core graphics - Liliputing
MSI Claw A8 with AMD Ryzen Z2 Extreme Debuts at Computex 2025
MSI's new handheld gaming PC ditches Intel for AMD

Thunderbolt 5 eGPUs
It's still early but the TB5 eGPUs are coming
Lilbits: Asus ROG XG Station eGPU dock with Thunderbolt 5, AYANEO Flip 1S handheld gaming PC, and Microsoft's new command line text editor
Sparkle unveils Studio-G Ultra 850, its first Thunderbolt 5 eGPU solution
Gigabyte presents AORUS AI BOX, GeForce RTX 5090 desktop GPU with Thunderbolt 5: https://videocardz.com/newz/gigabyte-presents-aorus-ai-box-geforce-rtx-5090-desktop-gpu-with-thunderbolt-5

A Very Ticci Surprise
Inside Tech
Lian Li A3
Razer PC Remote Play Officially Launches

Subscribe to NPC XL
NPC XL is a weekly members-only version of NPC with extra content, available exclusively through our new Patreon for $5/month. Each week on NPC XL, Federico, Brendon, and John record a special segment or deep dive about a particular topic that is released alongside the "regular" NPC episodes. You can subscribe here: https://www.patreon.com/c/NextPortableConsole

Leave Feedback for John, Federico, and Brendon: NPC Feedback Form

Credits
Show Art: Brendon Bigley
Music: Will LaPorte

Follow Us Online
On the Web: MacStories.net, Wavelengths.online
Follow us on Mastodon: NPC, Federico, John, Brendon
Follow us on Bluesky: NPC, MacStories, Federico Viticci, John Voorhees, Brendon Bigley

Affiliate Linking Policy: https://www.macstories.net/privacy-policy/
In this episode, Daniel Newman and Patrick Moorhead sit down with Jeetu Patel, President and Chief Product Officer at Cisco, to explore the transformative impact of AI on technology and business. Jeetu shares insights into Cisco's strategic focus on infrastructure, security, and partnerships to drive AI innovation. The handpicked topics for this week are: AI and Industry Transformation: Discussion on the seismic shift in technology driven by AI with special guest: Cisco's Jeetu Patel. Cisco's role in providing low-latency connectivity and reducing GPU idle time. Strategic investments and partnerships for networking and security infrastructure. Microsoft Build Highlights: Comprehensive end-to-end development cycle offerings from Microsoft with a focus on Agentic Web and AI augmentation. Advancements in AI-assisted code development and security measures. Google I/O Announcements: Launch of AI mode in search and upgrades to various AI tools. Introduction of real-time translation in meetings. Discussion on Google's competitiveness in the AI space. Market and Economic Updates: Analysis of bond yields and auctions. Impact of potential new tariffs on EU trade. Discussion on Apple's manufacturing strategy and potential shift to US production. Earnings Highlights: Strong results from Palo Alto Networks and Snowflake. Lenovo's impressive growth, particularly evident in infrastructure solutions. Expectations for NVIDIA's upcoming earnings report. Indications of strong AI demand and stable CapEx spending. The Six Five Summit Preview: Teaser of high-profile speakers and AI-focused content, 100% virtual and free to attend. The Six Five Summit Don't miss The Six Five Summit: AI Unleashed 2025 — a high-impact, four-day virtual event, June 16–19. Explore how the world's leading companies are putting AI into action.
Fresh off Red Hat Summit, Chris is eyeing an exit from NixOS. What's luring him back to the mainstream? Our highlights, and the signal from the noise at open source's biggest event of the year. Sponsored By: Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, ensuring that if a device isn't trusted and secure, it can't log into your cloud apps. Support LINUX Unplugged Links:
The real Bitcoin Pizza Day story: Laszlo spent nearly 80,000 Bitcoin on pizza in 2010, not just 10,000. Plus how his GPU mining discovery changed Bitcoin forever and why Satoshi wasn't happy about it. You're listening to Bitcoin Season 2. Subscribe to the newsletter, trusted by over 12,000 Bitcoiners: https://newsletter.blockspacemedia.com Charlie and Colin reveal the shocking truth about Bitcoin Pizza Day that mainstream media got wrong. Laszlo didn't just spend 10,000 Bitcoin on pizza - he spent nearly 80,000 Bitcoin throughout 2010! We dive deep into how his GPU mining discovery revolutionized Bitcoin, why Satoshi sent him a concerned email, and how this "penance" may have actually saved Bitcoin's decentralization in its early days. **Notes:** • Laszlo spent ~80,000 Bitcoin total on pizza in 2010 • GPU mining was 10x more powerful than CPU mining • Bitcoin hash rate increased 130,000% by end of 2010 • Laszlo had 1-1.5% of entire Bitcoin supply 2009-2010 • His wallet peaked at 43,854 Bitcoin • Total wallet flows were 81,432 Bitcoin Timestamps: 00:00 Start 00:28 Lies, damn lies.. and pizza 02:21 What actually happened 05:46 It's actually WAY MORE than you think 11:15 Arch Network 11:47 Laszlo "saved" Bitcoin 19:12 Pizza or penance?
No more misinformation - AMD finally announced their Radeon RX 9060 XT, and it turns out that reports of a $449 MSRP were greatly exaggerated. And what the heck is going on with Ryzen and EPYC right now? We have EPYC branded Ryzen desktop chips, and Ryzen branded EPYC chips for workstations (gross oversimplification)?! Developers! (get fired), Silverstone retro, and also there's a clipboard.Timestamps:00:00 Intro01:45 Food with Josh03:57 Radeon RX 9060 XT announced, starts at 299 USD08:53 Threadripper 900012:07 Crucial's T710 Gen5 SSD announced13:54 Microsoft injecting "ai" into File Explorer21:36 A new dual-GPU graphics card in 2025??26:27 SilverStone adds FLP02 to retro case lineup29:14 MS brings back the DOS-era Edit program31:18 Netflix plans generative "ai" ads during streams in 202633:59 Podcast sponsor NordLayer35:29 (in)Security Corner53:49 Gaming Quick Hits59:00 Kent's NZXT H3 Flow mATX case review1:09:46 A quick look at MSI's RTX 5070 GAMING TRIO OC1:16:48 Picks of the Week ★ Support this podcast on Patreon ★
What if your rooftop solar could do more than just power your fridge? Karl Andersen believes it can—and should—power the future of AI. Karl unpacks the grid's biggest vulnerabilities, why data centers lack critical power infrastructure, and how we can turn solar-powered homes into the building blocks of a decentralized compute network. Lektra's tech fuses distributed energy with cloud computing—think “Raspberry Pi meets Tesla Powerwall meets AI.” The result? A game-changing business model where solar homeowners become micro data centers—and start earning like one. From national security concerns to GPU monetization, Karl walks us through why our energy and data systems are broken—and how his patented solution bridges both. Expect to learn:
Python is the dominant language for AI and data science applications, but it lacks the performance and low-level control needed to fully leverage GPU hardware. As a result, developers often rely on NVIDIA's CUDA framework, which adds complexity and fragments the development stack. Mojo is a new programming language designed to combine the simplicity of Python with the performance and low-level control that GPU programming demands. The post Mojo and Building a CUDA Replacement with Chris Lattner appeared first on Software Engineering Daily.
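As a rough illustration of the complexity that episode is about, below is a minimal CUDA C++ vector-add kernel with the host-side boilerplate it requires. This is only a hedged sketch: the kernel name, sizes, and launch configuration are illustrative and not taken from the episode, but it shows the kind of low-level code Python developers otherwise hand off to the CUDA toolchain.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one element of the two input vectors.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and explicit copies: the bookkeeping a Python-first
    // stack usually hides behind a library boundary.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f\n", hc[0]);         // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Mojo's pitch, per the framing above, is to let this kind of kernel be expressed in Python-like syntax without a separate CUDA toolchain fragmenting the stack.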
In this special edition of the Washington AI Network Podcast, recorded live at NVIDIA's GTC 2025 conference, host Tammy Haddad takes listeners inside the future of AI innovation with startups. NVIDIA's Startup Guru, Howard Wright, shares how the Inception program supports over 27,000 startups worldwide, unlocking capital, customers, and compute to fuel AI breakthroughs. Fay Arjomandi, CEO of mimik, introduces her edge-first, privacy-focused approach to agent-based AI systems. ArangoDB's Corey Sommers explains why graph databases are the foundation of intelligent, responsive generative AI platforms. And nTop's Alec Guay and Todd McDevitt demonstrate how their GPU-powered design tools are slashing engineering cycles across aerospace, energy, and manufacturing. From defense to design, this episode showcases the real-world power of AI startups—and how the NVIDIA ecosystem helps them scale.
With all of the drama surrounding AI, it's easy to overlook the foundations that deliver the data from which AI value is built. Databases and storage infrastructure are critical components that have to work in concert with AI plans, and returning guests Henry Baltazar and James Curtis join host Eric Hanselman to discuss what's been happening and what enterprises need to know about the future. Databases and storage management systems have been intertwined for a long time, and AI pressures are tightening that connection. Storage systems perform analytics on the data they store to optimize its handling, tracking use and characteristics. The same insights that aid in compression and tiering are also useful in classifying data for AI. Data classification has always been a challenge for enterprises, as storage systems are typically disconnected from the data owners and applications that use them. Intelligent storage systems have been able to intuit the nature of content, including mapping databases and virtual machines. Databases have been able to leverage storage capabilities like snapshotting for resilience. Into this mix arrives a new set of AI-focused storage and database offerings that target AI uses. The question is whether the native database and storage systems can do enough of what's needed. They already store key data and have valuable insights and classification capabilities. Some vendors are attaching GPU clusters to storage systems to provide high-performance AI model training functionality. The major issue for most is data placement: shifting petabytes of data is no small task, and concerns about data security and the costs involved now loom much larger. More S&P Global Content: Next in Tech | Ep. 213: AI and Privacy Next in Tech | Ep. 203: NRF conference shows AI challenges and rewards For S&P Global Subscribers: Rapid data growth and migrations increase storage burdens Amazon S3 Tables and automated metadata highlight recent AWS Storage enhancements 2025 Trends in Cloud and Cloud Native Credits: Host/Author: Eric Hanselman Guests: Henry Baltazar, James Curtis Producer/Editor: Adam Kovalsky Published With Assistance From: Sophie Carr, Feranmi Adeoshun, Kyra Smith
We're reaching deep into the grab bag again this week, with a wide array of topics like the fascinating world of shorthand and stenography machines (plus an open source project to build your own, naturally), replacing your thermostat (there's open source stuff for that too), the perils of running out of data on a small mobile carrier, questionable uses for an AI-driven Darth Vader, some follow-up on Will's recent work tracking microstutter in games, and more.The Open Steno Project: https://www.openstenoproject.org/ Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
In this episode, I'm joined by Marissa Hummon, whose team partnered with NVIDIA to tuck a credit-card-sized GPU computer with AI software into the humble electricity meter. We discuss how that edge computer digests 32,000 waveform samples per second, spots failing transformers, and orchestrates virtual power plants (VPPs) — plus the guardrails that keep it from becoming Skynet. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.volts.wtf/subscribe
Charter is buying Cox to form a cable giant, while Verizon gets FCC approval to acquire Frontier and expand fiber to millions. Meanwhile, Apple and Epic are back at it over Fortnite, OpenAI launches a coding assistant, Microsoft kills the Surface Laptop Studio, and Acer shows off sleek new gear at Computex. Also: Nvidia denies shifting GPU work to China, and Kickstarter gets serious about funding climate tech. Starring Sarah Lane, Tom Merritt, Robb Dunewood, Molly Wood, Len Peralta, Amos, Joe. To read the show notes on a separate page, click here! Support the show on Patreon by becoming a supporter!
This week we go deep into Doom: The Dark Ages, but we also chat about the best GPU under a thousand bucks, Cash Cleaner Simulator, V Rising, 4K disc players, and plenty more. This week's music: HEALTH and Chelsea Wolfe - Mean