POPULARITY
Categories
Microsoft slashed 9,000 jobs a few weeks ago, with the Xbox division taking a hit including some studio closures. What about that $70 BILLION Activision Blizzard acquisition just a couple of years ago? It sounds like fertile ground for one of Raph's largest victory laps to date, after he suggested Microsoft should have ditched Xbox years ago. But first, an update on xAI after last week's ep. Grok 4 has arrived, and its big feature is... AI anime waifus. Uncensored and backed by a million GPUs... what could this possibly mean for the birth rate? And are OpenAI's sycophantic personal assistants really that different from an AI girlfriend at the end of the day? See omnystudio.com/listener for privacy information.
Scaling AI is more than just stacking GPUs, it's about making every dollar and watt count. Chris Sosa, Director of Engineering at AMD, reveals how they're pushing the limits of AI hardware efficiency. Chris shares:
▫️ The biggest bottleneck in scaling AI workloads today
▫️ Why idle GPUs drain your budget
▫️ The struggle between training vs. inference at scale
▫️ How AMD's ROCm stack is changing AI accessibility
▫️ His vision for smarter AI orchestration and platforms
A must-watch for engineers, AI builders, and tech leaders hungry for insights on the future of AI infrastructure.
Follow more of Liftoff with Keith:
- Spotify: https://open.spotify.com/show/3cFpLXfYvcUsxvsT9MwyAD
- Apple Podcasts: https://podcasts.apple.com/us/podcast/liftoff-with-keith-newman/id1560219589
- Substack: https://keithnewman.substack.com/
- LinkedIn: https://www.linkedin.com/company/liftoffwithkeith/
- Newman Media Studios: https://newmanmediastudios.com/
For sponsorship inquiries, please contact: sponsorships@wherewithstudio.com
Join The Full Nerd gang as they talk about the latest PC hardware topics. In this episode the gang chats about the introduction of Nvidia Smooth Motion (driver level MFG) for RTX 40 series GPUs, why Intel's CEO says they aren't a top fab anymore, Zen 6 getting into the hands of partners, and more. And of course we answer your questions live! *This episode is sponsored by Fractal Design and the new Scape wireless gaming headphones. The Scape headphones feature Fractal's signature clean design, rich audio quality, and easy configuration through the browser. Use these links to upgrade your gaming audio today: Scape Dark: https://www.amazon.com/dp/B0D5HGK3C2 Scape Light: https://www.amazon.com/dp/B0D5HK6JRS Scape Dark: https://www.newegg.com/p/26-743-003 Scape Light: https://www.newegg.com/p/26-743-004 Links: - Semantic Search: https://next.content.town/p/the-monkey-s-paw-curls-windows-is-finally-using-my-pc-s-ai-processor - Intel isn't on top: https://www.pcworld.com/article/2845779/intel-ceo-says-intel-isnt-a-top-chip-company-any-more.html - Arrow Lake Refresh: https://videocardz.com/newz/intel-arrow-lake-refresh-with-higher-clocks-coming-this-half-of-the-year - Zen 6: https://videocardz.com/newz/amd-zen-6-to-primarily-use-tsmc-n2p-n3p-for-low-end-mobile-skus Join the PC related discussions and ask us questions on Discord: https://discord.gg/SGPRSy7 Follow the crew on X: @AdamPMurray @BradChacos @MorphingBall @WillSmith ============= Follow PCWorld! Website: http://www.pcworld.com X: https://www.x.com/pcworld =============
Take a Network Break! We start with listener follow-up on Arista market share in the enterprise, and then sound the alarm about a remote code execution vulnerability in Adobe Experience Manager. On the news front, Arista buys VeloCloud to charge into the SD-WAN market, CoreWeave acquires a cryptominer to get access to GPUs and electricity... Read more »
Chris Adams is joined by Adrian Cockcroft, former VP of Cloud Architecture Strategy at AWS, a pioneer of microservices at Netflix, and contributor to the Green Software Foundation's Real Time Cloud project. They explore the evolution of cloud sustainability—from monoliths to microservices to serverless—and what it really takes to track carbon emissions in real time. Adrian explains why GPUs offer rare transparency in energy data, how the Real Time Cloud dataset works, and what's holding cloud providers back from full carbon disclosure. Plus, he shares his latest obsession: building a generative AI-powered house automation system using agent swarms.
The Rise of Sovereign AI and Global AI Innovation in a World of US Protectionism // MLOps Podcast #331 with Frank Meehan, Founder and CEO of Frontier One AI.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
"The awakening of every single country is that they have to control their AI intelligence and not outsource their data" - Jensen Huang. Sovereign AI is rapidly becoming a fundamental national utility, much like defense, energy or telecoms. Nations worldwide recognize that AI sovereignty—having control over your AI infrastructure, data, and models—is essential for economic progress, security, and independence, especially when the US is pushing protectionism and trying to prevent global AI innovation. Of course this has the opposite effect: DeepSeek created by a hedge fund in China; India building the world's largest AI data centre (3 GW); and global software teams scaling, learning and building faster than ever before. However most countries lack the talent, financing and experience to implement Sovereign AI for their requirements, and it is our belief at Frontier One that one of the biggest markets for AI applications, cloud services and GPUs will be global governments. We see it already, with $10B of GPUs in 2024 bought directly by governments, and it's rapidly expanding. We will talk about what Sovereign AI is (both infrastructure and software details), why it is crucial for a nation, and how to get involved as part of the MLOps community.
// Bio
Co-Founder of Frontier One - building Sovereign AI Factories and Cloud software for global markets.
Frank is a 2X CEO | 2X CMO (with 2X exits + 1 IPO NYSE), Board Director (Spotify, Siri) and Investor (SparkLabs Group) with 20+ years of experience in creating and growing leading brands, products and companies.
Chair of Improvability, automating due diligence and reporting for corporates, foundations and Governments with AI.
Co-founder and partner at SparkLabs Group - investors in OpenAI, Anthropic, 88 Rising, Discord, Animoca, Andela, Vectara, Kneron, Messari, Lifesum + 400 companies in our portfolio. Investment Committee and LP at SparkLabs Cultiv8 with 56 investments in consumer food and regenerative agriculture companies.
Co-founder and CMO - later CEO - of Equilibrium AI (Singapore), building it into one of the leading ESG and Carbon data management platforms globally. Equilibrium was acquired by FiscalNote in 2021, where he joined the senior leadership team, running the ESG business globally and helping the company IPO in 2022 on the NYSE at a $1.1B valuation.
Board director at Spotify (2009-2012); Siri (2009-2010, exited to Apple); Lifesum (leading AI health app with 50 million users); seed investor in 88 Rising (Asia's leading independent music label); CEO/CMO and co-founder at INQ Mobile (mobile internet pioneer); and Global Director for devices and products at 3 Mobile.
Started as a software developer with Ericsson Mobile in Sweden, after graduating from KTH in Stockholm and the University of Sydney with a Bachelor of Mechanical Engineering and a Master of Science in Fluid Mechanics.
// Related Links
https://www.frontierone.ai/ and https://www.sparklabsgroup.com
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Frank on LinkedIn: /frankmeehan/
In this episode, we talk to Core Scientific COO Matt Brown about the company's pivot away from housing cryptomining rigs to hosting GPUs for the likes of AI cloud firm CoreWeave. We talk about the wider crypto market and why the move to AI hosting is becoming so common, the rise of the neoclouds and why they're willing to work with companies that might not be used to working to Tier III-quality uptime requirements, and Matt's own experience coming to the crypto space from the world of traditional colo.
US equity markets advanced, with the technology-centric Nasdaq climbing to a fresh record high as tariff and trade headlines remained in focus and as investors parsed meeting minutes from the Federal Reserve's June monetary policy meeting - Dow rose +218-points or +0.49% to 44,458.30, settling ~1% below its record closing high. Nvidia Corp rose +1.80% to US$162.88, pulling back from earlier highs that saw the chipmaker become the first ever company to achieve a market capitalisation of US$4 trillion. Nvidia needed to finish at or above US$163.93 to reach the $4 trillion milestone at the close. Nonetheless, the chipmaker's closing market capitalisation of US$3.974 trillion was a record closing peak for any company. Amazon.com Inc rose +1.45% after announcing that its cloud division has developed hardware to cool down next-generation Nvidia graphics processing units (GPUs) that are used for artificial intelligence workloads. UnitedHealth Group Inc fell -1.56% to be the worst performing Dow component overnight following a report from the Wall Street Journal that ex-employees and medical professionals have been interviewed by Department of Justice investigators over a probe into the insurer's Medicare billing practices.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
INLA is a fast, deterministic method for Bayesian inference.
INLA is particularly useful for large datasets and complex models.
The R-INLA package is widely used for implementing the INLA methodology.
INLA has been applied in various fields, including epidemiology and air quality control.
Computational challenges in INLA are minimal compared to MCMC methods.
The Smart Gradient method enhances the efficiency of INLA.
INLA can handle various likelihoods, not just Gaussian.
SPDEs allow for more efficient computations in spatial modeling.
The new INLA methodology scales better for large datasets, especially in medical imaging.
Priors in Bayesian models can significantly impact the results and should be chosen carefully.
Penalized complexity priors (PC priors) help prevent overfitting in models.
Understanding the underlying mathematics of priors is crucial for effective modeling.
The integration of GPUs in computational methods is a key future direction for INLA.
The development of new sparse solvers is essential for handling larger models efficiently.
Chapters:
06:06 Understanding INLA: A Comparison with MCMC
08:46 Applications of INLA in Real-World Scenarios
11:58 Latent Gaussian Models and Their Importance
15:12 Impactful Applications of INLA in Health and Environment
18:09 Computational Challenges and Solutions in INLA
21:06 Stochastic Partial Differential Equations in Spatial Modeling
23:55 Future Directions and Innovations in INLA
39:51 Exploring Stochastic Differential Equations
43:02 Advancements in INLA Methodology
50:40 Getting Started with INLA
56:25 Understanding Priors in Bayesian Models
Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad
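The idea behind INLA's speed, replacing MCMC sampling with a deterministic Gaussian (Laplace) approximation centered at the posterior mode, can be sketched in a few lines. This is a toy illustration on a Beta-Bernoulli model (all numbers invented for the example), not the R-INLA implementation, which handles full latent Gaussian models:

```python
import math

# Toy model: k = 7 successes in n = 10 Bernoulli trials, uniform Beta(1,1) prior.
# The exact posterior is Beta(8, 4); we compare the Laplace approximation to it.
k, n = 7, 10

# First and second derivatives of the negative (unnormalized) log posterior.
def grad(theta):
    return -(k / theta - (n - k) / (1 - theta))

def hess(theta):
    return k / theta**2 + (n - k) / (1 - theta)**2

# Step 1 (deterministic, no sampling): find the posterior mode by Newton's method.
theta = 0.5
for _ in range(50):
    theta -= grad(theta) / hess(theta)

# Step 2: the curvature at the mode gives the variance of the Gaussian approximation.
sd = 1.0 / math.sqrt(hess(theta))

print(theta)  # mode = 0.7 (= k/n)
print(sd)     # ~0.145, close to the exact posterior sd of ~0.131
```

Two deterministic steps replace thousands of MCMC draws, which is why INLA scales to the large spatial and epidemiological models discussed in the episode.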
A look inside CoreWeave's neocloud business model, and how the company went from an ETH miner to a $75 billion business. FILL OUT THE SURVEY BY CLICKING HERE
Welcome back to The Mining Pod! Today, Colin and Will break down CoreWeave's meteoric rise to a $75+ billion valuation and why Bitcoin miners like Core Scientific, Galaxy Digital, and Applied Digital are all racing to partner with AI cloud providers. We explore CoreWeave's neocloud business model, GPU economics vs. bitcoin mining profitability, and what this means for the future of the mining industry.
Subscribe to our newsletter!
Notes:
• CoreWeave valued at $75B+ (12x revenue multiple)
• 72% of Q1 revenue came from Microsoft/OpenAI
• CoreWeave manages 250,000+ GPUs globally
• $15B+ in contracted revenue secured
Timestamps:
00:00 Start
04:05 CoreWeave overview
07:40 Neocloud
10:28 Other neocloud providers
12:49 Oracle, OpenAI & Stargate
16:20 Crusoe
17:58 Hyperscaler street cred
21:08 Energy pipeline
26:53 Revenue
32:51 Capex vs revenue
38:03 GPU lifespan
41:58 Bull vs Bear
49:00 Partner concentration
Article links:
https://mattturck.com/coreweave/
https://research.artemis.xyz/p/coreweave-fundamental-deep-dive
https://www.mostlymetrics.com/p/coreweave-ipo-s1-breakdown
Hive Digital Technologies Executive Chairman Frank Holmes joined Steve Darling from Proactive to announce that the company achieved an 18% increase in Bitcoin production in June 2025 compared to the previous month, signaling strong momentum as it advances its ambitious global scale-up strategy. The company mined 164 Bitcoin in June, fueled largely by the early success of its newly energized 100-megawatt Phase 1 operation in Paraguay. This facility is central to Hive's goal of reaching 12 Bitcoin per day in production by the end of 2025, driving improved revenue generation and cash flow margins. Holmes shared that the Phase 2 expansion is now underway. As of today, 0.4 exahash per second (EH/s) of new-generation Bitmain S21+ Hydro machines are already online, representing over 5% of the total Phase 2 capacity. Once fully energized, Phase 2 is expected to deliver 6.5 EH/s, pushing Hive's global mining fleet past 25 EH/s by U.S. Thanksgiving, with an industry-leading efficiency of ~18.5 J/TH. Hive's overall hashrate stood at 11.4 EH/s in June, nearly doubling since the end of March. Holmes described this scale-up as the “most ambitious growth trajectory in Hive's history,” positioning the company among the global leaders in Bitcoin mining. Meanwhile, Hive's high-performance computing division, BUZZ HPC, is also scaling aggressively to support sovereign Canadian AI infrastructure. In June, BUZZ HPC signed a purchase agreement for a 7.2 MW Tier 3 data center campus in Toronto, capable of hosting up to 5,000 next-gen GPUs. This facility adds a critical building block to Hive's long-term strategy of balancing Bitcoin mining with AI-driven high-performance computing solutions across its Canadian and European operations. With dual momentum in crypto mining and AI infrastructure, Hive is executing a vertically integrated model focused on high-efficiency, high-margin digital infrastructure growth. 
#proactiveinvestors #usglobalinvestorsinc #nasdaq #grow #etf #BitcoinMining #HighPerformanceComputing #AIDataCenters #GreenEnergy #CryptoMining #TorontoTech #ParaguayExpansion #FrankHolmes #BlockchainInnovation
On this week's show we take a first look at the proposed HDMI 2.2 specification. We also read your emails and take a look at the week's news.
News:
YouTube Once Again Dominates TV Usage In May
SunBrite Debuts Full Sun 4K Smart TV Series
XGIMI Releases MoGo 4 Series Projectors
Amazon to Shutter Freevee in September 2025, Merging Content into Prime Video
HDMI 2.2 Specification
The HDMI 2.2 specification, announced by the HDMI Forum at CES 2025, introduces several advanced features to support higher resolutions, refresh rates, and enhanced audio-visual performance. Below is a summary of the key features included in the HDMI 2.2 specification based on the information we have today:
Increased Bandwidth (Up to 96 Gbps): HDMI 2.2 doubles the bandwidth of HDMI 2.1 (from 48 Gbps to 96 Gbps), enabling support for higher resolution and refresh rate combinations, as well as data-intensive applications. This increased bandwidth supports uncompressed and compressed video formats, making it suitable for advanced applications like AR/VR, spatial reality, light field displays, medical imaging, and machine vision.
Support for Higher Resolutions and Refresh Rates:
Uncompressed formats:
- 4K at 240 Hz and 480 Hz (4:4:4 chroma sampling, 10-bit and 12-bit color)
- 8K at 60 Hz and 240 Hz (4:4:4 chroma sampling, 8-bit and 10-bit color)
- 10K at 120 Hz
- 12K at 120 Hz
- 16K at 60 Hz
Compressed formats (using Display Stream Compression or similar): supports higher refresh rates like 4K at 480 Hz, 8K at 240 Hz, and 10K at 120 Hz, which require compression to achieve these rates within the bandwidth constraints.
Next-Generation Fixed Rate Link (FRL) Technology: HDMI 2.2 introduces an advanced version of Fixed Rate Link signaling technology, optimized for better support of uncompressed content at high resolutions and refresh rates, ensuring pristine image quality and low latency.
Ultra96 HDMI Cable: A new cable type, the Ultra96 HDMI Cable, is introduced to support the full 96 Gbps bandwidth and all HDMI 2.2 features. These cables are backward compatible with older HDMI devices but are required to fully utilize HDMI 2.2's capabilities. The Ultra96 cables are part of the HDMI Cable Certification Program, requiring testing and certification with a visible Ultra96 certification label to ensure compliance. They also feature low electromagnetic interference (EMI) for stable and reliable data transmission.
Latency Indication Protocol (LIP): A new feature designed to improve audio and video synchronization, particularly in multi-hop setups involving devices like AV receivers or soundbars. LIP enhances synchronization over existing methods, reducing issues like lip-sync lag, especially for fast-paced content or gaming.
Support for Advanced Color and Chroma Formats: Supports high-quality color spaces like BT.2020 with 10-bit, 12-bit, and 16-bit color depth. Enables uncompressed full chroma formats (e.g., 4:4:4) at high resolutions, ensuring richer colors and pristine image quality.
Additional Notes
Availability: The HDMI 2.2 specification was announced at CES 2025, with Ultra96 cables expected to be available in Q3/Q4 2025. HDMI 2.2-compliant devices (e.g., TVs, monitors, GPUs) are expected to appear in late 2025 or 2026.
Optional Features: Like previous HDMI versions, features such as Variable Refresh Rate (VRR), Auto Low Latency Mode (ALLM), Quick Frame Transport (QFT), and Enhanced Audio Return Channel (eARC) remain optional and depend on device manufacturer implementation.
Consumer Guidance: The Ultra96 feature name helps consumers identify cables and devices capable of supporting 64 Gbps, 80 Gbps, or 96 Gbps bandwidth, ensuring optimal performance.
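To make the bandwidth tiers concrete, here is a rough back-of-the-envelope estimate of the raw data rate a given format needs. The ~18% transport-overhead factor is an assumption for illustration only, not a figure from the HDMI 2.2 specification:

```python
# Rough uncompressed video bandwidth estimate for an HDMI link.
# The 1.18 overhead factor is a hypothetical allowance for line coding and
# blanking intervals, NOT a number taken from the HDMI 2.2 spec.
def required_gbps(width, height, refresh_hz, bits_per_component, overhead=1.18):
    bits_per_pixel = 3 * bits_per_component  # 4:4:4 chroma: three full-rate components
    return width * height * refresh_hz * bits_per_pixel * overhead / 1e9

# 4K (3840x2160) at 240 Hz, 10-bit 4:4:4 fits within 96 Gbps uncompressed...
print(f"4K240 10-bit: {required_gbps(3840, 2160, 240, 10):.1f} Gbps")
# ...while 4K at 480 Hz, 10-bit exceeds 96 Gbps and needs Display Stream Compression.
print(f"4K480 10-bit: {required_gbps(3840, 2160, 480, 10):.1f} Gbps")
```

The results line up with the spec summary above: 4K240 appears under uncompressed formats, while 4K480 is listed as requiring compression.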
This week on the podcast we go over our review of the Enermax ETS-TD60 Digital CPU Cooler. We also talk about leaked information on the RTX 50 Super refresh, gamers not buying 8GB GPUs, AMD Ryzen processors dominating, and much more!
Ploopy releases a high-resolution Knob, advanced profile management for GPUs on Linux, a new Wi-Fi and Bluetooth module from RasPi, and a glimpse at the future of gaming on ARM.
- GPU-ASIC War
- Hyperscalers' CPUs, "GPUs", DPUs, QPUs
- Google TPU-7 and OpenAI?
- Meta's AI chip tape out
- Microsoft's AI chip delays
- Why do engineering projects get delayed?
- Chip co-designers break into chip supply chain
The post HPC News Bytes – 20250630 appeared first on OrionX.net.
Guest: Dinakar Munagala, Co-Founder and CEO of Blaize
Revolutionizing AI at the Edge
Discussion:
Blaize origin story: co-founded by Dinakar Munagala, Blaize emerged from the recognition that traditional GPUs are not ideal for AI, and that low-power AI is needed outside data centers at the "edge."
Blaize specializes in edge processing, enabling AI on devices like cameras and medical tools where real-time decisions are critical.
Hardware & software: a new chip architecture tailored for low-power, on-device AI, paired with user-friendly software to overcome deployment barriers.
Real-world use cases include COVID-19 detection from X-rays, early cancer detection from retina scans, and fall monitoring in elder care.
Human-AI synergy: augmenting professionals, not replacing them, while making AI accessible, scalable, and secure.
To stream our Station live 24/7 visit www.HealthcareNOWRadio.com or ask your Smart Device to "….Play Healthcare NOW Radio". Find all of our network podcasts on your favorite podcast platforms and be sure to subscribe and like us. Learn more at www.healthcarenowradio.com/listen
In this episode, recorded live at Microsoft Build, we sit down with the always insightful Anthony Chu, Principal PM at Microsoft, to explore the world of Azure Container Apps. Anthony brings a unique developer-first perspective on simplifying complex cloud infrastructure, weaving together topics like Dapr, serverless GPUs, dynamic sessions, and even a surprising amount of pasta talk. Whether you're new to containers or neck-deep in Kubernetes YAML files, there's something for everyone here, especially if you're into Marvel vs. DC debates or want to know which Spider-Man reigns supreme. Guest:
AI just taught me this cool thing... keep on listening to find out what it is! Today we talk about the massive and fast-moving implications of AI. We share our personal experiences with how AI challenges traditional business structures and workflows, requiring users to reimagine how work is done. We also explore how AI may replace many functions within organizations, from marketing to operations, while still lacking in areas like math accuracy and sales conversations. We also talk about Mary Meeker's AI report, noting unprecedented user adoption, the rapid rise of global competitors like China's DeepSeek, and the prediction that LLMs will become personal, customizable, and nearly costless. We need to rethink AI's role in business, its deflationary impact on cost, and how fast-changing technology may render old tools and concepts obsolete.
We discuss...
How humor and sarcasm could be the final frontier in distinguishing AI from humans.
The greatest investment in AI is learning how to use it personally and professionally.
How limited human imagination, not technology, is the biggest barrier to innovation with AI.
AI's limitations in math were noted, with a warning not to fully trust it as a CFO despite its operational usefulness.
AI isn't quite ready for high-touch sales calls but is rapidly closing the gap in other business areas.
Global AI adoption is surging, with China's DeepSeek gaining ground quickly through much lower-cost models.
Token costs have dropped nearly 100% in two years, and energy efficiency in GPUs has improved drastically.
With the penny going out of circulation, it might be time to start saving them as collectibles.
AI development curves are moving much faster than traditional SaaS models, making this a truly disruptive moment in tech.
Meta's LLaMA has been downloaded 1.2 billion times in 10 weeks, with over 100,000 derivative models created.
The performance gap between open-source and closed AI models is shrinking rapidly, with DeepSeek nearly matching OpenAI on benchmarks.
The AI ecosystem is becoming decentralized, much like the shift from centralized platforms to blockchain-based alternatives.
Decentralization is praised for enabling free speech, innovation, and diversity of thought, unlike centralized control.
Most employees are already using AI tools like ChatGPT personally, even if companies haven't officially adopted them.
AI is increasing personal productivity, but there's concern it may ultimately compress work rather than improve quality of life.
Over 60,000 new AI-related job titles have emerged in just two years, indicating a massive career reshuffle.
Without earned knowledge, people can misuse powerful tools like AI, just as they did with nuclear weapons.
The future with AI could resemble either Skynet or Star Trek, and no one truly knows which way it will go.
There is risk of psychological strain and social dysfunction if people are displaced without purpose.
AI tools can now bypass paywalls and summarize articles, challenging traditional media revenue models.
The current wealth gap and collapse of the middle class is unprecedented, even before full-scale AI disruption.
Decentralized AI (e.g., having your own local models) is seen as essential to maintain independence and avoid manipulation.
A growing imbalance of more sellers than buyers suggests further downward pressure on real estate prices.
Political pressure is influencing Fed policy, with previous rate cuts seen as potentially timed to impact elections.
Global conflict, such as recent Middle East tensions, is having surprisingly little impact on the stock market.
Investors should focus on risk management given the unpredictability and detachment from fundamentals.
Today's Panelists: Kirk Chisholm | Innovative Wealth Douglas Heagren | ProCollege Planners Follow on Facebook: https://www.facebook.com/moneytreepodcast Follow LinkedIn: https://www.linkedin.com/showcase/money-tree-investing-podcast Follow on Twitter/X: https://x.com/MTIPodcast For more information, visit the show notes at https://moneytreepodcast.com/ai-just-taught-me-this-cool-thing-723
The future of AI isn't coming; it's already here. With NVIDIA's recent announcement of forthcoming 600kW+ racks, alongside the skyrocketing power costs of inference-based AI workloads, now's the time to assess whether your data center is equipped to meet these demands. Fortunately, two-phase direct-to-chip liquid cooling is prepared to empower today's AI boom and accommodate the next few generations of high-powered CPUs and GPUs. Join Accelsius CEO Josh Claman and CTO Dr. Richard Bonner as they walk through the ways in which their NeuCool™ 2P D2C technology can safely and sustainably cool your data center. During the webinar, Accelsius leadership will illustrate how NeuCool can reduce energy use by up to 50% vs. traditional air cooling, drastically slash operational overhead vs. single-phase direct-to-chip, and protect your critical infrastructure from leak-related risks. While other popular liquid cooling methods require constant oversight or designer fluids to maintain peak performance, two-phase direct-to-chip technologies require less maintenance and lower flow rates to achieve better results. Beyond a thorough overview of NeuCool, viewers will take away these critical insights:
The deployment of Accelsius' Co-Innovation Labs, global hubs enabling data center leaders to witness NeuCool's thermal performance capabilities in real-world settings
Our recent testing at 4500W of heat capture, the industry record for direct-to-chip liquid cooling
How Accelsius has prioritized resilience and stability in the midst of global supply chain uncertainty
Our upcoming launch of a multi-rack solution able to cool 250kW across up to four racks
Be sure to join us to discover how two-phase direct-to-chip cooling is enabling the next era of AI.
In this episode, Ben Bajarin and Jay Goldberg discuss the recent Marvell custom semiconductor event, the challenges faced in the custom chip business, and the evolving role of hyperscalers in chip design. They explore the future of ASICs versus GPUs, the growth trajectory of the semiconductor industry, and the impact of software on compute needs. The conversation also delves into chip design innovations, particularly the rise of chiplets, and the changing economics of semiconductor manufacturing.
For more episodes: https://www.utilizingtech.com/
Storage software running on modern hardware can deliver incredible performance and capability to support AI applications. This episode of Utilizing Tech wraps up our season with a discussion of WEKA's data platform for AI with Alan McSeveney, Scott Shadley of Solidigm, and host Stephen Foskett. Modern hardware is capable of incredible performance, but bottlenecks remain. The limiting factor for AI processors is memory capacity: GPUs are hungry for data and must be refreshed from storage quickly enough to keep them running at scale. Storage can also be used to share data between GPUs across the data center and to cache working data to accelerate calculation. The secret to scalability, from storage to applications to AI, is distribution and parallel processing. Modern software runs at incredible scale, and all elements of the stack must match. Technologies like Kubernetes allow applications to use huge clusters of workers all contributing to scale and performance. WEKA runs this way, matching the GPU clusters and web applications we rely on today.
Guest: Alan McSeveney, Field CTO of Media and Entertainment, WEKA
Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Jeniece Wnorowski, Head of Influencer Marketing at Solidigm; Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm
Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events. For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Smart Agency Masterclass with Jason Swenk: Podcast for Digital Marketing Agencies
Would you like access to our advanced agency training for FREE? https://www.agencymastery360.com/training
How do you turn a $99 course, launched before it was even fully built, into a 7-figure coaching business? Today's guest did just that. And he's here to share why scrappier beats slick every time. If you've ever second-guessed launching messy, this episode will feel like validation. Today we kick off a two-parter with Brent Weaver, the founder of UGURUS, who went from building websites in high school to launching one of the most successful coaching programs for digital agency owners. Brent talks about his start with UGURUS, the valuable learning that can come from starting before everything's in place, and why what came after selling his business wasn't exactly what he had expected.
In this episode, we'll discuss:
Launching and selling without a net.
The real reason Brent Weaver sold UGURUS.
The unexpected, gut-punch part of selling.
Subscribe: Apple | Spotify | iHeart Radio
Sponsors and Resources
Wix: Today's episode of the Smart Agency Masterclass is sponsored by Wix Studio, the all-in-one platform designed to help agencies scale without the headaches. With intuitive tools, robust native business solutions, and low maintenance, Wix Studio lets your team focus on what matters most: delivering exceptional value to your clients. Ready to take your agency to the next level? Visit wix.com/studio and discover how Wix Studio can transform your workflow, boost profits, and strengthen client relationships.
Building Something Before It's Built
In 2012, Brent's agency was building on a tool called Business Catalyst, which led to a side project called BC Gurus, a blog for Business Catalyst users that eventually turned into a full-fledged business.
That little blog became a membership site where his team posted business content on how to grow a Business Catalyst agency and, after selling his agency, was the seed for what eventually became UGURUS, a platform offering training and coaching to help agency owners close more deals and scale their businesses. Just as they were preparing to move forward with the site without the Business Catalyst element, as this tool had been discontinued, Brent found the name UGURUS had just gone up for auction. It all seemed serendipitous as they easily won this auction and the new stage of the business began.

Lessons in Launching (and Selling) Without a Net
Throughout their journey, Brent and his team learned something that every agency owner needs to hear: you don't need everything figured out before you start. And in fact, if you try to, you'll likely never launch at all. The early success of their $200 self-paced course helped them build an audience. But it wasn't until they started offering deeper, high-ticket coaching that things clicked into place. Selling a few $2,000 seats was way more scalable than chasing thousands of low-ticket customers. They did all of this without the luxury of a huge marketing budget or slick automation. Just hustle, relationships, partnerships, and a whole lot of belief in what they were doing. This is something Brent and Jason have both experienced. They agree it's better to go out, execute with what you have, and get feedback, rather than waiting for the perfect moment.

Brent Weaver on Building, Selling, and What Came Next
Brent and his team didn't start with a fully polished product. In fact, when they first launched their flagship 10K Bootcamp, they spent all their time selling it before creating it. In their view, if they couldn't sell it, they wouldn't build it. But they sold it. About 30 seats at $2,000 a pop. Of course, it did help that they weren't starting from scratch.
They had a list of about 10,000 emails from their time running BC Gurus, which helped immensely. And then they had one week to create the first session. What followed was a whirlwind of late nights and Adobe Connect calls (for those who remember what that was) as Brent stayed one step ahead of each week's live session. It was clunky. It was imperfect. But it worked. Why? Because Brent was committed. He responded immediately to the slightest client dissatisfaction. He personally handled delivery. And he overdelivered wherever possible. That scrappy MVP became the foundation for a business that helped thousands of agencies get out of the feast-and-famine cycle. This kind of growth doesn't happen when you wait for the stars to align. It happens when you ship early, listen hard, and iterate fast.

The $22,850 Lead Magnet That Took 6 Minutes to Create
Let's talk about lead magnets that actually convert. The first product Brent ever sold was the gloriously titled “$22,850 Website Proposal.” That wasn't a gimmick. It was a real client proposal that closed a big deal—with cross-sells, recurring revenue, and multi-location projects all baked in. Instead of building something fancy, he stripped out client details, dropped it into a Google Doc, and gave it away. Six minutes of work. Hundreds of thousands of downloads. The lesson? Your most valuable assets are often sitting in a dusty folder, not in your imagination. Proof beats polish every time.

The Real Reason Brent Sold UGURUS
So why sell a successful business? For Brent, it wasn't burnout—it was the pull toward a bigger vision. After buying out his co-founder and riding the COVID rollercoaster, things just weren't lighting him up anymore. Then came Cloudways—and more importantly, a series of conversations with their CMO, Santi.
In a way, he was no longer getting what he wanted from the business, and the more he spoke with Santi and saw what they were doing with their platform, the more he dreamed about turning it into an agency growth community. What started as co-branded webinars and strategy calls evolved into shared vision sessions. Eventually, Cloudways pitched an acquisition. The appeal? A chance to bring agency coaching to a massive platform with 13,000+ agency users. Brent saw an opportunity to merge purpose with scale and went all in.

When the Buyer Gets Bought
Here's the plot twist: just ten months after the acquisition, Cloudways got acquired by DigitalOcean, and suddenly UGURUS was a small fish in a billion-dollar pond. DigitalOcean was focused on AI, GPUs, and hardcore infrastructure—not coaching communities. So eventually, Brent's team and vision were sidelined. He stayed on. He fought for his team. But like he says—when you sell, it's no longer yours. And if the buyer shifts priorities, you've got to live with it. That's the tradeoff.

Don't Sell Unless You Know What's Next
The hard truth here is: don't sell unless you know what you're waking up to the next day. Brent thought he had his next chapter lined up. He had a six-month transition plan. A roadmap. But then came the cultural disconnect. Engineering talk at happy hours. Roadmaps that had nothing to do with agency growth. The adventure he signed up for didn't look like what it became. That's the gut-punch part of selling. You can have a clean exit and still feel like you lost something. That's why clarity before the exit is non-negotiable.

Next Time on Part Two: What really happens after the exit? Brent pulls back the curtain on post-sale culture shock, why some big opportunities fizzled, and how his next move with E2M caught even him by surprise. You won't want to miss this.

Want to Build an Exclusive, Scalable Agency That Clients Line Up For?
Our Agency Blueprint helps you identify growth bottlenecks, build community-driven strategies, and position your agency as a category of one.
A handheld Xbox that's really an ROG Ally with a new Ryzen processor?? An LCD that actually NEEDS bright sunlight like a Game Boy Color?? (Oh, and Josh's legendary food segment.) There's some EVGA sad news mixed in there with a cool new GOG feature and too many security stories.

Timestamps:
00:00 Intro
00:39 Patreon
01:20 Food with Josh
03:30 ASUS ROG Xbox Ally handhelds have new AMD Ryzen Z2 processors
06:51 Nintendo sold a record number of Switch 2 consoles
08:37 NVIDIA N1X competitive with high-end mobile CPUs?
12:38 Samsung now selling 3GB GDDR7 modules
16:27 Apple uses car model years now, and Tahoe is their last OS supporting Intel
22:01 EVGA motherboards have issues with RTX 50 GPUs?
27:48 Josh talks about a new PNY flash drive
30:01 (in)Security Corner
54:07 Gaming Quick Hits
1:00:46 Eazeye Monitor 2.0 - an RLCD monitor review
1:11:53 Picks of the Week
1:33:21 Outro

★ Support this podcast on Patreon ★
In this episode of the Data Center Frontier Show, we sit down with Kevin Cochrane, Chief Marketing Officer of Vultr, to explore how the company is positioning itself at the forefront of AI-native cloud infrastructure, and why they're all-in on AMD's GPUs, open-source software, and a globally distributed strategy for the future of inference. Cochrane begins by outlining the evolution of the GPU market, moving from a scarcity-driven, centralized training era to a new chapter focused on global inference workloads. With enterprises now seeking to embed AI across every application and workflow, Vultr is preparing for what Cochrane calls a “10-year rebuild cycle” of enterprise infrastructure—one that will layer GPUs alongside CPUs across every corner of the cloud. Vultr's recent partnership with AMD plays a critical role in that strategy. The company is deploying both the MI300X and MI325X GPUs across its 32 data center regions, offering customers optimized options for inference workloads. Cochrane explains the advantages of AMD's chips, such as higher VRAM and power efficiency, which allow large models to run with fewer GPUs—boosting both performance and cost-effectiveness. These deployments are backed by Vultr's close integration with Supermicro, which delivers the rack-scale servers needed to bring new GPU capacity online quickly and reliably. Another key focus of the episode is ROCm (Radeon Open Compute), AMD's open-source software ecosystem for AI and HPC workloads. Cochrane emphasizes that Vultr is not just deploying AMD hardware; it's fully aligned with the open-source movement underpinning it. He highlights Vultr's ongoing global ROCm hackathons and points to zero-day ROCm support on platforms like Hugging Face as proof of how open standards can catalyze rapid innovation and developer adoption. “Open source and open standards always win in the long run,” Cochrane says. 
“The future of AI infrastructure depends on a global, community-driven ecosystem, just like the early days of cloud.” The conversation wraps with a look at Vultr's growth strategy following its $3.5 billion valuation and recent funding round. Cochrane envisions a world where inference workloads become ubiquitous and deeply embedded into everyday life—from transportation to customer service to enterprise operations. That, he says, will require a global fabric of low-latency, GPU-powered infrastructure. “The world is going to become one giant inference engine,” Cochrane concludes. “And we're building the foundation for that today.” Tune in to hear how Vultr's bold moves in open-source AI infrastructure and its partnership with AMD may shape the next decade of cloud computing, one GPU cluster at a time.
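Cochrane's point about higher VRAM letting large models run on fewer GPUs is easy to sanity-check with back-of-envelope arithmetic. A rough sketch in Python (the model size, precision, and card capacities below are illustrative numbers of my own, not figures from the episode; it counts weights only and ignores activations and KV cache):

```python
import math

def gpus_needed(params_billions: float, bytes_per_param: int, vram_gb: float) -> int:
    """Minimum GPUs needed just to hold a model's weights in memory."""
    weights_gb = params_billions * bytes_per_param  # 1e9 params x bytes, in GB
    return math.ceil(weights_gb / vram_gb)

# A 70B-parameter model in FP16 is ~140 GB of weights alone:
# a 192 GB card holds it on one GPU, an 80 GB card needs at least two.
print(gpus_needed(70, 2, 192))  # 1
print(gpus_needed(70, 2, 80))   # 2
```

Fewer cards per model is where the cost-effectiveness claim comes from: the same inference fleet can serve more model replicas.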
Talk Python To Me - Python conversations for passionate developers
If you're looking to leverage the insane power of modern GPUs for data science and ML, you might think you'll need to use some low-level programming language such as C++. But the folks over at NVIDIA have been hard at work building Python SDKs which provide a nearly native level of performance when doing Pythonic GPU programming. Bryce Adelstein Lelbach is here to tell us about programming your GPU in pure Python. Episode sponsors Posit Agntcy Talk Python Courses Links from the show Bryce Adelstein Lelbach on Twitter: @blelbach Episode Deep Dive write up: talkpython.fm/blog NVIDIA CUDA Python API: github.com Numba (JIT Compiler for Python): numba.pydata.org Applied Data Science Podcast: adspthepodcast.com NVIDIA Accelerated Computing Hub: github.com NVIDIA CUDA Python Math API Documentation: docs.nvidia.com CUDA Cooperative Groups (CCCL): nvidia.github.io Numba CUDA User Guide: nvidia.github.io CUDA Python Core API: nvidia.github.io NVIDIA's First Desktop AI PC ($3,000): arstechnica.com Google Colab: colab.research.google.com Compiler Explorer (“Godbolt”): godbolt.org CuPy: github.com RAPIDS User Guide: docs.rapids.ai Watch this episode on YouTube: youtube.com Episode #509 deep-dive: talkpython.fm/509 Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
Craig Dunham is the CEO of Voltron Data, a company specializing in GPU-accelerated data infrastructure for large-scale analytics, AI, and machine learning workloads. Before joining Voltron Data, he served as CEO of Lumar, a SaaS technical SEO platform, and held executive roles at Guild Education and Seismic, where he led the integration of Seismic's acquisition of The Savo Group and drove go-to-market strategies in the financial services sector. Craig began his career in investment banking with Citi and Lehman Brothers before transitioning into technology leadership roles. He holds an MBA from Northwestern University and a BS from Hampton University. In this episode… In a world where efficiency and speed are paramount, how can companies quickly process massive amounts of data without breaking the bank on infrastructure and energy costs? With the rise of AI and increasing data volumes from everyday activities, organizations face a daunting challenge: achieving fast and cost-effective data processing. Is there a solution that can transform how businesses handle data and unlock new possibilities? Craig Dunham, a B2B SaaS leader with expertise in go-to-market strategy and enterprise data systems, tackles these challenges head-on by leveraging GPU-accelerated computing. Unlike traditional CPU-based systems, Voltron Data's technology uses GPUs to greatly enhance data processing speed and efficiency. Craig shares how their solution helps enterprises reduce processing times from hours to minutes, enabling organizations to run complex analytics faster and more cost-effectively. He emphasizes that Voltron Data's approach doesn't require a complete overhaul of existing systems, making it a more accessible option for businesses seeking to enhance their computing capabilities. In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz interviews Craig Dunham, CEO at Voltron Data, about building high-performance data systems.
Craig delves into the challenges and solutions in today's data-driven business landscape, how Voltron Data's innovative solutions are revolutionizing data analytics, and the advantages of using GPU over CPU for data processing. He also shares valuable lessons on leading high-performing teams and adapting to market demands.
Fastfetch and LibreOffice mint new releases, KDE teases Karton for VM management, and KDE is looking to capture Windows 10 exiles. Bcachefs broke filesystems and then fixed them, AMD releases a couple of new GPUs, and there's weird drama in X11 and kernel land. For tips, we have PipeWire node management, notes from the Kubuntu beta, and a quick primer on the difference between git fetch and git pull. You can find the show notes at https://bit.ly/4jEM36i Have fun! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
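The fetch-versus-pull tip is easy to demonstrate end to end. A minimal sketch in Python (the repository layout, names, and config values are mine, not from the show): it builds a throwaway bare upstream repo, pushes a commit from a second clone, then shows that `git fetch` only advances origin/main while `git pull` also advances the local branch.

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command, raising on failure, and return its stdout."""
    res = subprocess.run(["git", *args], cwd=cwd, check=True,
                         capture_output=True, text=True)
    return res.stdout.strip()

root = tempfile.mkdtemp()
os.environ["HOME"] = root  # isolate git config so the demo is self-contained
git("config", "--global", "init.defaultBranch", "main", cwd=root)
git("config", "--global", "user.name", "demo", cwd=root)
git("config", "--global", "user.email", "demo@example.com", cwd=root)
git("config", "--global", "pull.ff", "only", cwd=root)

git("init", "-q", "--bare", "upstream.git", cwd=root)   # shared remote
git("clone", "-q", "upstream.git", "work", cwd=root)    # our checkout
work = os.path.join(root, "work")
git("commit", "-q", "--allow-empty", "-m", "first", cwd=work)
git("push", "-q", "-u", "origin", "main", cwd=work)

# A second clone pushes a new commit upstream, as a teammate would.
git("clone", "-q", "upstream.git", "other", cwd=root)
other = os.path.join(root, "other")
git("commit", "-q", "--allow-empty", "-m", "second", cwd=other)
git("push", "-q", "origin", "main", cwd=other)

git("fetch", "-q", cwd=work)  # updates origin/main only; local main untouched
behind_after_fetch = int(git("rev-list", "--count", "HEAD..origin/main", cwd=work))
git("pull", "-q", cwd=work)   # fetch + merge (a fast-forward here)
behind_after_pull = int(git("rev-list", "--count", "HEAD..origin/main", cwd=work))
print(behind_after_fetch, behind_after_pull)
```

The short version: fetch is always safe because it never touches your working tree; pull is fetch plus a merge (or rebase) into whatever branch you have checked out.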
Episode 496 of Reversim (רברס עם פלטפורמה) - Bumpers #86: Ran, Dotan, and Alon in the virtual studio (via Riverside.fm - thanks!) with a series of short items that caught our attention recently (this time a bit more than usual): interesting blogs, things from GitHub or Twitter, and all sorts of things we've seen, before everything fills up with AI.
Episode 61: What will the next generation of AI-powered PCs mean for your everyday computing—and how will features like on-device AI, privacy controls, and new processors transform our digital lives? Matt Wolfe (https://x.com/mreflow) is joined by Pavan Davuluri (https://x.com/pavandavuluri), Corporate Vice President of Windows and Devices at Microsoft, who's leading the charge on bringing AI to mainstream computers. In this episode of The Next Wave, Matt dives deep with Pavan into the world of AI PCs, exploring how specialized hardware like NPUs (Neural Processing Units) make AI more accessible and affordable. They break down the difference between CPUs, GPUs, and NPUs, and discuss game-changing Windows features like Recall—digging into the privacy safeguards and how AI can now run locally on your device. Plus, you'll hear Satya Nadella (https://x.com/satyanadella), Microsoft's CEO, share his vision for how agentic AI could revolutionize healthcare and what the future holds for AI-powered Windows experiences. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) NPUs: The Third Processor Revolution (05:41) NPU Efficiency in AI Devices (09:31) Windows Empowering Users Faster (13:00) Evolving Windows Ecosystem Opportunities (13:49) AI Enhancing M365 Copilot Research (15:43) Satya Nadella On AI and Healthcare — Mentions: Want the ultimate guide to Advanced Prompt Engineering? 
Get it here: https://clickhubspot.com/wbv Pavan Davuluri: https://www.linkedin.com/in/pavand/ Satya Nadella: https://www.linkedin.com/in/satyanadella/ Microsoft: https://www.microsoft.com/ Microsoft 365: https://www.microsoft365.com/ Microsoft Recall https://learn.microsoft.com/en-us/windows/ai/recall/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
How does a top AI company scale massive clusters and build AI for the enterprise? In this episode of The Liftoff with Keith, we talk to Ted Shelton, COO of Inflection AI, from the AI Infra Summit 2025. Ted shares how their team pivoted from consumers to enterprise after their Microsoft deal, why seamless infrastructure is key, and what it takes to build AI models that run on NVIDIA, AMD, and Intel.Learn why “getting to the no” is the smartest move for founders, how enterprises can embrace sovereign AI, and how Inflection's approach to model customization unlocks massive business value.
This week, we're diving into the world of PlayerUnknown's Battlegrounds (PUBG) and the highly anticipated new game from PUBG Studios! Is it a sequel, a spin-off, or something entirely unexpected? We break down the rumors, official teasers, and what fans can hope for in this next evolution of battle royale. Plus, we discuss PUBG's enduring legacy and how it stacks up against today's competition. Then, we shift gears to Computex 2025 Day 2, where the biggest names in tech unveiled cutting-edge hardware, from next-gen GPUs to AI-powered peripherals. We recap the most exciting announcements, surprise reveals, and what they mean for gamers and PC enthusiasts. Whether you're a PUBG diehard or a tech junkie, this episode has something for you. Drop in, loot up, and stay tuned for the latest! Our Patreon https://www.patreon.com/TechPrimeMedia Talking Gaming & Tech is Produced by Tech Prime Media and is part of the Dorkening Podcast Network and is brought to you by Deadly Grounds Coffee! https://youtu.be/7Y2rL7v75X4?si=iHKeO4TWA4njqmJF https://deadlygroundscoffee.com/ Send us your feedback online: https://pinecast.com/feedback/talking-gaming-tech/9037b3f8-99e0-4bdb-affb-1b3d3fc9020d This podcast is powered by Pinecast.
Dylan bonds with Nvidia's CFO and I try to keep the GPUs in actual democracies. Outro Music: FaceTime, Karencici, 2018. https://open.spotify.com/track/2PNDZp0ultOJrQL4AVENPO?si=46cdf72cdffb40a3 Learn more about your ad choices. Visit megaphone.fm/adchoices
Andrew Lindsey, CEO of FLEXNODE, joined us on JSA TV at Metro Connect USA to share how the company is supporting the growing demands of AI and the rapid evolution of digital infrastructure. From AI-driven GPUs to the need for liquid cooling, Flexnode is paving the way for a future-proof, scalable infrastructure with its modular, adaptable designs.
Build and run your AI apps and agents at scale with Azure. Orchestrate multi-agent apps and high-scale inference solutions using open-source and proprietary models, no infrastructure management needed. With Azure, connect frameworks like Semantic Kernel to models from DeepSeek, Llama, OpenAI's GPT-4o, and Sora, without provisioning GPUs or writing complex scheduling logic. Just submit your prompt and assets, and the models do the rest. Using Azure's Model as a Service, access cutting-edge models, including brand-new releases like DeepSeek R1 and Sora, as managed APIs with autoscaling and built-in security. Whether you're handling bursts of demand, fine-tuning models, or provisioning compute, Azure provides the capacity, efficiency, and flexibility you need. With industry-leading AI silicon, including H100s, GB200s, and advanced cooling, your solutions can run with the same power and scale behind ChatGPT. Mark Russinovich, Azure CTO, Deputy CISO, and Microsoft Technical Fellow, joins Jeremy Chapman to share how Azure's latest AI advancements and orchestration capabilities unlock new possibilities for developers. ► QUICK LINKS: 00:00 - Build and run AI apps and agents in Azure 00:26 - Narrated video generation example with multi-agentic, multi-model app 03:17 - Model as a Service in Azure 04:02 - Scale and performance 04:55 - Enterprise grade security 05:17 - Latest AI silicon available on Azure 06:29 - Inference at scale 07:27 - Everyday AI and agentic solutions 08:36 - Provisioned Throughput 10:55 - Fractional GPU Allocation 12:13 - What's next for Azure? 12:44 - Wrap up ► Link References For more information, check out https://aka.ms/AzureAI ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Nvidia, as you probably know, makes chips — more specifically, GPUs, which are needed to power artificial intelligence systems. But as AI adoption ramps up, why does it feel like Nvidia's still the only chipmaker in the game? In this episode, why the California-based firm is, for now, peerless, and which companies may be angling to compete. Plus: Dwindling tourists worry American retailers, Dick's Sporting Goods sticks to its partly-sunny forecast and the share of single women as first-time homebuyers grows.Every story has an economic angle. Want some in your inbox? Subscribe to our daily or weekly newsletter.Marketplace is more than a radio show. Check out our original reporting and financial literacy content at marketplace.org — and consider making an investment in our future.
In this episode of the Additive Snack Podcast, host Fabian Alefeld explores the critical role of mathematics in additive manufacturing with guest Harshil Goel, founder and CEO of Dyndrite. Harshil shares his unconventional entry into the additive manufacturing industry, driven by his deep background in mathematics and mechanical engineering. The conversation delves into how Dyndrite's software provides solutions for complex additive manufacturing challenges, from data preparation to materials and process development. Harshil also discusses various customer success stories and how their software helps streamline qualification processes, ultimately enhancing productivity. Additionally, they discuss the upcoming Dyndrite roadshow aimed at educating users on advanced additive manufacturing techniques, featuring hands-on sessions and practical demonstrations.

Comments about the show or wish to share your AM journey? Contact us at additive.snack@eos-na.com. The Additive Snack Podcast is brought to you by EOS. For more information about Dyndrite's innovative solutions, visit their website and connect with Harshil Goel on LinkedIn.

01:17 Meet Harshil Goel: From Mathematics to Additive Manufacturing
02:33 The Birth of Dyndrite: Solving Software Challenges
05:22 Understanding Dyndrite's Core Offerings
09:53 Dyndrite's Unique Approach to Build Preparation
15:07 Customer Success Stories and Real-World Applications
19:47 Empowering Engineers with Python Integration
23:04 Learning and Adapting to Dyndrite's Tools
26:51 Multilingual Proficiency and Family Background
27:36 Transitioning to Coding and Tool Integration
28:17 Optimizing Production with Dyndrite
30:05 Challenges and Innovations in Qualification
33:20 Deep Dive into Aviation Qualification
39:17 Additive Manufacturing Industry Trends
43:09 The Role of GPUs and AI in Additive Manufacturing
45:59 Dyndrite Roadshow and Conclusion
We're just a pair with a shorter show this week as we chat a bit more about Doom: The Dark Ages and the nature of PC system requirements (and the state of PC hardware and GPUs right now), plus Crashlands 2, Final Destination Bloodlines, then a bunch of your emails. This week's music: Finishing Move Inc. - Unchained Predator
Timestamps: 0:00 See ya on Wed, May 21 0:09 Epic's plan for Apple to block Fortnite 3:29 Intel Arc Pro B60, RX 9060 XT 4:27 OpenAI Codex, Grok's breakdown 5:50 MSI! 6:41 QUICK BITS INTRO 6:47 Spotify podcast play counts 7:14 The Steam data breach that wasn't 7:41 Australian rocket top fell off 8:07 BREAKING: Vader is bad guy NEWS SOURCES: https://lmg.gg/oRJxT Learn more about your ad choices. Visit megaphone.fm/adchoices
The AI revolution is underway, and the U.S. and China are racing to the top. At the heart of this competition are semiconductors—especially advanced GPUs that power everything from natural language processing to autonomous weapons. The U.S. is betting that export controls can help check China's technological ambitions. But will this containment strategy work—or could it inadvertently accelerate China's drive for self-sufficiency? Those who think chip controls will work argue that restricting China's access gives the U.S. critical breathing room to advance AI safely, set global norms, and maintain dominance. Those who believe chip controls are inadequate, or could backfire, warn that domestic chipmakers, like Nvidia and Intel, also rely on sales from China. Cutting off access could harm U.S. competitiveness in the long run, especially if other countries don't fully align with U.S. policy. As the race for AI supremacy intensifies, we debate the question: Can the U.S. Outpace China in AI Through Chip Controls? Arguing Yes: Lindsay Gorman, Managing Director and Senior Fellow of the German Marshall Fund's Technology Program; Venture Scientist at Deep Science Ventures Will Hurd, Former U.S. Representative and CIA Officer Arguing No: Paul Triolo, Senior Vice President and Partner at DGA-Albright Stonebridge Group Susan Thornton, Former Diplomat; Visiting Lecturer in Law and Senior Fellow at the Yale Law School Paul Tsai China Center Emmy award-winning journalist John Donvan moderates This debate was produced in partnership with Johns Hopkins University. This debate was recorded on May 14, 2025 at 6 PM at Shriver Hall, 3400 N Charles St Ste 14, in Baltimore, Maryland. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Episode 71: We chat about potential upcoming Intel Arc GPUs including the B770 and B580 24GB, discuss the RX 9060 XT and what AMD might do with pricing, and round out the chat with some discoveries about laptop GPUs.

CHAPTERS
00:00 - Intro
05:13 - Intel Arc B770, Is It Real?
22:10 - New Arc B580 Configurations?
26:18 - Lead-Up to the RX 9060 XT
39:04 - Price Considerations and Concerns with 9060 XT
1:04:30 - The RTX 5090 Laptop is a Joke
1:14:25 - Updates From Our Boring Lives

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Hosted on Acast. See acast.com/privacy for more information.
In Episode 6 of The Bitcoin Policy Hour, the Bitcoin Policy Institute team unpacks the emerging “Riyadh Accord,” a sweeping geopolitical realignment where the United States, Saudi Arabia, and other Gulf nations are bundling AI, Bitcoin, and techno-industrial leverage into a new framework of global influence.

As Blackwell chips begin to replace F-35s as diplomatic bargaining tools, and sovereign wealth funds quietly accumulate Bitcoin, Riyadh is fast becoming the epicenter of digital energy, intelligence infrastructure, and monetary power. The conversation explores how U.S. foreign policy is shifting from military entanglements toward high-tech trade agreements and capital co-investments — with AI and Bitcoin at the core.

PLUS, they explore the recent slew of BTC treasury companies amping up activity and how they fit into the picture of jurisdictional and memetic arbitrage.

Chapters:
00:00 - Introduction
01:45 - What Is the “Riyadh Accord”?
07:30 - Blackwell Chips Replace F-35s in Middle East Bargains
13:30 - AGI Infrastructure: Will the AI Run Happen in Riyadh?
17:00 - A New Multipolar World Centered on Energy and Compute
21:00 - Samourai Wallet, Legal Overreach, and Bitcoin's Core Ethos
26:30 - Policy Hypocrisy: Bitcoin Freedom vs. Surveillance State
31:00 - AI Feudalism and the Fight for Decentralized Money
36:00 - The Open Source AI vs. Corporate Subscription Future
41:00 - Bearer Assets: Bitcoin, GPUs, and Energy as Sovereign Tools
46:00 - Global Reflexivity and Bitcoin Treasury Companies
51:00 - Metaplanet, Nakamoto, and the Meme Wrapped in Arbitrage
56:00 - Bitcoin's Geopolitical Moment: What Comes Next?
01:01:00 - Closing Thoughts: Bitcoin Banks, Policy Risks, and Meme Economics

⭐ Join top policymakers, technologists, and Bitcoin industry leaders at the 2025 Bitcoin Policy Summit, June 25–26 in Washington, D.C.
In 2014, when Lisa Su took over as CEO of Advanced Micro Devices, AMD was on the verge of bankruptcy. Su bet hard on hardware and not only pulled the semiconductor company back from the brink, but also led it to surpass its historical rival, Intel, in market cap. Since the launch of ChatGPT made high-powered chips like AMD's “sexy” again, demand for chips has intensified exponentially, but so has the public spotlight on the industry — including from the federal government. In a live conversation, at the Johns Hopkins University Bloomberg Center, as part of their inaugural Discovery Series, Kara talks to Su about her strategy in face of the Trump administration's tariff and export control threats, how to safeguard the US in the global AI race, and what she says when male tech leaders brag about the size of their GPUs. Listen to more from On with Kara Swisher here. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Gary Marcus is a cognitive scientist, author, and longtime AI skeptic. Marcus joins Big Technology to discuss whether large‑language‑model scaling is running into a wall. Tune in to hear a frank debate on the limits of “just add GPUs” and what that means for the next wave of AI. We also cover data‑privacy fallout from ad‑driven assistants, open‑source bio‑risk fears, and the quest for interpretability. Hit play for a reality check on AI's future — and the insight you need to follow where the industry heads next. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Today we're in conversation with Siddhant Mehta, Project Manager at Skanska, to explore how AI is transforming construction. From choosing the right tools to critiquing SaaS pricing models, Sid shares insights on tech adoption, AI coding, and the future of project management.

00:46 – Sid's Journey Abroad
Sid Mehta shares his story from Mumbai to the U.S., managing multimillion-dollar projects and finding his place in construction management.

02:03 – Building Tech Networks
How Skanska leverages emerging tech groups, vendor evaluations, and peer networks to spread innovation across teams.

03:55 – Tech Adoption Realities
Sid challenges perceptions of slow adoption in construction, highlighting why pilot projects need time to show results.

05:14 – The Feedback Gap
Why construction tech tools often miss the mark, and how missing field feedback hurts tool development.

06:43 – Choosing the Right Tool
Sid explains why not every tech solution fits every project, stressing the importance of aligning tools with project type and phase.

09:06 – SaaS Pricing Rant
A frank critique of SaaS pricing in construction, questioning project-based fees versus simpler subscriptions.

12:00 – Naming Names (Kinda)
A playful yet pointed critique of familiar industry pricing models, without naming names (but we all know who).

17:05 – Rise of AI Coding
Exploring tools like Replit, Claude, and Cursor, and the rise of “vibe coding” in construction tech and software development.

23:02 – AI's Development Impact
How AI coding shifts the role of developers, and why front-end engineering faces more disruption than back-end.

28:00 – Data Centers & Demand
How AI's growth drives demand for data centers, reshaping infrastructure needs for GPUs, power, and cooling.

35:00 – Environmental Impacts
A look at the ecological consequences of data center expansion, from water usage to energy demands.

40:48 – AI Saves the Day
Real-world examples of AI replacing executive assistants, saving hours on email, scheduling, and admin tasks in construction.

45:00 – Skanska's Internal AI
How Skanska built internal chatbots to automate project schedules, saving schedulers hours every week.

47:26 – Ripple Effect of AI
Sid reflects on how AI's time savings can scale across thousands of employees, transforming workflows organization-wide.

50:00 – Marketing's AI Shift
Why SEO strategies are changing in an AI world, and how creative content is being reshaped by generative tools.

54:00 – AI's Rapid Acceleration
Closing thoughts on how quickly AI is evolving, and why getting on board now is key for construction leaders.

Go build something awesome!

CHECK OUT THE PARTNERS THAT MAKE OUR SHOW POSSIBLE: https://www.brospodcast.com/partners
FIND US ONLINE:
- Our website: https://www.brospodcast.com
- LinkedIn: /constructionbrospodcast
- Instagram: /constructionbrospodcast
- TikTok: https://www.tiktok.com/@constructionbrothers?lang=en
- Eddie on LinkedIn: /eddie-c-057b3b11
- Tyler on LinkedIn: /tylerscottcampbell

If you enjoy the podcast, please rate us on Apple Podcasts or wherever you listen to us! Thanks for listening!
In 2014, when Lisa Su took over as CEO of Advanced Micro Devices, AMD was on the verge of bankruptcy. Su bet hard on hardware and not only pulled the semiconductor company back from the brink, but also led it to surpass its historical rival, Intel, in market cap. Since the launch of ChatGPT made high-powered chips like AMD's “sexy” again, demand for chips has intensified exponentially, but so has the public spotlight on the industry, including from the federal government. In a live conversation at the Johns Hopkins University Bloomberg Center, as part of its inaugural Discovery Series, Kara talks to Su about her strategy in the face of the Trump administration's tariff and export control threats, how to safeguard the US in the global AI race, and what she says when male tech leaders brag about the size of their GPUs. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram, TikTok, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices