In this episode, we welcome Lead Principal Technologist Hari Kannan to cut through the noise and tackle some of the biggest myths surrounding AI data management and the revolutionary FlashBlade//EXA platform. With GPU shipments now outstripping CPUs, the foundation of modern AI is shifting, and legacy storage architectures are struggling to keep up. Hari dives into the implications of this massive GPU consumption, setting the stage for why a new approach is desperately needed at companies driving serious AI initiatives.
Hari dismantles three critical myths that hold IT leaders back. First, he discusses how traditional storage is ill-equipped for modern AI's millions of small, concurrent files, where metadata performance is the true bottleneck—a problem FlashBlade//EXA solves with its metadata-data separation and single namespace. Second, he addresses the outdated notion that high-performance AI is file-only, highlighting FlashBlade//EXA's unified, uncompromising delivery of both file and object storage at exabyte scale and peak efficiency. Finally, Hari explains that GPUs are only as good as the data they consume, countering the belief that raw horsepower is all that matters. FlashBlade//EXA addresses this by delivering reliable, scalable throughput, efficient DirectFlash Modules of up to 300 TB, and the metadata performance required to keep expensive GPUs fully utilized and models training faster.
Join us as we explore the blind spots in current AI data strategies during our "Hot Takes" segment and recount a favorite FlashBlade success story. Hari closes with a compelling summary of how Pure Storage's complete portfolio is suited to provide the complementary data management essential for scaling AI. Tune in to discover why FlashBlade//EXA is the no-compromise, exabyte-scale solution built to keep your AI infrastructure running at full potential.
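The small-file myth is ultimately arithmetic: when files are tiny, the fixed per-file metadata operations dominate total transfer time. A back-of-envelope sketch makes the point (all figures here - link bandwidth, number of metadata round-trips, per-op latency - are illustrative assumptions, not Pure Storage or FlashBlade//EXA measurements):

```python
# Back-of-envelope: why metadata dominates small-file AI workloads.
# All numbers below are illustrative assumptions, not vendor measurements.

def transfer_fraction(file_bytes, bandwidth_bps, metadata_ops, op_latency_s):
    """Fraction of total time spent actually moving data (vs. metadata)."""
    data_time = file_bytes / bandwidth_bps
    meta_time = metadata_ops * op_latency_s
    return data_time / (data_time + meta_time)

# A 64 KB training sample read over an assumed 10 GB/s link, with ~4
# metadata round-trips (lookup, open, getattr, close) at 200 us each:
small = transfer_fraction(64 * 1024, 10e9, 4, 200e-6)

# The same per-file protocol cost against a 1 GB checkpoint file:
large = transfer_fraction(1 * 1024**3, 10e9, 4, 200e-6)

print(f"small file: {small:.1%} of time moving data")
print(f"large file: {large:.1%} of time moving data")
```

At these assumed numbers, the 64 KB read spends under 1% of its time moving data while the 1 GB read spends over 99% - which is why metadata performance, not raw bandwidth, gates workloads made of millions of small concurrent files.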
For more information, visit: https://www.pure.ai/flashblade-exa.html
Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/
00:00 Intro and Welcome
04:30 Primer on FlashBlade
11:32 Stat of the Episode on GPU Shipments
13:25 What is FlashBlade//EXA
18:58 Myth #1: Traditional Storage Challenges for AI Data
22:01 Myth #2: AI Workloads are not just File-based
26:42 Myth #3: AI Needs more than just GPUs
31:35 Hot Takes Segment
David Kirtley is a nuclear fusion engineer and CEO of Helion Energy, a company working on building the world's first commercial fusion power plant by 2028. Thank you for listening ❤
Check out our sponsors: https://lexfridman.com/sponsors/ep485-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/david-kirtley-transcript
CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact
EPISODE LINKS:
David's X: https://x.com/dekirtley
David's LinkedIn: https://bit.ly/4qX0KXp
Helion: https://www.helionenergy.com/
Helion's YouTube: https://youtube.com/HelionEnergy
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
UPLIFT Desk: Standing desks and office ergonomics. Go to https://upliftdesk.com/lex
Fin: AI agent for customer service. Go to https://fin.ai/lex
Miro: Online collaborative whiteboard platform. Go to https://miro.com/
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
OUTLINE:
(00:00) - Introduction
(03:00) - Sponsors, Comments, and Reflections
(11:35) - Nuclear fission vs fusion
(21:35) - Physics of E=mc^2
(26:50) - Is nuclear fusion safe?
(32:11) - Chernobyl
(38:38) - Geopolitics
(40:33) - Extreme scenarios
(47:28) - How nuclear fusion works
(1:20:20) - Extreme temperatures
(1:25:21) - Fusion control and simulation
(1:37:15) - Electricity from fusion
(2:11:20) - First fusion power plant in 2028
(2:18:13) - Energy needs of GPU clusters
(2:28:38) - Kardashev scale
(2:36:33) - Fermi Paradox
PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
Become a member of this channel and get perks: https://www.youtube.com/channel/UCJIPFjZSCWR15_jxBaK2fQQ/join
Have you noticed that the money in the AI ecosystem seems endless? Nvidia's share price has soared, and the valuations of OpenAI and xAI are staggeringly high. But have you ever wondered how this money actually flows? Investors give money to AI companies, the AI companies spend it on GPUs, and then Nvidia turns around and invests its profits back into those same AI companies... sound familiar? Even crazier, some are pledging GPUs as loan collateral and using the borrowed money to buy even more GPUs. In this episode, we talk about:
This week, Addy and Joey dive into the Nano Banana 2 leaks and their potential implications for media authenticity. Then we explore Comfy Cloud's powerful browser-based workflow ($20/month for 8 GPU hours daily), Veo 3.1's new camera controls, and real-time AI video manipulation that's bringing unprecedented control to creators. Plus: Figma acquires Weavy, ByteDance launches affordable video upscaling, and practical AI tools that are transforming post-production workflows.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
“Predictions are hard,” Yogi Berra once quipped, “especially about the future.” Yes, they are. But in today's AI boom/bubble, how exactly can we predict the future? According to Silicon Valley venture capitalist Aman Verjee, the key to the future lies in the past. In his new book, A Brief History of Financial Bubbles, Verjee looks at history - particularly the 17th-century Dutch tulip mania and the railway mania of 19th-century England - to make sense of today's tech economics. So what does history teach us about the current AI exuberance: boom or bubble? The Stanford- and Harvard-educated Verjee, a member of the PayPal Mafia who wrote the company's first business plan with Peter Thiel, and who now runs his own venture fund, brings both historical perspective and insider experience to this multi-trillion-dollar question. Today's market is overheated, the VC warns, but it's more nuanced than 1999. The MAG-7 companies are genuinely profitable, unlike the dotcom darlings. Nvidia isn't Cisco. Yet “lazy circularity” in AI deal-making and pre-seed valuations hitting $50 million suggest traditional symptoms of irrational exuberance are returning. Even Yogi Berra might predict that.
* Every bubble has believers who insist “this time is different” - and sometimes they're right. Verjee argues that the 1999 dotcom bubble actually created lasting value through companies like Amazon, PayPal, and the infrastructure that powered the next two decades of growth. But the concurrent telecom bubble destroyed far more wealth through outright fraud at companies like Enron and WorldCom.
* Bubbles always occur in the world's richest country during periods of unchallenged hegemony. Britain dominated globally during its 1840s railway mania. America was the sole superpower during the dotcom boom. Today's AI frenzy coincides with American technological dominance - but also with a genuine rival in China, making this bubble fundamentally different from its predecessors.
* The current market shows dangerous signs but isn't 1999. Unlike the dotcom era, when 99% of the fiber-optic cable laid was “dark” (unused), Nvidia could double GPU production and still sell every chip. The MAG-7 trade at 27-29 times earnings versus the S&P 500's 70x multiple in 2000. Real profitability matters - but $50 million pre-seed valuations and circular revenue deals between AI companies echo familiar patterns of excess.
* Government intervention in markets rarely ends well. Verjee warns against America adopting an industrial policy of “picking winners” - pointing to Japan's 1980s bubble as a cautionary tale. Thirty-five years after its collapse, Japan's GDP per capita remains unchanged. OpenAI is not too big to fail, and shouldn't be treated as such.
* Immigration fuels American innovation - full stop. When anti-H1B voices argue for restricting skilled immigration, Verjee points to the counter-evidence: Elon Musk, Sergey Brin, Sundar Pichai, Satya Nadella, Max Levchin, and himself - all H1B visa holders who created millions of American jobs and trillions in shareholder value. Closing that pipeline would be economically suicidal.
Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
Intel is entering a new phase with the launch of the Intel Core Ultra Series 3 processors, known internally as Panther Lake, which arrive with brand-new technologies such as RibbonFET and PowerVia, plus the new Intel 18A lithography, considered strategic for the future of semiconductors in the West. In today's episode of the Canaltech Podcast, Sérgio Oliveira talks with Yuri Dalian, sales engineer at Intel Brasil, to understand what changes in practice for everyday notebook users: longer battery life, more performance, less heat, and an architecture completely redesigned for AI, integrated graphics, and energy efficiency. Yuri also explains why Intel went back to investing in traditional E-cores, how the modular chiplet design that separates CPU, GPU, and NPU works, and the geopolitical impact of the company manufacturing on American soil again. Also in this episode: the Senate approves incentives for companies that train young people in IT, a Brazilian hospital uses augmented-reality glasses in an unprecedented surgery, and a new biometric system promises to retire Face ID. This podcast was scripted and presented by Fernanda Santos, with reporting by Marcelo Fischer, Nathan Vieira, and Vinicius Moschen, under the coordination of Anaísa Catucci. The soundtrack is by Guilherme Zomer, editing by Lívia Strazza, and the cover art by Erick Teixeira. See omnystudio.com/listener for privacy information.
Meta's French head of AI research is on his way out, a cyberattack orchestrated by an AI, a sovereign microprocessor: another episode packed with tech and AI!
A volatile week in markets as Mackenzie Sigalos breaks down the big crypto selloff and Steve Liesman explains the Fed's dilemma. Sam Stovall of CFRA and Jose Rasco of HSBC analyze market reactions, while Seema Mody highlights the GPU-driven tech selloff and debates over Oracle and CoreWeave valuations. Futurum's Daniel Newman on how this week fits into the broader AI narrative. Katie Stockton of Fairlead walks through the technicals on Nvidia, the VIX and more. Courtney Reagan discusses Walmart's new CEO. Finally, Adam Crisafulli of Vital Knowledge previews next week's market catalysts. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode, Conor and Bryce record live from C++ Under the Sea! We interview Ray and Paul from NVIDIA, talk about Parrot, scans and more!
Link to Episode 260 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Socials
ADSP: The Podcast: Twitter
Conor Hoekstra: Twitter | BlueSky | Mastodon
Bryce Adelstein Lelbach: Twitter
About the Guests:
Ray has been a Senior Systems Software Engineer at NVIDIA since 2022. He studied Software Engineering at the University of Amsterdam, founded the Dutch C++ Meetup in 2013, and has co-organized C++ Under the Sea since 2023. He has been programming for more than 25 years; his journey began on his father's Panasonic CF-2700 MSX, and he has been hooked ever since. He is also 'the listener' of ADSP: The Podcast.
Paul Grosse-Bley was first introduced to parallel programming with C+MPI on a student exchange to Umeå (Sweden) in 2017 while studying Physics. In the following years he learned more about MPI, OpenMP, OpenACC, Thrust/parSTL and CUDA C++. After finishing his Master's degree in Physics at Heidelberg University (Germany) in 2021, he became a PhD candidate in Computational Science and Engineering, researching the acceleration of iterative solvers in sparse linear algebra while serving as head tutor for a course on GPU Algorithm Design. He started using Thrust in 2019, shortly before learning C++, and became enamored with parallel algorithms, which led to numerous answers on StackOverflow, contributions on GitHub, his NVIDIA internship in the summer of 2025, and a full position starting in February 2026.
Show Notes
Date Recorded: 2025-10-10
Date Released: 2025-11-14
NVIDIA BCM (Base Command Manager)
C++11 std::ignore
C++20 std::bind_front
Parrot
Parrot on GitHub
Parrot YouTube Video: 1 Problem, 7 Libraries (on the GPU)
thrust::inclusive_scan
Single-pass Parallel Prefix Scan with Decoupled Look-back by Duane Merrill & Michael Garland
Prefix Sums and Their Applications by Guy Blelloch
Parallel Prefix Sum (Scan) with CUDA
NVIDIA On-Demand Videos
A Faster Radix Sort Implementation
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library
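For listeners new to scans: the linked Merrill & Garland and Blelloch papers are about the prefix-scan primitive that thrust::inclusive_scan exposes. A minimal sequential Python reference, illustrating only the semantics that the single-pass decoupled look-back algorithm reproduces in parallel (not the GPU implementation itself):

```python
from itertools import accumulate

def inclusive_scan(xs, op=lambda a, b: a + b):
    """Sequential reference: out[i] = xs[0] op xs[1] op ... op xs[i]."""
    out = []
    running = None
    for x in xs:
        # Fold each element into the running result and record every prefix.
        running = x if running is None else op(running, x)
        out.append(running)
    return out

data = [3, 1, 4, 1, 5]
print(inclusive_scan(data))        # running sums: [3, 4, 8, 9, 14]
print(list(accumulate(data)))      # stdlib equivalent of the default + scan
print(inclusive_scan(data, max))   # works for any associative operator
```

Because the operator only needs to be associative, this one primitive also underlies partitioning steps in radix sort and many of the other GPU algorithms discussed in the episode.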
Live from Morgan Stanley's European Tech, Media and Telecom conference in Barcelona, our roundtable of analysts discuss artificial intelligence in Europe, and how the region could enable the Agentic AI wave.
Read more insights from Morgan Stanley.
----- Transcript -----
Paul Walsh: Welcome to Thoughts on the Market. I'm Paul Walsh, Morgan Stanley's European head of research product. We are bringing you a special episode today live from Morgan Stanley's 25th European TMT Conference, currently underway. The central theme we're focused on: Can Europe keep up from a technology development perspective?
It's Wednesday, November the 12th at 8:00 AM in Barcelona.
Earlier this morning I was live on stage with my colleagues Adam Wood, Head of European Technology and Payments; Emmet Kelly, Head of European Telco and Data Centers; and Lee Simpson, Head of European Technology Hardware. The larger context of our conversation was tech diffusion, one of the four key themes we've identified at Morgan Stanley Research for 2025. For the panel, we wanted to focus further on agentic AI in Europe, AI disruption as well as adoption, and data centers. We started off with my question to Adam: I asked him to frame our conversation around how Europe is enabling the Agentic AI wave.
Adam Wood: I mean, I think obviously the debate around GenAI, and particularly enterprise software, my space, has changed quite a lot over the last three to four months. Maybe it's good if we go back a little bit to the period before that – when everything was more positive in the world. And I think it is important to think about why we were excited, before we started to debate the outcomes. And the reason we were excited was we've obviously done a lot of work with enterprise software to automate business processes. That's ultimately what software is about: automating and standardizing business processes so they can be done more efficiently and more repeatably.
We'd done work in the past on RPA vendors who tried to take the automation further. And we were getting numbers that 30 – 40 percent of enterprise processes had been automated in this way. But I think the feeling was it was still the minority. And the reason for that was it was quite difficult with traditional coding techniques to go a lot further. If you take the call center as a classic example, it's very difficult to code what every response is going to be to a human interacting with a call center worker. It's practically impossible. And so what we did for a long time was, where we got into those situations where it was difficult to code every outcome, we'd leave it with labor. And we'd do the labor arbitrage, often moving from onshore workers to offshore workers, but we'd still leave it as a relatively manual process with human intervention in it. I think the really exciting thing about GenAI is it completely transforms that equation, because if computers can understand natural human language – again, to our call center example – we can train the models on every call center interaction. And then, first of all, we can help the call center worker predict what the responses are going to be to incoming queries. And then maybe over time we can even automate that role. I think it goes a lot further than call center workers. We can go into finance, where a lot of work is still either manual data re-entry or remediation of errors. And again, we can automate a lot more of those tasks. That's obviously where SAP's involved. But basically what I'm trying to say is: if we expand massively the capabilities of what software can automate, surely that has to be good for the software sector; that has to expand the addressable markets of what software companies are going to be able to do. Now we can have a secondary debate around: Is it going to be the incumbents? Is it going to be corporates that do more themselves?
Is it going to be new entrants that benefit from this? But I think it's very hard to argue that if you expand dramatically the capabilities of what software can do, you don't get a benefit from that in the sector. Now, spending is a little bit more consumer-led today, and the enterprises are lagging a little bit. But I think for us, that's just a question of timing. And we think we'll see that come through.
I'll leave it there. But I think there's lots of opportunities in software. We're probably yet to see them come through in numbers, but that shouldn't mean we don't think they're going to happen.
Paul Walsh: Yeah. We're going to talk separately about AI disruption as we go through this morning's discussion. But what's the pushback you get, Adam, to this notion of the addressable market expanding?
Adam Wood: It's one of a number of things. And we get onto the multiple bear cases that come up on enterprise software. It would be some combination of the following. First: if coding becomes dramatically cheaper, and we can set up user interfaces on the fly in the morning that can query data sets, and we can access those data sets almost in an automated way – well, maybe companies just do this themselves, and we move from a world where we've been outsourcing software to third-party software vendors to one where we do more of it in-house. That would be one. The other one would be that the barriers to entry in software have just come down dramatically. It's so much easier to write the code, to build a software company and to get out into the market, so it's going to be new entrants that challenge the incumbents, and that will just bring price pressure on the whole market. So, although what we automate gets bigger, the price we charge to do it comes down. The third one would be the seat-based pricing issue: a lot of software vendors to date have expressed the value they deliver to customers through how many seats of the software you have in house. Well, if we take out 10 – 20 percent of your HR department because we make them 10, 20, 30 percent more efficient, does that mean we pay the software vendor 10, 20, 30 percent less? And so again, we're delivering more value, we're automating more and making companies more efficient, but the value doesn't accrue to the software vendors. It's some combination of those themes, I think, that people would worry about.
Paul Walsh: And Lee, let's bring you into the conversation here as well, because around this theme of enabling the agentic AI wave, we identified three main enabler sectors. Obviously Adam's with the software side, cap goods being the other one that we mentioned in the work that we've done. But obviously semis are also an important piece of this puzzle. Walk us through your thoughts, please.
Lee Simpson: Sure. I think from a hardware perspective – and really we're talking about semiconductors here, and possibly even just the equipment guys, specifically, when seeing things through a European lens – it's been a bonanza. We've seen quite a big build-out, obviously, for GPUs. We've seen incredible new server architectures going into the cloud. And now we're at the point where we're changing things a little bit. Does the power architecture need to be changed? Does the nature of the compute need to change? And with that, the development and the supply need to move as well. So, we're now seeing the mantle being picked up by the AI guys at the very leading edge of logic. Someone has to put the equipment in the ground, and the equipment guys are being leaned into. And you're starting to see that change in the order book now. Now, I labor this point largely because we'd been seen as laggards, frankly, in the last couple of years. It'd been a U.S. story, a GPU-heavy story.
But I think for us now we're starting to see a flipping of that, and it's like: hold on, these are beneficiaries. And I really think it's 'cause that bow wave has changed in logic.
Paul Walsh: And Lee, you talked there in your opening remarks about the extent to which the focus has been predominantly on the U.S. ways to play, which is totally understandable for global investors. And obviously this has been an extraordinary year of ups and downs as it relates to the tech space. What's your sense in terms of what you're getting back from clients? Is the focus maybe shifting from some of those U.S. ways to play to Europe? Are you sensing that shift taking place? How are clients interacting with you as it relates to the focus between the opportunities in the U.S. and Asia, frankly, versus Europe?
Lee Simpson: Yeah. I mean, Europe's coming more into the debate. More people are willing to talk to some of the players. We've got other players in the analog space playing into that as well. But I think for me, if we take a step back and keep this at the global level, there's a huge debate now around: What is the size of build-out that we need for AI? What is the nature of the compute? What is the power pool? What are the power budgets going to look like in data centers? And Emmet will talk to that as well. So, some of that argument is now centering on Europe – how do they play into this? But for me, most of what we're finding people debate is: Is a 20-25 gigawatt year feasible for [20]27? Is 30-35 gigawatts feasible for [20]28? And so, I think that's the debate line at this point – not so much Europe in the debate; it's more what that global pool is going to look like.
Paul Walsh: Yeah. This whole infrastructure rollout's got significant implications for your coverage universe…
Lee Simpson: It does. Yeah.
Paul Walsh: Emmet, it may be a bit tangential for the telco space, but was there anything you wanted to add there as it relates to this agentic wave piece, from a telco's perspective?
Emmet Kelly: Yeah, there's a consensus view out there that telcos are not really that tuned into the AI wave at the moment – just from a stock market perspective. I think it's fair to say some telcos have been a source of funds for AI, and we've seen that in a stock market context, especially in the U.S. telco space versus U.S. tech over the last three to six months. So, there are a lot of question marks about telco exposure to AI. And I think the telcos have struggled to put their case forward about how they can benefit from AI. They talked 18 months ago about using chatbots. They talked about smart networks, et cetera. But they haven't really advanced their case since then. And we don't see telcos involved much in the data center space. That's understandable, because investing in data centers, as we've written, is extremely expensive. If I rewind the clock two years, a good-sized data center was 1 megawatt. A year ago, that number was somewhere around 50 to 100 megawatts. And today a big data center is a gigawatt. Now, if you want to roll out a 100 megawatt data center – which is a decent-sized data center, but not huge – that will cost roughly 3 billion euros. So telcos have yet to really prove that they've got much positive exposure to AI.
Paul Walsh: That was an edited excerpt from my conversation with Adam, Emmet and Lee. Many thanks to them for taking the time out for that discussion, and to the live audience for hearing us out.
We will have a concluding episode tomorrow, where we dig into tech disruption and data center investments. So please do come back for that very topical conversation. As always, thanks for listening.
Let us know what you think about this and other episodes by leaving us a review wherever you get your podcasts. And if you enjoy Thoughts on the Market, please tell a friend or colleague to tune in today.
The infrastructure behind AI is massive - learn to leverage it with our free guide to advanced prompt engineering: https://clickhubspot.com/fnw
Episode 84: What's the real infrastructure powering every mind-blowing AI app and model you see today - and are we heading for a “compute bubble”? Nathan Lands (https://x.com/NathanLands) is joined by Evan Conrad, CEO of SF Compute and a leading expert in data centers and AI infrastructure. Evan, previously an AI audio model founder, now leads SF Compute - building a groundbreaking spot market for AI compute that transforms supercomputers into a tradable, financeable commodity. He's at the heart of the rapidly growing AI data center industry, shaping how AI is built, scaled, and funded. This episode pulls back the curtain on the biggest physical build-out in tech history - compute infrastructure. Nathan and Evan break down how AI companies actually get the raw power they need, the economics behind GPU clusters, credit-risk bubbles, power constraints in the US vs. China, and why making compute tradable could make or break the future of AI. Whether you're an investor, founder, or just love tech, this is your crash course in the “invisible” industry driving the AI revolution.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
(00:00) AI Cluster Economics Explained
(05:59) Venture Capital Dependency Risks
(09:34) GPU Cloud Requires Long-Term Contracts
(10:23) GPU Contracts and Market Solution
(13:57) US vs China: AI Competition
(19:33) Crypto vs. Automation Ambitions
(22:03) US Plutocracy and Environmental Barriers
(26:03) GPU Cluster Investment Management
(26:55) Compute as Real Estate Model
(29:52) Future Skills for a 12-Year-Old
(34:20) Automated In-Home Delivery System
—
Mentions:
Evan Conrad: https://www.linkedin.com/in/evan-conrad
SF Compute: https://sfcompute.com/
Pipedream: https://pipedream.com/
Udio: https://www.udio.com/
Suno: https://suno.com/
Anthropic: https://www.anthropic.com/
Waymo: https://waymo.com/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow
—
Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
AI is devouring the planet's electricity... already using up to 2% of global energy and projected to hit 5% by 2030. But a Spanish-Canadian company, Multiverse Computing, says it can slash that energy footprint by up to 95% without sacrificing performance.
They specialize in tiny AI: one model has the processing power of just 2 fruit fly brains. Another tiny model lives on a Raspberry Pi.
The opportunities for edge AI are huge. But the opportunities in the cloud are also massive.
In this episode of TechFirst, host John Koetsier talks with Samuel Mugel, Multiverse's CEO, about how quantum-inspired algorithms can drastically compress large language models while keeping them smart, useful, and fast. Mugel explains how their approach - intelligently pruning and reorganizing model weights - lets them fit functioning AIs into hardware as tiny as a Raspberry Pi or the equivalent of a fly's brain.
They explore how small language models could power edge AI, smart appliances, and robots that work offline and in real time, while also making AI more sustainable, accessible, and affordable. Mugel also discusses how ideas from quantum tensor networks help identify only the most relevant parts of a model, and how the company uses an “intelligently destructive” approach that saves massive compute and power.
00:00 – AI's energy crisis
01:00 – A model in a fly's brain
02:00 – Why tiny AIs work
03:00 – Edge AI everywhere
05:00 – Agent compute overload
06:00 – 200× too much compute
07:00 – The GPU crunch
08:00 – Smart matter vision
09:00 – AI on a Raspberry Pi
10:00 – How compression works
11:00 – Intelligent destruction
13:00 – General vs. narrow AIs
15:00 – Quantum inspiration
17:00 – Quantum + AI future
18:00 – AI's carbon footprint
19:00 – Cost of using AI
20:00 – Cloud to edge shift
21:00 – Robots need fast AI
22:00 – Wrapping up
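Multiverse's actual compression uses quantum-inspired tensor networks. As a rough illustration of the "keep only the most relevant parts" idea, here is a simple magnitude-pruning sketch in Python - a hypothetical stand-in for intuition only, not the company's algorithm:

```python
def prune_by_magnitude(weights, keep_fraction):
    """Zero out all but the largest-magnitude weights in a 2D matrix.
    A crude stand-in for 'intelligently destructive' compression;
    Multiverse's real method uses quantum-inspired tensor networks."""
    flat = sorted((abs(w) for row in weights for w in row), reverse=True)
    k = max(1, int(len(flat) * keep_fraction))
    threshold = flat[k - 1]  # smallest magnitude we still keep
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in weights]

# A toy 3x3 weight matrix; keep only the top half by magnitude.
W = [[0.9, -0.05, 0.4],
     [0.01, -0.8, 0.002],
     [0.3, 0.07, -0.6]]

pruned = prune_by_magnitude(W, keep_fraction=0.5)
kept = sum(1 for row in pruned for w in row if w != 0.0)
print(f"kept {kept} of 9 weights")
print(pruned)
```

Real tensor-network compression is far more structured than this - it factorizes weight matrices rather than simply zeroing entries - but the sketch shows the flavor of discarding the least informative parameters while preserving the dominant ones.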
Studio Ulster, a world-class virtual production company in Northern Ireland, has collaborated with Dell Technologies to elevate its virtual production capabilities. By leveraging Dell's AI infrastructure solutions, Studio Ulster is set to redefine the future of on-screen innovation. Why does it matter? Studio Ulster's £72 million virtual production facility in Belfast positions Northern Ireland as a global leader in virtual production. Developed by Ulster University in partnership with Belfast Harbour Commission and supported by Northern Ireland Screen, the facility is home to some of the world's most advanced virtual production technologies, pushing the boundaries of cinematic storytelling through the use of LED panels to create real-time, in-camera digital environments. The facility also houses CoSTAR Screen Lab, an integrated R&D lab driving advancements in screen and performance technology. Advancing Creativity Through AI Innovation Dell PowerEdge R760 servers provide the computing power required to handle the complex and resource-intensive workloads of virtual production. This technology supports multiple production stages, accelerates workflows and raises the bar for visual quality, enabling teams to bring their cinematic visions to life with confidence and ease. Leveraging Dell's extensive GPU technology ecosystem, the Dell team identified and deployed the optimal graphics cards to meet the environment's demanding rendering requirements, while ensuring seamless compatibility with the Unreal Engine matrix. Dell PowerScale's advanced AI capabilities are transforming how Studio Ulster delivers cutting-edge virtual production solutions. With trillions of data points generated every day, teams can now train machine learning models directly within their workflow - empowering artists to create and customise virtual sets quickly and efficiently from existing libraries. This saves production teams time and resources by eliminating the need to build sets from scratch.
Dell PowerScale extends this high-performance foundation with next-generation data management, supporting intensive motion capture and 3D/4D scanning workflows. Its robust, scalable architecture ensures that massive data volumes move securely and quickly. Driving Sustainability and Global Impact Virtual production is transforming entertainment mediums everywhere, from blockbuster films to hit television shows to AAA gaming titles. It's not only faster and more cost-effective than traditional methods but also more sustainable. Ulster University's research shows that virtual production can reduce carbon emissions by up to 50% compared to conventional filming. The studio further amplifies its sustainability efforts by operating on 100% renewable energy and maintaining a BREEAM Excellent certification. Dell and Ulster University share a long-standing research partnership, spanning health, life sciences, and digital media. This collaboration has fueled innovation through PhD research funding and joint projects in media and entertainment. Professor Declan Keeney, CEO of Studio Ulster, said: "As we expand our virtual production capabilities, having the right infrastructure to manage intensive computational workloads is essential. Dell's expertise in compute and storage makes it the ideal partner to support our needs today and in the future. From managing terabytes of daily data to unlocking AI's potential, Dell's solutions are integral to how we're using technology to develop cutting-edge solutions within the entertainment industry." Mark Hopkins, General Manager, Ireland and Northern Ireland, Dell Technologies, said: "With AI transformation accelerating, Dell is empowering businesses across the island of Ireland to seamlessly adopt AI, drive faster insights, improve efficiency, and accelerate business outcomes. Together, with Studio Ulster, we're pioneering advancements in creative production, filmmaking, and immersive experiences for global audiences." 
Andy Pernsteiner is the Field CTO at VAST Data, working on large-scale AI infrastructure, serverless compute near data, and the rollout of VAST's AI Operating System.
The GPU Uptime Battle // MLOps Podcast #346 with Andy Pernsteiner, Field CTO of VAST Data.
Huge thanks to VAST Data for supporting this episode!
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
Most AI projects don't fail because of bad models; they fail because of bad data plumbing. Andy Pernsteiner joins the podcast to talk about what it actually takes to build production-grade AI systems that aren't held together by brittle ETL scripts and data copies. He unpacks why unifying data - rather than moving it - is key to real-time, secure inference, and how event-driven, Kubernetes-native pipelines are reshaping the way developers build AI applications. It's a conversation about cutting out the complexity, keeping data live, and building systems smart enough to keep up with your models.
// Bio
Andy is the Field Chief Technology Officer at VAST, helping customers build, deploy, and scale some of the world's largest and most demanding computing environments. Andy has spent the past 15 years focused on supporting and building large-scale, high-performance data platform solutions.
From humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical Ninjas at MapR, he's consistently been on the front lines solving some of the toughest challenges that customers face when implementing Big Data Analytics and next-generation AI solutions.
// Related Links
Website: www.vastdata.com
https://www.youtube.com/watch?v=HYIEgFyHaxk
https://www.youtube.com/watch?v=RyDHIMniLro
The Mom Test by Rob Fitzpatrick: https://www.momtestbook.com/
~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Andy on LinkedIn: /andypernsteiner
Timestamps:
[00:00] Prototype to production gap
[00:21] AI expectations vs reality
[03:00] Prototype vs production costs
[07:47] Technical debt awareness
[10:13] The Mom Test
[15:40] Chaos engineering
[22:25] Data messiness reflection
[26:50] Small data value
[30:53] Platform engineer mindset shift
[34:26] Gradient description comparison
[38:12] Empathy in MLOps
[45:48] Empathy in Engineering
[51:04] GPU clusters rolling updates
[1:03:14] Checkpointing strategy comparison
[1:09:44] Predictive vs Generative AI
[1:17:51] On Growth, Community, and New Directions
[1:24:21] UX of agents
[1:32:05] Wrap up
On this episode of The Six Five Pod, hosts Patrick Moorhead and Daniel Newman discuss the latest tech news stories that made headlines. This week's handpicked topics include:
OpenAI signs a 7-year, $38B cloud deal with AWS to secure massive GPU capacity and diversify beyond Microsoft. Announced Nov 3 (Pat)
https://x.com/PatrickMoorhead/status/1985358316592079054?s=20
https://x.com/PatrickMoorhead/status/1985352179444863072?s=20
U.S. to block Nvidia's scaled-down AI chips to China; Nvidia CEO says there are "no active discussions" to sell Blackwell there. Nov 7. (Dan)
https://x.com/PatrickMoorhead/status/1986380431260815544?s=20
https://x.com/PatrickMoorhead/status/1985690392239513969?s=20
Microsoft's UAE Nvidia AI chip deal: Microsoft secured U.S. government approval to ship 60,000 Nvidia AI chips to the UAE, with a broader $9.7 billion contract for AI cloud services. (Dan)
Google makes Ironwood TPUs generally available and adds new Axion Arm VMs for inference-heavy workloads. Announced Nov 6 (Pat)
https://x.com/PatrickMoorhead/status/1986854566835748959?s=20
Kimi 2 Thinking costs $4.6M (Dan)
https://x.com/PatrickMoorhead/status/1986930007554531470?s=20
Cheap MacBook (Pat)
https://x.com/PatrickMoorhead/status/1985774551318368696?s=20
Bulls & Bears
Arm Earnings
https://x.com/PatrickMoorhead/status/1986771536960008301
https://investors.arm.com/static-files/bde7f15e-4bc8-4524-a0e9-e016889b520d
https://x.com/danielnewmanUV/status/1986203347439824919
AMD Earnings
https://x.com/PatrickMoorhead/status/1985979219352908272
https://ir.amd.com/news-events/press-releases/detail/1265/amd-reports-third-quarter-2025-financial-results
https://x.com/danielnewmanUV/status/1985822986063266013
https://x.com/danielnewmanUV/status/1985835687309165063
https://x.com/danielnewmanUV/status/1985895752372289986
Qualcomm Earnings
https://x.com/PatrickMoorhead/status/1986764581293977813
https://investor.qualcomm.com/news-events/press-releases/news-details/2025/Qualcomm-Earnngs-Release-Available-on-Companys-Investor-Relations-Website-c44bca3cf/default.aspx
https://x.com/danielnewmanUV/status/1986185406744871045
Coherent Earnings
https://x.com/PatrickMoorhead/status/1986774118985794034
https://www.coherent.com/news/press-releases/first-quarter-fiscal-year-2026-results
https://x.com/danielnewmanUV/status/1986205975171190977
Lattice Semiconductor Earnings
https://x.com/PatrickMoorhead/status/1985983349916143977
https://x.com/danielnewmanUV/status/1985475655241232529
How do you build mobile games that are not only fun, but also work on every platform and sustain themselves financially? In this episode we talk about mobile gaming: from the idea through the game loop to monetization. Our guest is Fabi Fink, Game Lead at Lotum. Lotum is known for social casual and puzzle hits such as Quiz Planet and Word Blitz, has passed the mark of 1 billion installs, and technically covers the whole spectrum from web to native.
We discuss why mobile now accounts for roughly half of gaming revenue and classify hypercasual, casual, midcore, and hardcore with plenty of examples. We show what mobile means today: native apps in the App Store and Play Store, but also games as Facebook Instant Games as well as integrations for Reddit, Discord, TikTok, and Netflix. You'll learn how social loops work on these platforms, why asynchronous multiplayer is a growth lever, and what distinguishes virality from classic user acquisition.
On the technical side we go deep: why Lotum builds many titles on Vue.js and treats game UX like a highly interactive web app. We talk about performance details, GPU-friendly animations, and why plain JavaScript wins out for the Word Blitz core. In the backend it gets hands-on with WebSockets, Redis clusters, and realtime events on Google Cloud. On top of that come tools and platforms such as Nakama (an open-source backend for games) and SpacetimeDB, plus an honest cost story around Firebase.
And of course we talk money: ads vs. in-app purchases, hybrid models, ROAS over 180 days, and what really sets successful titles apart.
We share KPI reality, A/B-testing insights, why small pieces of UX copy can have big effects, and the threshold a game at Lotum should reach to be pursued further. If you want to know how modern mobile games come together, in terms of technology, product, and monetization, grab this episode.
You can find our current advertising partners at https://engineeringkiosk.dev/partners
Quick feedback on the episode:
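The backend the episode describes (realtime events pushed over WebSockets and fanned out through Redis clusters) boils down to a publish/subscribe pattern. As a rough illustration only, here is an in-process stand-in for that fan-out; the channel name and event payload are made up, and a real deployment would use Redis pub/sub plus WebSocket connections rather than local queues:

```python
import queue

class PubSub:
    """In-process stand-in for the Redis pub/sub fan-out described above."""

    def __init__(self):
        self.subscribers = {}  # channel name -> list of subscriber queues

    def subscribe(self, channel):
        q = queue.Queue()
        self.subscribers.setdefault(channel, []).append(q)
        return q

    def publish(self, channel, message):
        # every subscriber of the channel receives its own copy of the event
        for q in self.subscribers.get(channel, []):
            q.put(message)

bus = PubSub()
player_a = bus.subscribe("match:42")  # hypothetical per-match channel
player_b = bus.subscribe("match:42")
bus.publish("match:42", {"event": "word_found", "player": "a", "score": 7})
print(player_a.get_nowait(), player_b.get_nowait())  # both clients see the event
```

The point of the pattern is that the game server publishes once and every connected client of the match receives the event, which is what makes asynchronous multiplayer cheap to scale.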
Gonka is a decentralized network for high-efficiency AI compute, designed to maximize the use of global GPU power for meaningful AI workloads. By eliminating centralized gatekeepers, Gonka provides builders and researchers with permissionless access to compute resources, while rewarding participants through its native token, GNK.
David and Daniil Liberman are Los Angeles-based futurists, serial entrepreneurs, investors, ex-Snap directors of Product, founders of ProductScience.ai and Humanism Co. by Libermans Co., as well as the creators of the Gonka protocol. David recently joined the Bitcoin.com News Podcast to talk about the technology.
Inspired by Bitcoin's ability to build massive, decentralized infrastructure, Gonka aims to unite GPU owners globally to create a transformer-based Proof-of-Work (PoW) network for AI compute. Liberman discusses the inefficiencies of centralized cloud models for GPUs and the critical shift towards specialized hardware like ASICs, drawing parallels to Bitcoin mining's evolution.
Liberman explains Gonka's focus on inference over training and how a PoW-based network aligns incentives with hardware providers, fostering greater infrastructure growth compared to Proof-of-Stake networks. He also addresses current AI market valuations, suggesting that while AI's long-term impact will be immense, some company valuations might be in a bubble, similar to the dot-com era. The conversation highlights the importance of tokenomics that prioritize hardware providers and innovators to accelerate hardware efficiency.
Finally, the podcast explores the broader implications of decentralized AI for global competition and individual autonomy. Liberman argues that distributed compute systems are essential for smaller countries and startups to compete against centralized national players. He emphasizes that a PoW-based network prevents artificial inflation of value by linking coin value to the real cost of compute, leading to a healthier ecosystem.
The episode concludes with a discussion on the future of AI and work, stressing the need for billions of independent, decentralized AIs to prevent wealth and power concentration and ensure a future of abundance.
About Our Guests
David and Daniil began their entrepreneurial path with their siblings, co-founding ventures across computer graphics, finance, and AI. They founded Frank Money, pioneering radical financial transparency, and later launched Kernel AR, which was acquired by Snap the same year. David and Daniil and their siblings signed the Founders Pledge, channeling their economic output into Libermans Co., now valued at $400 million, and attracted investments from Marc Andreessen, Josh Kushner, and Arielle Zuckerberg.
To learn more about the protocol, visit Gonka.AI, and follow the team on X.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Daily News Rundown, November 11, 2025: Welcome to AI Unraveled, your daily briefing on the real-world business impact of AI.
Listen at https://podcasts.apple.com/us/podcast/ai-daily-news-rundown-openai-is-exploring-ai-tools/id1684415169?i=1000736172688
In today's edition:
Join the channel membership to unlock more exclusive perks:
** AWS re:Invent 2025 Dec 1-5, Las Vegas - Register Here! **
Learn how Anyscale's Ray platform enables companies like Instacart to supercharge their model training while Amazon saves heavily by shifting to Ray's multimodal capabilities.
Topics Include:
Ray originated at UC Berkeley when PhD students spent more time building clusters than ML models
Anyscale now launches 1 million clusters monthly with contributions from OpenAI, Uber, Google, Coinbase
Instacart achieved 10-100x increase in model training data using Ray's scaling capabilities
ML evolved from single-node Pandas/NumPy to distributed Spark, now Ray for multimodal data
Ray Core transforms simple Python functions into distributed tasks across massive compute clusters
Higher-level Ray libraries simplify data processing, model training, hyperparameter tuning, and model serving
Anyscale platform adds production features: auto-restart, logging, observability, and zone-aware scheduling
Unlike Spark's CPU-only approach, Ray handles both CPUs and GPUs for multimodal workloads
Ray enables LLM post-training and fine-tuning using reinforcement learning on enterprise data
Multi-agent systems can scale automatically with Ray Serve handling thousands of requests per second
Anyscale leverages AWS infrastructure while keeping customer data within their own VPCs
Ray supports EC2, EKS, and HyperPod with features like fractional GPU usage and auto-scaling
Participants:
Sharath Cholleti – Member of Technical Staff, Anyscale
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
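The "simple Python functions into distributed tasks" idea can be sketched without Ray itself. The snippet below uses the standard library's thread pool as a stand-in to show the same submit-futures-gather shape that Ray generalizes across a cluster (with Ray, `featurize` would be decorated with `@ray.remote`, submitted via `.remote(...)`, and gathered with `ray.get`); the function and shards here are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def featurize(shard):
    # stand-in for per-shard work; with Ray this would run as a remote task
    return sum(x * x for x in shard)

# four hypothetical data shards
shards = [list(range(i, i + 100)) for i in range(0, 400, 100)]

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(featurize, s) for s in shards]  # like featurize.remote(s)
    results = [f.result() for f in futures]                # like ray.get(futures)

print(results)
```

The caller's code stays plain Python; only the scheduling target changes, which is why the same pattern scales from a laptop pool to a cluster.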
Australia needs control over its intelligence layer, not just its data. We explore SCX's sovereign AI cloud, Project Magpie's cultural reasoning, and why inference economics and time-to-market beat hype-driven buildouts.
• sovereign AI as control and context, not just security
• SCX's inference cloud and partnership with SambaNova
• Project Magpie fine-tuning the reasoning layer for Australia
• training vs inference split to optimize cost and speed
• tokens per kilowatt as the core unit economics
• open source vs closed models in enterprise adoption
• retrofitting existing data centers with pre-assembled racks
• moving pilots to production through cost, control, and confidence
• regional strategy across Southeast Asia and exportable tokens
• agents shifting work to domain teams, doing more not just cutting costs
• candid MBA debate on value, narrative, and people skills
• playful Spark Tank on pickleball and rapid-fire personal insights
What if a nation's most critical asset isn't oil, power, or spectrum—but intelligence? We sit down with Southern Cross AI (SCX) founder David Keane, co-founder and CSO Akash Agrawal, and SambaNova's Chief Product and Strategy Officer Abhi Ingle to unpack how a sovereign AI cloud can protect context, culture, and control while still competing on cost and speed. From Australia's national needs to regional demand across Southeast Asia, we chart a pragmatic route from vision to working systems.
David explains why SCX is built around inference as a service and how Project Magpie fine-tunes the reasoning layer so models "think like an Australian," reflecting local law, language, and norms. Abhi breaks down training vs inference in plain English, clarifying why pretraining might live on massive GPU clusters while high-throughput, energy-efficient inference thrives on SambaNova's ASIC-based systems.
Akash digs into enterprise realities—data sovereignty, runaway costs, and integration roadblocks—and makes the case for open source models you can fork, fine-tune, and operate within your perimeter.
We get practical about tokens per kilowatt as the new ROI, pre-assembled racks that drop into existing data centers, and managed services that cut time-to-market from years to months. We explore why most buyers don't care which chip is under the hood—they care about latency, reliability, and price—and how that shifts competition from hardware logos to delivered outcomes.
Go to SCX.ai to experience the future of sovereign AI. Remember, in order to win the "$1,000 token credit" you'll have to explain what a magpie is in the comments, and the team at SCX will judge the winner!
David Keane - https://www.linkedin.com/in/dakeane/
David serves as the Founder & CEO of SouthernCrossAI (SCX.ai), an Inference-as-a-Service platform dedicated to establishing sovereign, scalable, and cost-efficient AI infrastructure tailored for Australian requirements.
Akash Agarwal - https://www.linkedin.com/in/aagarwal/
Currently, Akash serves as the Chief Strategy Officer and Co-Founder of SouthernCrossAI (SCX.ai).
Abhi Ingle - https://www.linkedin.com/in/ingle-abhi/
Currently, Abhi serves as the Chief Product & Strategy Officer (CPSO) at SambaNova.
Website: https://www.position2.com/podcast/
Rajiv Parikh: https://www.linkedin.com/in/rajivparikh/
Sandeep Parikh: https://www.instagram.com/sandeepparikh/
Email us with any feedback for the show: sparkofages.podcast@position2.com
Nvidia's $5 trillion valuation is more than a headline — it's a turning point. Every major tech breakthrough now runs on its architecture. The world has officially entered the GPU era.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
Discover how biotech and healthcare teams are fast-tracking research and development through AI and high-performance cloud infrastructure. Ross Katz sits down with Hugo Shi of Saturn Cloud and Ilya Burkov of Nebius to explore scalable, secure solutions for GPU-heavy AI workloads. From compliance to cost savings, this episode unpacks what it takes to innovate at scale in life sciences.
What You'll Learn in This Episode:
>> Why AI workloads in biotech demand specialized infrastructure
>> How Saturn Cloud and Nebius simplify compliance, scale, and security for life sciences
>> Real-world examples of gene editing, RNA sequencing, and medical imaging powered by cloud AI
>> The trade-offs between hyperscalers and NeoClouds for GPU availability and cost
>> Strategies for deploying, optimizing, and managing large-scale AI models
Meet Our Guests
Hugo Shi is the CTO and Founder of Saturn Cloud and a co-founder of Anaconda. He brings deep expertise in data science, AI infrastructure, and open-source development, helping teams scale complex workloads with minimal friction.
Ilya Burkov is Global Head of Healthcare & Life Sciences Growth at Nebius. With a background in medicine and cloud technology, Ilya leads strategy and partnerships to empower biotech teams with secure, high-performance compute solutions.
About The Host
Ross Katz is Principal and Data Science Lead at CorrDyn. Ross specializes in building intelligent data systems that empower biotech and healthcare organizations to extract insights and drive innovation.
Connect with Our Guests:
Sponsor: CorrDyn, a data consultancy
Connect with Hugo Shi on LinkedIn
Connect with Ilya Burkov on LinkedIn
Connect with Us:
Follow the podcast for more insightful discussions on the latest in biotech and data science. Subscribe and leave a review if you enjoyed this episode! Connect with Ross Katz on LinkedIn
Sponsored by…
This episode is brought to you by CorrDyn, the leader in data-driven solutions for biotech and healthcare.
Discover how CorrDyn is helping organizations turn data into breakthroughs at CorrDyn.
In today's Cloud Wars Live, Mahesh Thiagarajan, EVP, Oracle Cloud Infrastructure, speaks with Bob Evans about Oracle's bold strategy to lead in the AI infrastructure race. He details how Oracle is scaling zeta-level compute, launching a 1.5 gigawatt GPU campus, and engineering full-stack solutions that combine bare-metal hardware, custom networking, and advanced software. With OCI's rapid innovation and massive scale, Oracle is positioning itself as a serious challenger to cloud incumbents like AWS, Microsoft, and Google Cloud.
Scaling AI at Oracle
The Big Themes:
Enterprise Data Continuity and Cloud Strategy: Enterprises rely on mission-critical data, such as databases, and migrating that data to the cloud remains a major strategic priority. The challenge isn't simply moving data: it's building a cloud platform that delivers real value to customers. As Thiagarajan and his team began developing Oracle Cloud Infrastructure to support these needs, they focused on core fundamentals: performance, cost efficiency, and security. This illustrates that for today's cloud providers, success isn't just about innovative features, but about engineering deep, resilient infrastructure.
Customer-First Execution: Thiagarajan repeatedly states there is no perfect playbook. The approach: wake up every day, talk to partners, figure out what customers need, and execute. This mindset emphasises responsiveness and pragmatism. Given the rapid pace of change in cloud and AI, large providers cannot wait for general frameworks to emerge. They must iterate, partner, and build in real time.
"Late" As An Advantage: Thiagarajan observes that arriving in cloud later gave Oracle the ability to learn from first movers' mistakes and benefit from newer hardware generations without legacy baggage. While first movers often carry large legacy systems, later entrants can design for new architectures (bare-metal, custom networking) from the ground up.
That doesn't guarantee success, but presents an advantage if leveraged.
The Big Quote: "You earn trust with [partners] by getting their products out to market fast into the hands of the customers, because that really translates to them, the end customer, being happy."
More from Mahesh Thiagarajan and Oracle: Connect with Mahesh Thiagarajan on LinkedIn or take a look at his Oracle blog posts. Visit Cloud Wars for more.
In this episode, Ben sits down with David Choi, Co-Founder of USD.AI. USD.AI is an emblematic project for crypto's promise in the real world. The protocol leverages the strengths of DeFi to solve a bottleneck and pain point in the rabid AI infrastructure build-out. By facilitating GPU-backed loans, USD.AI unlocks capital efficiency for companies scaling AI while providing real yield to on-chain lenders. David and the USD.AI team have caught lightning in a bottle, and we're thrilled to share their story with the Scenius Studio audience.
Disclaimer: Ben Jacobs is a partner at Scenius Capital Management. All views expressed by Ben and the guests of this podcast are solely their opinions and do not reflect the opinions of Scenius Capital Management. Guests and the host may maintain positions in the assets or funds discussed in this podcast. You should not treat any opinion expressed by anyone on this podcast as a specific inducement to make a particular investment or follow a particular strategy, but only as an expression of their personal opinion. This podcast is for informational purposes only.
Nvidia's name now stands beside the greatest in history. Its innovations in GPU design have made it the heartbeat of AI. Investors see no limit to how far it can go.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
In the first of a two-part roundtable discussion, our Global Head of Research joins our Global Head of Thematic Research and Head of Firmwide AI to discuss the economic and labor impacts of AI adoption.
Read more insights from Morgan Stanley.
----- Transcript -----
Kathryn Huberty: Welcome to Thoughts on the Market. I'm Katy Huberty, Morgan Stanley's Global Head of Research, and I'm joined by Stephen Byrd, Global Head of Thematic Research, and Jeff McMillan, Morgan Stanley's Head of Firm-wide AI. Today and tomorrow, we have a special two-part episode on the number one question everyone is asking us: What does the future of work look like as we scale AI? It's Tuesday, November 4th at 10am in New York.
I wanted to talk to you both because Stephen, your groundbreaking work provides a foundation for thinking through labor and economic impacts of implementing AI across industries. And Jeff, you're leading Morgan Stanley's efforts to implement AI across our more than 80,000 employee firm, requiring critical change management to unlock the full value of this technology. Let's start big picture and look at this from the industry level. And then tomorrow we'll dig into how AI is changing the nature of work for individuals.
Stephen, one of the big questions in the news – and from investors – is the size of the AI adoption opportunity in terms of earnings potential for S&P 500 companies and the economy as a whole. What's the headline takeaway from your analysis?
Stephen Byrd: Yeah, this is the most popular topic with my children when we talk about the work that I do. And the impacts are so broad. So, let's start with the headline numbers. We did a deep dive into the S&P 500 in terms of AI adoption benefits. The net benefits, based on where the technology is now, would be a little over $900 billion. And that can translate to well over 20 percent increased earnings power that could generate over $13 trillion of market cap upon adoption.
And importantly, that's where the technology is now. So, what's so interesting to me is the technology is evolving very, very quickly. We've been writing a lot about the nonlinear rate of improvement of AI. And what's especially exciting right now is a number of the big American labs, the well-known companies developing these LLMs, are now gathering about 10 times the computational power to train their next model. If scaling laws hold, that would result in models that are about twice as capable as they are today. So, I think 2026 is going to be a big year in terms of thinking about where we're headed in terms of adoption. So, it's frankly challenging to basically take a snapshot because the picture is moving so quickly.
Kathryn Huberty: Stephen, you referenced the fast pace of change and the daily news flow. What's the view of the timeline here? Are we measuring progress at the industry level in months, in years?
Stephen Byrd: It's definitely in years. It's fast and slow. Slow in the sense that, you know, it's taken some companies a little while now, and some over a year, to really prepare. But now what we're seeing in our CIO survey is many companies are now moving into the first, I'd say, full-fledged adoption of AI, when you can start to really see this in numbers. So, it sort of starts with a trickle, but then in 2026, it really turns into something much, much bigger. And then I go back to this point about non-linear improvement. So, what look like areas where AI cannot perform a task will, six months from now, look very different. And I think – I'm a former lawyer myself. In the field of law, for example, this has changed so quickly as to what AI can actually do. So, what I expect is it starts slow, and then suddenly we look at a wide variety of tasks and AI is fairly suddenly able to do a lot more than we expect.
Kathryn Huberty: Which industries are likely to be most impacted by the shift?
And when you broke down the analysis to the industry and job level, what were some of the surprises?
Stephen Byrd: I thought what we would see would be fairly high-tech oriented sectors – including our own – at the top of the list. What I found was very different. So, think instead of sectors where there's fairly low profit per employee, often low-margin businesses, very labor-intensive businesses. A number of areas in healthcare and staples came to the top. A few real estate management businesses. So, very different than I expected. The very high-tech sectors actually had some of the lowest numbers, simply because those companies in high tech tend to have extremely high profit per employee. So, the impact is a lot less. So that was a surprising learning. A lot of clients have been digging into that.
Kathryn Huberty: I could see why that would've surprised you. But let's focus on banking for a moment since we have the expert here. Jeff, what are some of the most exciting AI use cases in banking right now?
Jeff McMillan: You know, I would start with software development, which was probably the first Gen AI use case out of the gate. And not only was it first, but it continues to be the most rapidly advancing. And that's probably mostly a function of the software, you know, development community. I mean, these are developers that are constantly fiddling and making the technology better. But productivity continues to advance at a linear pace. You know, we have over 20,000 folks here at Morgan Stanley. That's 25 percent of our population. And, you know, the impact both in terms of the size of that population and the efficiencies are really, really significant.
So, I would start there. And then, you know, once you start moving past that, it may not seem, you know, sexy. It's really powerful around things like document processing. Financial services firms move massive amounts of paper. We take paper in, whether it be an account opening, whether it be a contract.
Somebody reads that information, they reason about it, and then they type that information into a system. AI is really purpose-built for that. And then finally, just document generation. I mean, the number of presentations, portfolio reviews, you know, even in your world, Katy, research reports that we create. Once again, AI is really just – it's right down the middle in terms of its ability to generate content and help people reduce the time and effort to do that.
Kathryn Huberty: There's a lot of excitement around AI, but as Stephen mentioned, it's not a linear path. What are the biggest challenges, Jeff, to AI adoption for a big global enterprise like Morgan Stanley? What keeps you up at night?
Jeff McMillan: I've often made the analogy that we own a Ferrari and we're driving it around in circles in a parking lot. And what I mean by that is that the technology has so far advanced beyond our own capacity to leverage it. And the biggest issue is our own capacity and awareness and education. So, what keeps me up at night? It's the firm's understanding. It's each person's and each leader's ability to understand what this technology can do. Candidly, it's the basics of prompting. We spend a lot of time here at the firm just teaching people how to prompt, understanding how to speak to the machine, because until you know how to do that, you don't really understand the art of the possible. I tell people, if you have [$]100 to spend, you should start by spending [$]90 on educating your employee base. Because until you do that, you cannot effectively get the best out of the technology.
Kathryn Huberty: And as we look out to 2026, what AI trends are you watching closely, and how are we preparing the firm to take advantage of them?
Jeff McMillan: You and I were just out in Silicon Valley a couple of weeks ago, and seemingly overnight, every firm has become an agentic one.
While much of that is aspirational, I think it's actually going to be, in the long term, a true narrative, right? And I think the step where we are right now is really about experimentation, right? I think we have to learn which tools work, what new governance processes we need to put in place, where the lines are drawn. I think we're still in the early stage, but we're leaning in really hard. We've got about 20 use cases that we're experimenting with right now. As things settle down and the vendor landscape really starts to pan out, we'll be well positioned to fully take advantage of that.
Kathryn Huberty: A key element of agentic solutions is linking to the data, the tools, the applications that we use every day in our workflow. And that ecosystem is developing, and it feels like we're now on the cusp of those agentic workflow applications taking hold.
Stephen Byrd: So, Katy, I want to jump in here and ask you a question too. With your own background as an IT hardware analyst, how does the AI era compare to past tech or computing cycles? And what sort of lessons from those cycles shape your view of the opportunities and challenges ahead?
Kathryn Huberty: The other big question in the market right now is whether an AI bubble is forming. You hear that in the press. It's one of the questions all three of us are hearing regularly from clients. And implicit in that question is a view that this doesn't look like past cycles, past trends. And I just don't believe that to be the case. We actually see the development of AI following a very similar path. If you go back to mainframe and then minicomputer, the PC, internet, mobile, cloud, and now AI – each compute cycle is roughly 10 times larger in terms of the amount of installed compute. The reality is we've gone from millions to billions to trillions, and so it feels very different. But the reality is we have a trillion dollars of installed CPU compute, and that means we likely need $10 trillion of installed GPU compute.
And so, we are following the same pattern. Yes, the numbers are bigger because we keep 10x-ing, but the pattern is the same. And so again, that tells us we're in the early innings. You know, we're still at the point of the semiconductor technology shipping out into infrastructure. The applications will come. The other pattern from past cycles is that exponential growth is really difficult for humans to model. So, I think back to the early days when Morgan Stanley's technology team was really bullish, laying the groundwork for the PC era, the internet era, the mobile era. When we go back and look at our forecasts, we always underestimated the potential. And so that would suggest that what we've seen with the upward earnings revisions for the AI enablers, and soon the AI adopters, is likely to continue. And so, I see many patterns that thread across computing cycles, and I would just encourage investors to realize that AI so far is following similar patterns.
Jeff McMillan: Katy, you make the point that much of the playbook is the same. But is there anything fundamentally different about the AI cycle that investors should be thinking about?
Kathryn Huberty: The breadth of impact to industries and corporates, which speaks to Stephen's work. We have now, four times over, mapped the 3,700 companies globally that Morgan Stanley research covers to understand their role in this theme. Are they enabling AI? Are they adopting it? Are they disrupted by it? How important is it to the thesis? Do they have pricing power? It's very valuable data to go and capture the alpha. But I was looking at that dataset recently, and for a third of those nearly 4,000 companies we cover, our analysts are saying that AI has an impact on the investment thesis. A third. And yet we're still in the early innings.
And so, what may be different, and what may make the impact much bigger and broader, is just the sheer number of corporations that will be impacted by the theme.
Let's pause here and pick up tomorrow with more on workforce transformation and the impact on individual workers.
Thank you to our listeners. Please join us tomorrow for part two of our conversation. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.
The equipment that fills data centers is evolving rapidly, driven by the need to fulfill the seemingly insatiable appetite of AI applications. The Open Compute Project (OCP) was founded by Meta/Facebook to promulgate equipment standards, and its annual Summit has grown from a small specialized gathering to an event that strains the capacity of the San Jose Convention Center. Senior research analyst Perkins Liu returns to offer his take on this meteoric growth with host Eric Hanselman. AI requirements are pushing ever greater scale, both logically and physically, with the width of server racks doubling in the Open Rack Wide (ORW) specification to support greater density and better serviceability. The OCP Foundation is also working on silicon interoperability and is setting specifications for chiplet integration. Liquid cooling has moved from a nice-to-have feature to a required capability as a means to dissipate the huge amount of energy drawn by ever denser GPU arrays. Energy delivery is changing with the advent of higher voltage DC power. The early OCP efforts on 48 volt DC are paling in the face of new 800 volt designs. The OCP Foundation is also expanding its mission to include education, with the establishment of the OCP Academy. It aims to raise workforce skills in open hardware and will offer online training in data center technologies. That underscores not only the expansion of the OCP Foundation's mission, but also the increasing scale of the ecosystem that supports data center environments, and the complexity and interdependency that AI creates.
More S&P Global Content:
Sustainability continues to drive datacenter infrastructure evolution
Webinar: Talk to the Expert - Artificial intelligence, datacenters and energy: Is APAC ready for th…
For S&P Global subscribers:
Air cooling remains prevalent, but liquid cooling is gaining momentum – Highlights from VotE: Datac…
Adjusted definitions of datacenter markets in China align with socioeconomic processes
Datacenters increasingly use direct current to cope with AI workloads
Credits:
Host/Author: Eric Hanselman
Guest: Perkins Liu
Producer/Editor: Feranmi Adeoshun
Published With Assistance From: Sophie Carr, Kyra Smith
At this year's Data Center Frontier Trends Summit, Honghai Song, founder of Canyon Magnet Energy, presented his company's breakthrough superconducting magnet technology during the “6 Moonshot Trends for the 2026 Data Center Frontier” panel—showcasing how high-temperature superconductors (HTS) could reshape both fusion energy and AI data-center power systems. In this episode of the Data Center Frontier Show, Editor in Chief Matt Vincent speaks with Song about how Canyon Magnet Energy—founded in 2023 and based in New Jersey and Stony Brook University—is bridging fusion research and AI infrastructure through next-generation magnet and energy-storage technology. Song explains how HTS magnets, made from REBCO (Rare Earth Barium Copper Oxide), operate at 77 Kelvin with zero electrical resistance, opening the door to new kinds of super-efficient power transmission, storage, and distribution. The company's SMASH (Superconducting Magnetic Storage Hybrid) system is designed to deliver instant bursts of energy—within milliseconds—to stabilize GPU-driven AI workloads that traditional batteries and grids can't respond to fast enough. Canyon Magnet Energy is currently developing small-scale demonstration projects pairing SMES systems with AI racks, exploring integration with DC power architectures and liquid-cooling infrastructure. The long-term roadmap envisions multi-mile superconducting DC lines connecting renewables to data centers—and ultimately, fusion power plants providing virtually unlimited clean energy. Supported by an NG Accelerate grant from New Jersey, the company is now seeking data-center partners and investors to bring these technologies from the lab into the field.
Part 1 [Techonomy] Jensen Huang's GPU gift… will it change the AI landscape? - Kim Deok-jin, head of the IT Communication Research Institute
Part 2 [Teacher Kim's Money Tales] How we became the "delivery nation" (Baedal Minjok) - Teacher Jaewon
AI infrastructure spending is rapidly increasing, with major technology companies such as Google, Microsoft, and Amazon projected to invest over $300 billion by the end of 2025. This surge is primarily driven by the demand for data centers and GPU capacity to support artificial intelligence initiatives. While the Federal Reserve indicates that current investments differ from the dot-com boom due to stronger earnings among leading firms, analysts caution that excessive debt-fueled expansion could pose risks if AI does not deliver the anticipated returns. The insurance sector is particularly optimistic, with 67% of CEOs expecting returns on AI investments within three years, a significant increase from 20% the previous year.
Alphabet reported a quarterly revenue of $102.35 billion, exceeding expectations, with a notable 34% growth in cloud revenue. Amazon's cloud division, AWS, also showed resilience, generating $33 billion in net sales despite a global outage. Microsoft reported $49 billion in cloud revenue, with Azure experiencing 40% year-over-year growth. However, Microsoft faced challenges with capacity shortages, which could limit revenue potential. Additionally, OpenAI reported a net loss of $11.5 billion, impacting Microsoft's financials due to its significant investment in the AI company.
In the small business sector, sales transactions increased by 8% in the third quarter of 2025, although owner confidence has declined due to inflation and rising operational costs. Many small business owners are motivated to sell before conditions worsen, with a notable number of buyers identified as corporate refugees seeking stability in essential service sectors. This trend presents an opportunity for IT providers to offer tailored technology solutions to these new business owners, who may lack expertise in setting up necessary systems.
For Managed Service Providers (MSPs) and IT service leaders, the current environment presents both challenges and opportunities.
As AI drives infrastructure upgrades, providers must help clients navigate the complexities of AI investments, ensuring they understand when AI initiatives are financially viable versus when they may be overhyped. Additionally, the increase in small business sales indicates a demand for reliable technology setups, providing MSPs with a chance to offer essential services that facilitate smoother transitions for new business owners.
Three things to know today
00:00 AI Spending Soars Past $300 Billion as Cloud Titans Post Record Earnings and Mounting Risks
06:15 AI Gold Rush: Big Tech's Billion-Dollar Bet Fuels Cloud Expansion, CEO Optimism, and Debt Warnings
10:26 Small Business Sales Surge as Inflation Saps Confidence and Corporate Refugees Step In
This is the Business of Tech. Supported by: https://scalepad.com/dave/ and https://getflexpoint.com/msp-radio/
Part 2 [Issue Interview 1] 260,000 GPUs: how will the AI market change? - Lim Moon-young, Standing Vice Chairman of the National AI Strategy Committee
[6-Minute Focus] I trusted a business broadcast and bought land… - Author Yoo Seung-min
We are back with episode 188! Eric, Zac and Deric share their latest building blunders and biggest pet peeves, and judge your house.
Eric shares the dramatic tale of his oversized, custom-built closet cabinets. Due to the unexpected density of his plywood, the cabinets were nearly 300 pounds. In a wild twist of measuring tape versus reality, he had to whip out his multi-tool and literally cut a hole in the ceiling just to get the final piece wedged into the closet!
Zac dives back into his custom desk PC build, detailing his struggle with a new, wider graphics card creating clearance issues in his drawer. Can he find specialty low-profile drawer slides to save the project and avoid taking a belt sander to a $2,500 GPU?
We get down and dirty discussing the things that stand out and kinda bug us when visiting someone else's home. Don't worry, we would never say anything to your face, just our podcast audience.
Got a question that you want us to answer? Send us an email at offthecutpodcast@gmail.com
-------------------------
Aftershow
Get access to the aftershow and unlock tons of cool perks over on Patreon - https://www.patreon.com/offthecutpodcast
-------------------------
Hang Out with Us
Watch the live stream of the podcast on YouTube! https://www.youtube.com/channel/UCcRJPIp6OaffQtvCZ2AtWWQ
-------------------------
Pick Up Some Merch!
Off The Cut Podcast - https://www.spencleydesignco.com
-------------------------
Follow Zac
Instagram - https://www.instagram.com/zacbuilds
YouTube - https://www.youtube.com/c/@ZacBuilds
TikTok - https://www.tiktok.com/@zacbuilds
-------------------------
Follow Eric
Instagram - https://www.instagram.com/spencleydesignco
YouTube - https://youtube.com/@spencleydesignco
TikTok - https://www.tiktok.com/@spencleydesignco
-------------------------
Follow Deric
Instagram/YouTube/TikTok @PecanTreeDesign
https://linktr.ee/pecantreedesign
---------------------------
This episode is proudly sponsored by:
KM Tools - Check out everything they have to offer at
kmtools.com/SPENCLEYDESIGNCO
WTB Woodworking - Check out the giveaway over at: https://www.wtbwoodworking.com/giveaway
Gorilla Glue - Built By You; Backed By Gorilla - www.gorillatough.com
Interested in starting your own podcast? Check out Streamyard: https://streamyard.com/pal/c/5926541443858432
#Woodworking #DIY #3DPrinting #Maker #ContentCreation #YouTuber #OffTheCutPodcast #Sponsored #KMTools #WTBWoodworking #GorillaGlue
Parts 1-2 [News New World] 1. Nvidia to release 260,000 GPUs to Korea… an "AI alliance" with Samsung, SK, and Hyundai Motor 2. Presidential office: "Tomorrow's Korea-China summit will discuss denuclearization and peace on the Korean Peninsula" 3. President Lee: "Korea and Japan can neither deny that we are neighbors nor let go of cooperation" - Hyun Young-jun, MBC reporter (guest) [Issue High Kick 1] Prosecution reform, special counsels, parliamentary audits… where should checks on power head? - Rep. Park Eun-jung, Rebuilding Korea Party (guest)
Graphics processing units are essential to training and deploying artificial intelligence models, but they don't come cheap. Big Tech companies like Meta, Microsoft and xAI have spent billions, amassing hundreds of thousands or even millions of them. For those without such deep pockets, access to this kind of computing power has gotten out of reach. Recently, the state of California launched an initiative called CalCompute to look into building its own public GPU cluster for startups and non-profit researchers to use. There are similar public compute pilots in New York state and at the federal level. Marketplace's Meghan McCarty Carino tells us more.
In this conversation from a16z's Runtime conference, Gavin Baker, Managing Partner and CIO of Atreides Management, joins David George, General Partner at a16z, to unpack the macro view of AI: the trillion-dollar data center buildout, the new economics of GPUs, and what this boom means for investors, founders, and the global economy.
Resources:
Follow Gavin on X: https://x.com/GavinSBaker
Follow Atreides Management on X: https://x.com/atreidesmgmt
Follow David on X: https://x.com/DavidGeorge83
Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company.
See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Welcome to another exciting episode of Data Driven! On this week's show, hosts Frank La Vigne and Candace Gillhoolley take you inside the NVIDIA GTC conference in Washington, DC—an event that's rapidly evolved from a hardware showcase into a crossroads for AI, robotics, and quantum computing innovation. Frank shares his firsthand experience navigating the expo floor, offering a glimpse into the latest cutting-edge tech, from GPU supercomputers to quantum networking breakthroughs and swarms of robots. Candace and Frank discuss the growing intersections between fields like quantum biology and AI, and share stories about the curious mix of attendees—from government officials and policymakers to technologists, students, and even a few cosplay “Jensen Huangs.”
Whether you're a data enthusiast, a future-focused technologist, or simply quantum curious, this episode dives into the national security implications of AI, the importance of lifelong learning in tech, and how the rise of quantum and robotics will disrupt careers and industries alike.
Tune in for insider anecdotes, expert encounters, and a dose of practical wisdom about adapting in the world of emerging technologies—the future is here, and it's happening faster than ever.
Links
Frank gets a shout out from Pluralsight - https://www.pluralsight.com/resources/blog/upskilling/frank-lavigne-customer-story
Jensen Huang's GTC DC Keynote - https://www.youtube.com/watch?v=lQHK61IDFH4
Mariya & Python Simplified - https://www.youtube.com/@pythonsimplified
Time Stamps
00:00 "Nvidia GTC Highlights and Expo"
03:36 "Quantum, AI, and Innovation Highlights"
07:18 Tech and Government Amid Furlough
10:24 "Tech Components in Booz Allen Vehicle"
14:37 Meeting Maria Shah
19:11 Career Shifts and Evolutions
23:06 From Books to Tech Publishing
24:44 Quantum Insights with Researcher
29:01 "Nvidia: Emerging Defense Contractor"
32:25 Tech Innovations: AI, Quantum, Robotics
36:44 "Live Streaming Quantum & Tech"
37:40 "High-Speed Quantum Interconnects"
41:01 Technical Translation for Accessibility
44:19 "High School, Future, Nvidia Innovation"
49:28 "Guest's Learning Experience"
50:45 "Quantum Business Strategy"
53:36 British AI Outro Stream
Welcome to a special crossover episode of Impact Quantum, where we dive straight into the action from the floor of Nvidia GTC in Washington, D.C.! In this episode, hosts Frank La Vigne and Candace Gillhoolley take you behind the scenes of one of the tech world's most exciting conferences—from AI-driven robots and GPU-powered supercomputers to a surprising amount of quantum computing buzz.
Join us as Frank shares his firsthand experiences, including the latest hardware reveals, government involvement, unique swag, and even the cosplay antics of attendees impersonating Nvidia's CEO. Candace and Frank also unpack the growing intersections of AI, quantum computing, robotics, and national security, while highlighting the importance of adaptability and lifelong learning in the face of rapid technological change.
Whether you're a seasoned technologist, a quantum curious newcomer, or just here for the epic robot sightings, this episode is packed with insights, laughs, and actionable advice for navigating a future shaped by emerging technologies.
Hit play and get ready for a front-row look at the innovations, opportunities, and human stories fueling the data-driven quantum revolution!
Links
Frank gets a shout out from Pluralsight – https://www.pluralsight.com/resources/blog/upskilling/frank-lavigne-customer-story
Jensen Huang's GTC DC Keynote – https://www.youtube.com/watch?v=lQHK61IDFH4
Mariya & Python Simplified – https://www.youtube.com/@pythonsimplified
Time Stamps
00:00 Nvidia GTC Highlights
03:36 "AI, Quantum, and Innovation"
09:15 "Robots, Shorts, and Streaming Woes"
11:43 Digital Twins Attract Buzz
13:56 DGX Bars: The Hot Giveaway
19:10 Career Shifts in Tech
23:05 From Publishing to Tech Transition
24:43 "Quantum, Satellites, and Lasers"
29:01 "Nvidia as Defense Contractor"
32:25 Tech Innovations Powering the Future
34:59 AI & Quantum Computing Insights
37:37 "Quantum Networking Revolution Explained"
42:18 "Unique Tech-Security Conference Highlights"
46:05 "Re-recording for Authenticity"
49:26 "Learning from a System Launch"
50:33 "Quantum Impact for Innovators"
In this episode, we take a deep dive into Ghostty, the terminal emulator that is generating excitement in the Linux and open-source community. If you're looking for a terminal that combines speed and functionality without compromise, Ghostty is the answer.
I explain how Ghostty manages to be ULTRA-FAST using GPU acceleration, and why its native design makes it feel so well integrated into your Linux desktop (GTK4 in the case of Linux).
Key points we cover:
The importance of GPU rendering and how it improves performance on Linux.
Advanced productivity features: native management of tabs, windows, and split panes.
The innovative Kitty graphics protocol, which lets you view images directly in the terminal.
How straightforward configuration and support for ligatures and automatic themes boost your workflow.
Ghostty is a solid, modern alternative to Kitty or Alacritty, offering speed and a set of advanced features that position it as one of the terminal emulators of the future.
More information and links in the episode notes.
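To give a feel for that straightforward configuration, here is a minimal sketch of what a Ghostty config file can look like: plain key = value lines, typically kept at ~/.config/ghostty/config. Treat the specific option values and the theme name below as illustrative examples rather than a reference; check the official documentation for the authoritative option list.

```ini
# Minimal example Ghostty configuration (values are illustrative)
# Pick any monospace font installed on your system
font-family = JetBrains Mono
font-size = 12
# Ghostty ships a large set of built-in color themes
theme = catppuccin-mocha
# Slight transparency for the terminal background
background-opacity = 0.95
```

Changes take effect without editing scripts or restarting your whole session, which is a big part of why Ghostty's configuration story feels so lightweight compared with heavier terminal setups.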
Recorded live at Sequoia's Europe100 event: Michael Kagan, co-founder of Mellanox and CTO of Nvidia, explains how the $7 billion Mellanox acquisition helped transform Nvidia from a chip company into the architect of AI infrastructure. Kagan breaks down the technical challenges of scaling from single GPUs to 100K and eventually million-GPU data centers. He reveals why network performance—not just compute power—determines AI system efficiency. He discusses the shift from training to inference workloads, and his vision for AI as humanity's "spaceship of the mind," and why he thinks AI may help us discover laws of physics we haven't yet imagined. Hosted by Sonya Huang and Pat Grady
We analyze the first tests of Apple's new M5 devices, highlighting their significant jump in power, especially in GPU and local AI tasks, where we note that apps like DrawThings run twice as fast. We comment that the MacBook Pro's gaming performance is comparable to an Nvidia 4050, which is remarkable for a laptop that doesn't get hot. We explain that, while it isn't an essential upgrade for M3 or M4 users, M1 owners might consider switching given the doubling in performance. We also review the new iPad Pro with M5, praising its display, the best in Apple's lineup, and its Wi-Fi 7 support, though we note that it heats up considerably under heavy generative AI loads.
We dig into the Vision Pro's updates with the M5 chip, noting improvements in text sharpness and passthrough quality, which gives us a bit of FOMO (well, Ángel at least), even if it doesn't justify the steep upgrade price.
We analyze the Vision Pro price cuts in Europe and the UK, suggesting that euro and pound exchange rates against the dollar may be a bigger factor than the removal of the charger from other products. We also discuss the surprising news that the M5 Vision Pro is being manufactured in Vietnam, reading the move as part of Apple's strategy to diversify production outside China, despite the logistical and political complexities involved.
We examine the worrying state of iPhone Air sales, with rumors of major production cuts, and speculate about whether consumer preference for more cameras, or the better value of the regular iPhone 17, is hurting demand.
We compare this trend with the possible cancellation of Samsung's "Edge" model, suggesting a general market aversion to very thin phones with fewer features. Finally, we discuss Apple's internal challenges with the development of the next version of Siri, which will incorporate ChatGPT-style capabilities. We argue that Apple's pursuit of "perfection" and its fear of a reputational failure are delaying a launch that, in our view, should happen soon, accepting that all AI models have their limitations.
Probamos el iPad Pro M5: más potencia, mismo iPad Pro - Gadgets
Probamos el MacBook Pro M5: centrado en IA - Creadores
Jon Prosser misses deadline, Apple's lawsuit to move ahead - 9to5Mac
Apple Slated To Launch Its Clamshell Foldable iPhone In 2028, Bringing Back The Flip Era With A Modern Twist
M5 Apple Silicon: It's All About the Cache And Tensors - Creative Strategies
Apple instalará 100 cámaras ante la Juve como prueba de su proyecto Bernabéu Infinito
Apple slashes iPhone Air production plans, boosts other 17 models: sources - Nikkei Asia
Samsung Galaxy XR - Wikipedia
Send us a text
Security gets sharper when we stop treating AI like magic and start treating it like an untrusted user. We sit down with Eric Galinkin to unpack the real-world ways red teams and defenders are using language models today, where they fall apart, and how to build guardrails that hold up under pressure. From MCP servers that look a lot like ordinary APIs to the messy truths of model hallucination, this conversation trades buzzwords for practical patterns you can apply right now.
Eric shares takeaways from Offensive AI Con: how models help triage code and surface likely bug classes, why decomposed workflows beat "find all vulns" prompts, and what happens when toy benchmarks meet stubborn, real binaries. We explore reinforcement learning environments as a scalable way to train security behaviors without leaking sensitive data, and we grapple with the uncomfortable reality that jailbreaks aren't going away—so output validation, sandboxing, and principled boundaries must do the heavy lifting.
We also dig into Garak, the open-source security scanner that targets LLM-integrated apps where it hurts: prompted cross-site scripting, template injection in Jinja, and OS command execution. By mapping findings to CWE, Garak turns vague model "misbehavior" into concrete fixes tied to known controls. Along the way, we compare GPT, Claude, and Grok, talk through verification habits to counter confident nonsense, and zoom out on careers: cultivate niche depth, stay broadly literate, and keep your skepticism calibrated. If you've ever wondered how to harness AI without handing it the keys to prod, this one's for you.
Enjoyed the episode? Follow, share with a teammate, and leave a quick review so more builders and defenders can find the show.
Inspiring Tech Leaders - The Technology Podcast
Interviews with Tech Leaders and insights on the latest emerging technology trends.
Listen on: Apple Podcasts, Spotify
Support the show
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128 YouTube: https://www.youtube.com/@securityunfilteredpodcast Instagram: https://www.instagram.com/secunfpodcast/Twitter: https://twitter.com/SecUnfPodcast Affiliates➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh➡️ OffGrid Coupon Code: JOE➡️ Unplugged Phone: https://unplugged.com/Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.
In this episode of Web3 with Sam Kamani, we dive deep into the future of AI and decentralized compute with Gaurav from io.net. From building in Linux file systems to scaling GPU infrastructure for global AI workloads, Gaurav shares what it takes to create a high-impact product in today's Web3 x AI gold rush.
We explore the vision behind io.net: making AI compute more accessible and affordable by decentralizing infrastructure. Gaurav Sharma also opens up about the real challenges of scaling, how their community drives product evolution, and what founders often get wrong when launching AI startups.
If you're a founder, developer, or investor in AI or Web3, this episode is packed with practical wisdom and behind-the-scenes insights from someone who's building at the bleeding edge.
Key Learnings + Timestamps
[00:01:00] Gaurav's journey from Linux systems to Web3 and AI
[00:03:30] The problem with monopolies in AI compute pricing
[00:05:00] io.net's vision: Giving power back to builders
[00:07:00] Decentralized compute vs. traditional hyperscalers
[00:08:30] Two types of users io.net serves (technical & abstracted)
[00:10:30] Why agents will thrive in Web3 and how AI agents will need crypto
[00:12:00] Real-world use cases and who is already using io.net
[00:14:30] Challenges of building decentralized infra with real utility
[00:17:00] What most people misunderstand about building AI products
[00:18:00] Why the compute demand will keep growing – the flywheel effect
[00:21:30] What Gaurav would do differently if starting io.net today
[00:24:00] How they ensure GPU quality across a decentralized network
[00:27:00] Advice for new founders: Start with utility, not just narrative
[00:30:00] io.net's 6-month roadmap and product ecosystem vision
[00:32:00] Call for collaborators, data engineers, and AI sales talent
Connect
X: https://x.com/ionet
Discord: https://discord.com/invite/ionetofficial
Telegram: https://t.me/io_net
LinkedIn: linkedin.com/company/ionet-official/
Medium: https://medium.com/@ionet
Gaurav Sharma: https://www.linkedin.com/in/searchgauravsharma/?originalSubdomain=th
Disclaimer
Nothing mentioned in this podcast is investment or financial advice; please do your own research.
Be a guest on the podcast or contact us - https://www.web3pod.xyz/
What happens when simplicity meets AI on the world's biggest tech stage? In this episode, recorded live at GITEX Global in Dubai, I sit down with Sohaib Zaheer, Senior Vice President and General Manager at DigitalOcean, to talk about how the company is staying true to its founding vision of accessibility and simplicity while entering the age of AI. For years, DigitalOcean has been known as the cloud that “speaks the language of builders,” empowering developers and startups to innovate without unnecessary complexity. Now, with the launch of its Gradient AI platform and Cloudways Copilot, the company is bringing that same philosophy to AI development, helping teams go from idea to production-ready agents without huge DevOps teams or fragmented toolchains. Sohaib explains how DigitalOcean's unified stack is making AI agent development faster, easier, and more transparent. We discuss the startling statistic that 95% of AI projects never make it past the prototype stage, and explore how Gradient AI aims to change that through agent templates, debugging tools, and built-in guardrails. We also look under the hood at AI inferencing, GPU optimization, and why performance and cost efficiency still matter as much as cutting-edge innovation. If you have ever wondered how AI can become truly accessible, or how simplicity might just be the next big breakthrough, this conversation offers a grounded, real-world perspective from one of the most down-to-earth leaders in cloud technology. Recorded live on the show floor at GITEX Global, this episode is a reminder that great tech is not about hype, it is about helping people build, test, and create with confidence.
Join The Full Nerd gang as they talk about the latest PC building news. In this episode the gang is joined by special guest Tom Petersen, Fellow at Intel, to talk about all things Xe3, Panther Lake gaming, GPU benchmarking, and much more. And of course we answer your questions live! Links: - Panther Lake deep-dive: https://www.pcworld.com/article/2928765/panther-lake-unveiled-a-deep-dive-into-intels-next-gen-laptop-cpu.html - Interview with Tom Petersen: https://youtu.be/Bjdd_ywfEkI?si=Jn_YH_jZWntwsqVV - Thread Director interview: https://youtu.be/VcvzIGA6qA4?si=adxnHtMTiGvWQzY4 Join the PC related discussions and ask us questions on Discord: https://discord.gg/WWnEzTDhw Follow the crew on X and Bluesky: @AdamPMurray @BradChacos @MorphingBall @WillSmith ============= Read PCWorld! Website: http://www.pcworld.com Newsletter: http://www.pcworld.com/newsletters/signup ============= #podcast #news #pcgaming
Send us a text!
Watch this episode on YouTube
This week, don't call it an Apple Event, but the first M5 devices are here! The MacBook Pro, the iPad Pro, and the Vision Pro too… I guess. Also: What we're buying, and the Steve Jobs commemorative $1 coin.
This episode supported by:
Listeners like you. Your support helps us fund CultCast Off-Topic, a new weekly podcast of bonus content available for everyone; and helps us secure the future of the podcast. You also get access to The CultClub Discord, where you can chat with us all week long, give us show topics, and even end up on the show. Support The CultCast at support.thecultcast.com — or unsubscribe at unfork.thecultcast.com
CleanMyMac is your ultimate solution for Mac control and care. Get tidy today — try 7 days free and use my code CULTCAST for 20% off at clnmy.com/CultCast
Most companies only act after a breach. Be the one that's prepared. Defend your business with NordStellar. Get an exclusive offer: Unlock your 10% discount on NordStellar with the coupon code cultcast-10 at NordStellar.com/CultCast. Just mention it to NordStellar!
This week's stories:
Apple's new M5 chip pumps up AI and graphics processing power
With the release of the new MacBook Pro and more, Apple's new M5 chip pumps up AI and graphics processing power amid other gains.
14-inch MacBook Pro gets M5 power boost
Apple's refreshed entry-level MacBook Pro delivers a massive leap in CPU, GPU and AI performance thanks to the next-gen M5 processor.
Apple supercharges iPad Pro with next-gen M5 processor
Apple just revealed the 2025 iPad Pro, which gets a speed boost from the new M5 processor, and some variants get 50% more RAM.
Going large on M5 iPad Pro storage also scores premium performance
Spatial computing gets more powerful (and comfortable) with M5 Vision Pro
Apple upgrades the Vision Pro to the latest M5 chip. The improved headset also features a more comfortable strap and better battery life.
Vision Pro owners can get its biggest upgrade for just $99
Steve Jobs $1 coin looks nothing like Steve Jobs
A weird portrait of Apple co-founder Steve Jobs sitting in a California meadow will appear on a $1 coin from the U.S. Mint in 2026.
In another wave of tariff news, Trump announced a 100% tariff on Chinese goods that will take effect in November. The constant back and forth of tariff policy has left import-reliant business owners frustrated, defeated and wondering how long they can hold out. Also in this episode: Slowing immigration explains a change in break-even employment, California explores public AI compute projects to create shared GPU infrastructure, and GDP may grow more than expected, despite economic uncertainty.Every story has an economic angle. Want some in your inbox? Subscribe to our daily or weekly newsletter.Marketplace is more than a radio show. Check out our original reporting and financial literacy content at marketplace.org — and consider making an investment in our future.