How can you scale AI at the enterprise, yet still hit your climate goals? And can heavy AI usage and an enterprise's ESG mission co-exist? Ashutosh Ahuja lays it out for us.

Aligning AI With Climate And Business Goals -- An Everyday AI Chat with Jordan Wilson and Ashutosh Ahuja

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
AI's Environmental Impact and Climate Concerns
Companies Aligning AI with ESG Goals
AI Adoption Versus Carbon Footprint Tradeoffs
Metrics for Measuring AI's Environmental Impact
Business Efficiency Gains from AI Adoption
Real-World Examples: AI Offsetting Carbon Footprint
Industry Opportunities for Sustainable AI Integration
Future Trends: Efficient AI Models and Edge Computing

Timestamps:
00:00 Everyday AI Podcast & Newsletter
05:52 Balancing Progress and Legacy
07:03 "Should Companies Limit AI Usage?"
12:02 "Sentiment Analysis for Business Growth"
17:07 "Energy Efficiency Impacts ESG Metrics"
19:40 Robots, Energy, and AI Opportunity
21:41 AI Efficiency and Climate Balance
25:04 "Trust Instincts in Investments"

Keywords:
AI and climate, climate goals, aligning AI with ESG, environmental impact of AI, carbon footprint, energy use in AI data centers, water cooling for GPUs, sustainable business practices, enterprise AI strategy, ESG compliance, climate pledges, AI adoption in business, carbon footprint metrics, machine learning for sustainability, predictive analytics, ethical AI, green AI solutions, renewable energy sector, AI in waste management, camera vision for waste sorting, delivery robots, edge AI, small business AI implementation, AI efficiency, sentiment analysis, customer patterns, predictive maintenance, IoT data, auto scaling, cloud computing, resource optimization, SEC filings, brand sentiment tracking, LLM energy consumption, environmental considerations for AI, future of AI in climate action, business efficiency, human in the loop, philanthropic business practices, sustainable architecture, large language models and climate, tech industry climate initiatives, AI-powered resource savings, operational sustainability.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
This Week In Startups is made possible by:
Northwest Registered Agent - https://www.northwestregisteredagent.com/twist
Crusoe - http://crusoe.ai/build
Gusto - https://www.gusto.com/twist

Today's show: Delegating is its own unique skill, requiring training and a real investment of time and attention. On TWiST, Jason chats for a full hour with the founder of one of his favorite startups, Athena, which trains online assistants and pairs them with busy founders and executives. (Jason has 2!) But getting the MOST out of your executive assistants is less obvious than it looks. Jonathan unpacks some of the secrets to “Black Diamond Delegating,” and how he manages to keep 6 different high-level helpers operating at once.

Plus, Jason and Jonathan look back at the Open Angel Forum days, where Jason invested in Jonathan's previous company, Thumbtack, praise the “Checklist Manifesto,” discuss the telltale signs you've achieved market pull, and lots more insights.

Timestamps:
(01:53) We're joined by Jonathan Swanson from one of JCal's fav startups, Athena!
(02:02) Jason and Jonathan first met during the Open Angel Forum, when Jonathan was working on Thumbtack
(06:44) Finding the “little touches” that can help make an app more delightful
(9:47) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(12:05) The shift from Thumbtack to Athena was all about time
(12:52) How Jonathan delegates to 6 exec assistants at once
(14:22) Pricing Athena's EAs: Jason runs the numbers
(15:09) Why Athena made Jason believe in hiring assistants again
(18:15) Getting past the “Cardinal Sins of Delegation”
(19:38) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(20:48) Will AI ever be able to replace Athena assistants?
(23:41) Inside how Athena finds and trains assistants from around the world
(27:01) How JCal became an Athena Ambassador… and almost crashed the system!
(30:55) Gusto - Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at https://www.gusto.com/twist
(32:11) The magic of having assistants work on “backstop projects” and creative tasks
(37:14) How to know when you have achieved market pull
(40:05) Why getting the most out of delegating takes real investment and training
(44:36) More praise for the Checklist Manifesto
(46:26) Jonathan gives us a peek at what “Black Diamond Delegation” looks like
(52:14) Jason's early experiences hiring overseas assistants, from the Mahalo days

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex | LinkedIn: https://www.linkedin.com/in/alexwilhelm/
Follow Jason: X: https://twitter.com/Jason | LinkedIn: https://www.linkedin.com/in/jasoncalacanis/

Thank you to our partners: Northwest Registered Agent (9:47) | Crusoe Cloud (19:38) | Gusto (30:55)
CoreWeave CFO Nitin Agrawal joins Run the Numbers to unpack the finance engine behind one of the fastest-growing AI infrastructure companies on the planet. CJ and Nitin dive into what it takes to build financial discipline in an environment where business models are being invented in real time, discussing the company's 700% growth last year and massive first-quarter performance as a newly public company. They cover capex strategy, securitizing GPUs, managing billion-dollar revenue backlogs, and structuring incentives for hyperscale deals, all while keeping investors grounded and servers running at full tilt. If you want a front-row seat to finance in the AI arms race, this episode delivers.

SPONSORS:
Tipalti automates the entire payables process—from onboarding suppliers to executing global payouts—helping finance teams save time, eliminate costly errors, and scale confidently across 200+ countries and 120 currencies. More than 5,000 businesses already trust Tipalti to manage payments with built-in security and tax compliance. Visit https://www.tipalti.com/runthenumbers to learn more.
Aleph automates 90% of manual, error-prone busywork, so you can focus on the strategic work you were hired to do. Minimize busywork and maximize impact with the power of a web app, the flexibility of spreadsheets, and the magic of AI. Get a personalised demo at https://www.getaleph.com/run
Fidelity Private Shares is the all-in-one equity management platform that keeps your cap table clean, your data room organized, and your equity story clear—so you never risk losing a fundraising round over messy records. Schedule a demo at https://www.fidelityprivateshares.com and mention Mostly Metrics to get 20% off.
Sage Intacct is the cloud financial management platform that replaces spreadsheets, eliminates manual work, and keeps your books audit-ready—so you can scale without slowing down. It combines accounting, ERP, and real-time reporting for retail, financial services, logistics, tech, professional services, and more. Sage Intacct delivers fast ROI, with payback in under six months and up to 250% return. Rated #1 in customer satisfaction for eight straight years. Visit Sage Intacct and take control of your growth: https://bit.ly/3Kn4YHt
Mercury is business banking built for builders, giving founders and finance pros a financial stack that actually works together. From sending wires to tracking balances and approving payments, Mercury makes it simple to scale without friction. Join the 200,000+ entrepreneurs who trust Mercury and apply online in minutes at https://www.mercury.com
RightRev automates the revenue recognition process from end to end, gives you real-time insights, and ensures ASC 606 / IFRS 15 compliance—all while closing books faster. For RevRec that auditors actually trust, visit https://www.rightrev.com and schedule a demo.

LINKS:
Nitin on LinkedIn: https://www.linkedin.com/in/nitin-agrawal-cloudcfo/
Company: https://www.coreweave.com/
CJ on LinkedIn: https://www.linkedin.com/in/cj-gustafson-13140948/
Mostly metrics: https://www.mostlymetrics.com

RELATED EPISODES:
The Art and Science of a Day-One IPO Pop with OneStream Software CFO Bill Koefoed: https://youtu.be/kYCn7XNkCBc
From Facebook's Hypergrowth to Daffy's Disruption: A CFO's Playbook for Saying Yes: https://youtu.be/bRIZ6oNPGD0

TIMESTAMPS:
00:00:00 Preview and Intro
00:02:54 Sponsors – Tipalti | Aleph | Fidelity Private Shares
00:06:12 Interview Begins: Scaling CoreWeave
00:06:52 CoreWeave's Pivot From Crypto to AI
00:11:41 Why CoreWeave Is Uniquely Positioned to Lead AI Infrastructure
00:13:32 Hiring for Both Scrappiness and Scale
00:16:01 Post-IPO Whirlwind: Acquisitions, Debt Raises, and 10-Year Deals
00:16:43 Sponsors – Sage Intacct | Mercury | RightRev
00:20:13 Managing Investor Expectations With Radical Transparency
00:22:39 Doubling Active Power in Six Months
00:25:19 Risk-Balanced Capital Deployment: Power First, GPUs Second
00:27:12 Financing GPUs With Delayed-Draw Facilities
00:29:38 CoreWeave Rated Platinum for GPU Cluster Performance
00:32:25 Compute as the Bottleneck for AI Growth
00:33:47 Explaining Revenue Backlog Shape & Timing
00:35:06 The Strength of Reserved Instance Contracts
00:36:07 Giving Tight but Honest Guidance
00:40:26 How Mega-Deals Require C-Suite Participation
00:42:19 Tackling Revenue Concentration Through Diversification
00:44:05 Building an AI-Only Cloud, Not a General-Purpose Cloud
00:46:27 Capital Markets Muscle: Raising Billions at Speed
00:47:47 Accounting Complexity in a Business With No Precedent
00:49:33 Even the CFO Must Unlearn Old Cloud Assumptions
00:51:29 Scaling Public-Company Processes in 90-Day Cycles
00:54:42 The Couch Fire vs. House Fire Framework
00:57:17 Balancing Risk Mitigation With Opportunity Seeking
01:00:30 No Downtime for ERP Changes During Hypergrowth
01:02:33 Why the Team Stays Energized Despite the Chaos

#RunTheNumbersPodcast #CFOInsights #Hypergrowth #AIInfrastructure #FinanceStrategy

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cjgustafson.substack.com
Send us a text

We break down Cloudflare's outage, why a small config change caused big waves, and what better guardrails could look like. We then unpack AWS and Google's cross-cloud link, Megaport's move into bare metal and GPUs, Webex adding deepfake defenses, and a new startup aiming to tune AI networks at microsecond speed.

• Cloudflare outage root cause and fallout
• Automation guardrails, validation and rollbacks
• AWS–Google cross-cloud connectivity preview
• Pricing, routing and policy gaps to watch
• Megaport acquires Latitude SH for compute
• Bare metal and GPU as a service near clouds
• Webex integrates deepfake and fraud detection
• Accuracy risks, UX and escalation paths
• Apstra founders launch Aria for AI networks
• Microburst telemetry, closed-loop control and SLAs

If you enjoyed this, please give us some feedback or share it with a friend; we'd love to hear from you, and we'll see you in two weeks with another episode.

Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Monthly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
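The "validation and rollbacks" guardrails the hosts call for can be sketched in a few lines. This is a minimal illustration of the validate-then-verify-then-rollback pattern, not Cloudflare's actual tooling; the config shape, the size limit, and the health check are all invented for the example.

```python
# Minimal sketch of an automation guardrail: validate a candidate config
# before deploying, then health-check after deploying, and automatically
# roll back to the last-known-good config if the check fails.
# All names, limits, and config shapes here are hypothetical.

def deploy_with_guardrails(current, change, validate, health_check, apply):
    candidate = {**current, **change}
    if not validate(candidate):
        return "rejected"            # pre-deploy validation gate
    apply(candidate)
    if not health_check(candidate):
        apply(current)               # automatic rollback to last-known-good
        return "rolled back"
    return "deployed"

live = {}

def apply_config(cfg):
    live.clear()
    live.update(cfg)

# Example guardrail: cap the size of a generated feature file so an
# oversized artifact never ships, and reject corrupt entries post-deploy.
validate = lambda cfg: len(cfg.get("features", [])) <= 3
healthy = lambda cfg: "corrupt" not in cfg.get("features", [])

print(deploy_with_guardrails({"features": ["a"]}, {"features": ["a", "b"]},
                             validate, healthy, apply_config))   # deployed
print(deploy_with_guardrails({"features": ["a"]}, {"features": ["a", "b", "c", "d"]},
                             validate, healthy, apply_config))   # rejected
```

The key design point the episode argues for is the second gate: pre-deploy validation alone cannot catch every bad artifact, so a post-deploy health check with an automatic path back to a known-good state limits the blast radius.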
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.
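The unit-economics argument behind workload disaggregation can be made concrete with a toy scheduler. This is not Gimlet's system; the hardware names, throughputs, and prices below are invented, and the point is only the shape of the decision: among devices fast enough for a workload's latency floor, pick the cheapest per token.

```python
# Toy cost-aware placement for heterogeneous inference (illustrative only;
# all throughput and pricing figures are invented for this sketch).
from dataclasses import dataclass

@dataclass
class Hardware:
    name: str
    tokens_per_sec: float    # throughput for this workload class
    dollars_per_hour: float

def cost_per_million_tokens(hw: Hardware) -> float:
    # $/hour divided by tokens/hour, scaled to one million tokens
    return hw.dollars_per_hour / (hw.tokens_per_sec * 3600) * 1_000_000

def cheapest_meeting_latency(pool, min_tokens_per_sec):
    # Among devices fast enough for the workload's latency floor,
    # pick the one with the lowest cost per token.
    eligible = [hw for hw in pool if hw.tokens_per_sec >= min_tokens_per_sec]
    return min(eligible, key=cost_per_million_tokens) if eligible else None

pool = [
    Hardware("H100", tokens_per_sec=1500, dollars_per_hour=4.00),
    Hardware("A100", tokens_per_sec=700, dollars_per_hour=1.50),
    Hardware("CPU-node", tokens_per_sec=60, dollars_per_hour=0.10),
]

# A latency-tolerant agent background task can run on much cheaper silicon,
# while an interactive request still lands on the fastest GPU.
print(cheapest_meeting_latency(pool, min_tokens_per_sec=50).name)    # prints CPU-node
print(cheapest_meeting_latency(pool, min_tokens_per_sec=1000).name)  # prints H100
```

Under these made-up numbers the CPU node is the cheapest per token, which is exactly why token-hungry agent workloads change the calculus relative to latency-sensitive chat traffic.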
The UK Investor Magazine was delighted to welcome Krystal Lai, Head of Communications at Majestic Corporation, back to the podcast for another insightful discussion around critical minerals and urban mining. This time, we focus on critical metals and AI infrastructure.

Download the 'Critical Minerals in AI Infrastructure' report.

This episode explores AI infrastructure's material demands and the data centre waste recycling opportunity. We look beyond AI's power consumption to the physical metals driving the technology and their end-of-life value. The conversation examines specific metals in GPUs and AI hardware, assessing how infrastructure buildout affects global metal demand and how recycling is a key element in securing future supply. Krystal details which AI infrastructure metals urban miner Majestic recovers, their current exposure to data centre waste, and the projected growth in this segment. We finish by looking at how Majestic is positioning to capture the opportunity.

Hosted on Acast. See acast.com/privacy for more information.
Darrick Horton is the CEO and co-founder of TensorWave, the company making waves in AI infrastructure by building high-performance compute on AMD chips. In 2023, he and his team took the unconventional path of bypassing Nvidia, a bold bet that has since paid off with nearly $150 million raised from Magnetar, AMD Ventures, Prosperity7, and others. TensorWave is now operating a dedicated training cluster of around 8,000 AMD Instinct MI325X GPUs and has already hit a $100 million revenue run rate.

Darrick is a serial entrepreneur with a track record of building infrastructure companies. Before TensorWave, he co-founded VMAccel, sold Lets Rolo to LifeKey, and co-founded the crypto mining company VaultMiner. He began his career as a mechanical engineer and plasma physicist at Lockheed Martin's Skunk Works, where he worked on nuclear fusion energy. While he studied physics and mechanical engineering at Andrews University, he left early to pursue entrepreneurship and hasn't looked back since.

In this conversation we discussed:
Why Darrick chose AMD over Nvidia to build TensorWave's AI infrastructure, and how that decision created a competitive advantage in a GPU-constrained market
What makes training clusters more versatile than inference clusters, and why TensorWave focused on the former to meet broader customer needs
How Neocloud providers like TensorWave can move faster and innovate more effectively than legacy hyperscalers in deploying next-generation AI infrastructure
Why power, not GPUs, is becoming the biggest constraint in scaling AI workloads, and how data center architecture must evolve to address it
Why Darrick predicts AI architectures will continue to evolve beyond transformers, creating constant shifts in compute demand
How massive increases in model complexity are accelerating the need for green energy, tighter feedback loops, and seamless integration of compute into AI workflows

Resources:
Subscribe to the AI & The Future of Work Newsletter
Connect with Darrick on LinkedIn
AI fun fact article
On how the new definition of work
In this episode, Venkat Kirishnamurthy, a Principal Architect at Cisco, explains the ins and outs of designing an AI datacenter. How is it different from any other datacenter? What kind of scale are we talking about? What throughput is required to connect 1,000 GPUs together?

Learn more about CX Services and how we can help you design your datacenter: https://www.cisco.com/site/us/en/services/index.html
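The "1,000 GPUs" question lends itself to a back-of-the-envelope calculation. The assumptions below are invented for illustration, not figures from the episode: each GPU has one 400 Gbps NIC, and the fabric is a non-blocking leaf-spine built from 64-port 400G switches (half of each leaf's ports face GPUs, half face spines).

```python
# Back-of-the-envelope fabric sizing for a 1,000-GPU cluster.
# All figures (NIC speed, switch radix) are illustrative assumptions.

gpus = 1000
nic_gbps = 400           # hypothetical per-GPU network interface speed
ports_per_switch = 64    # hypothetical 64x400G leaf switch

# Aggregate injection bandwidth the fabric must carry, in Tbps.
total_injection_tbps = gpus * nic_gbps / 1000

# Non-blocking leaf-spine: each leaf serves half its ports as GPU-facing.
gpus_per_leaf = ports_per_switch // 2
leaf_switches = -(-gpus // gpus_per_leaf)   # ceiling division

print(total_injection_tbps, leaf_switches)  # prints 400.0 32
```

Even under these generic assumptions the scale is striking: 400 Tbps of injection bandwidth and dozens of leaf switches before counting the spine layer, which is why AI datacenter fabrics look nothing like ordinary enterprise networks.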
Most enterprises burn millions on idle GPUs while developers wait weeks for access. Haseeb Budhani, CEO of Rafay Systems, built a global GPU orchestration platform after exits at Soha Systems (acquired by Akamai) and brings deep infrastructure expertise to solving the $100B GPU waste crisis. He reveals why 93% of Fortune 500 companies achieve sub-85% GPU utilization, how sovereign AI requirements are driving hundreds of "Neo clouds" globally, and the specific multi-tenancy frameworks that transform expensive compute from sunk cost into competitive advantage.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
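The idle-GPU waste described above is easy to quantify with a rough model. Every number below is invented for illustration (fleet size, blended hourly rate, utilization level); the point is only how quickly under-utilization compounds into a large annual figure.

```python
# Rough illustration (all numbers hypothetical) of how idle GPU capacity
# turns into a concrete annual dollar figure.
gpus = 512
hourly_rate = 3.50        # $/GPU-hour, hypothetical blended rate
utilization = 0.55        # 55% -- well under the 85% cited as rare

idle_hours_per_year = gpus * 24 * 365 * (1 - utilization)
annual_waste = idle_hours_per_year * hourly_rate
print(round(annual_waste))   # prints 7064064 -- roughly $7M/year for one modest fleet
```

Multiply a figure like this across thousands of enterprise fleets and the episode's "$100B GPU waste" framing becomes plausible as an order of magnitude.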
The semiconductor race has become the centerpiece of today's technology contest. Among all the nations trying to secure a place in this arena, none is advancing as quickly as China. Technological self-sufficiency has become Beijing's top priority, and the chip sector, once a point of vulnerability, has turned into the core of a long-term national strategy.

Thiago de Aragão, political analyst

Today it is becoming clear that China is dangerously close to achieving independence in areas that were once a near-monopoly of the United States and its allies. This profoundly shifts the global geopolitical balance, the sector's major companies, and the future of innovation itself. Until a few years ago, China imported virtually every sophisticated semiconductor product, depending on foreign suppliers for artificial intelligence, supercomputing, and much of modern industry. Then it decided to invert that logic. Through aggressive industrial policy, billions in state investment, and tax incentives capable of reshaping entire cities, the country began building a complete semiconductor chain, able to operate from design through fabrication and packaging. This was no timid move. China attracted engineers from abroad, trained hundreds of thousands of qualified professionals, built industrial parks dedicated exclusively to the sector, and started developing its own equipment and software. There are still sensitive areas where the country has not taken the lead, such as extreme ultraviolet lithography, but progress has been so rapid that the remaining gap is no longer decisive. Huawei, for example, managed to produce a smartphone with a domestic 7-nanometer chip, something few analysts thought possible in so little time. Beijing pursues self-sufficiency not as rhetoric but as a long-term state project.
This Chinese advance comes just as the geopolitical contest between China and the United States reaches unprecedented intensity. For the Americans, advanced chips are no longer mere industrial components; they are now treated as fundamental national-security assets. Washington's response was to tighten export rules, deeply restricting Chinese access to the most sophisticated semiconductors and the machines used to make them. China, in turn, reads these restrictions as attempted containment and responds by strengthening its own industrial muscle. By limiting exports of strategic minerals and scaling up domestic investment, Beijing signals that it is prepared to fight its own asymmetric battle. What was once an essentially economic dispute has become a systemic contest between two models of power.

Impact on the US

In this environment, the sector's giants have begun to feel very concrete impacts. Nvidia has been hit hardest. For years it dominated the Chinese market for AI chips. When the United States restricted exports of the most advanced models, the company saw its share all but wiped out in the planet's largest AI market. It tried to adapt by creating less powerful versions of its chips, but even those now face the risk of being blocked. At the same time, Chinese companies positioned themselves to fill the void. Huawei pushed aggressively with its own AI chips and now supplies a large share of domestic projects. Chinese startups gained immediate momentum, backed by a government committed to accelerated technological substitution. For AMD and Intel, the picture follows the same line. The requirement that state-linked data centers use only domestic chips has cut these companies' growth prospects and made clear that the drive toward Chinese self-sufficiency will not be reversed.
Even in ordinary PCs and servers, China is increasingly betting on designing and manufacturing its own CPUs and GPUs, a slow but steady erosion of the American manufacturers' position. Qualcomm faces a different kind of vulnerability. Nearly half of its global revenue depends on the Chinese smartphone ecosystem. If China consolidates domestic production of mobile chips at industrial scale, and if companies like Huawei regain a dominant position in 5G networks and premium handsets, Qualcomm runs a real risk of losing one of its revenue pillars.

Strategies under review

While all this unfolds, the rest of the world is trying to react. The United States launched the CHIPS Act to bring fabs onto national territory and strengthen its industry. Europe adopted measures of its own, trying to regain relevance in a sector it abandoned decades ago. Japan, South Korea, Taiwan, and India joined the contest with tax incentives, technology diplomacy, and promises to reduce external dependence. For the first time in decades, countries began reorganizing supply chains not by classic economic criteria but by political alignment and perceived risk. The logic is simple: friends produce with friends. The price is lost efficiency and higher costs. The gain is a sense, however relative, of strategic security. Even so, fragmenting a system as globally integrated as semiconductors means disturbing the entire structure of the digital economy. The chains that once connected Japan, Taiwan, the Netherlands, China, and the United States are now reconfiguring into parallel blocs, splintering what was once the most globalized sector on the planet. It is a slow, costly, turbulent process, but an inevitable one as tensions rise.

The Chinese advance

All of this shows that China's progress in semiconductor manufacturing is no isolated fact. It is redefining markets, geopolitics, and development models.
Companies like Nvidia, AMD, Intel, and Qualcomm are realizing that, even as historic leaders, they have lost a market where the rules of the game changed. Countries are realizing that the flow of technology is no longer neutral and has become a strategic weapon. Consumers will realize, in the coming years, that some technologies will only be available within certain blocs, while others follow different paths. The story is still being written, and it is too early to say who will hold the definitive advantage. But one thing is clear: the fight over chips is now the fight for control of the digital future, of artificial intelligence, advanced computing, defense, and everything that depends on processing. China is accelerating, and the rest of the world must decide whether to run alongside, build obstacles, or try to reinvent the game. The twenty-first century will be written, in large part, by whoever masters this industry. And that mastery is no longer as concentrated as it was in the recent past.
Flush of the week, with the best of this week's news. Leave me a comment!

Official social media: https://linktr.ee/DrakSpartanOficial
For anything else, contact Diego Walker: diegowalkercontacto@gmail.com
Video date: [30-11-2025]

#flush #amd #ram #nvidia #conectores #gpu #fire #conectorgpu #drakspartan #drak #elflush
Intel Nova Lake, aka Intel Core Ultra 400, is expected in fall 2026, and thanks to an entry in Noctua's FAQ we already know that current coolers will remain compatible with the new socket, even without a new mounting kit. Somewhat less certain, but considered fairly reliable, are rumors that there will be variants with extra cache, similar to AMD's X3D CPUs. Fall 2026 promises to be a hot one, with Intel Nova Lake and AMD Zen 6.

We've been gaming again! Meep tried the mobile game "Audioroids", in short: Asteroids for blind and visually impaired players. An exciting concept whose execution would still benefit from polish and, above all, quality-of-life features. Mike and Mo, meanwhile, looked at "Kingdom of Night", an isometric action RPG with pixel graphics and a setting clearly inspired by Stranger Things: a small American town in the 80s, shady cult robes (not the metal kind), demons, zombies, and a pub as the central hub (not the Winchester).

Mike finally built the PC for his niece: AMD Ryzen 5 7500F, Radeon 9060 XT 16GB, and 32GB RAM. Thirty-two gigabytes of random access memory? In this economy? All good, he snagged it just before prices climbed into the completely absurd. Overall a very nice build, still reasonably priced even if not the absolute cheapest, on an easily upgradable AM5 platform, with a graphics card that has enough punch and VRAM for the next few years. The Lian Li Lancool 207 is a well-thought-out case that is pleasant to build in without being too big. Only an additional dust filter in the front would be nice.

Enjoy episode 284!
Speakers: Meep, Michael Kister, Mohammed Ali Dad
Audio production: Michael Kister
Video production: Michael Kister
Cover image: Meep
Image sources: Kingdom of Night/Friends of Security/Dangen Entertainment/Pixabay
Recording date: 28.11.2025

Visit us:
Discord: https://discord.gg/SneNarVCBM
Bluesky: https://bsky.app/profile/technikquatsch.de
TikTok: https://www.tiktok.com/@technikquatsch
YouTube: https://www.youtube.com/@technikquatsch
Instagram: https://www.instagram.com/technikquatsch
Twitch: https://www.twitch.tv/technikquatsch
RSS feed: https://technikquatsch.de/feed/podcast/
Spotify: https://open.spotify.com/show/62ZVb7ZvmdtXqqNmnZLF5u
Apple Podcasts: https://podcasts.apple.com/de/podcast/technikquatsch/id1510030975

00:00:00 Welcome to Technikquatsch episode 284!
00:02:33 Mike built a PC for his niece; the memory crisis is annoying Mo at work
https://geizhals.de/wishlists/4798872
00:20:56 Intel Nova Lake stays compatible with existing coolers and mounting kits
https://www.computerbase.de/news/kuehlung/core-ultra-400-nova-lake-cpu-kuehler-kompatibilitaet.95216/
00:24:19 Rumors/news: Intel's next desktop processor Nova Lake may also get variants with extra cache
https://www.heise.de/news/Nova-Lake-Intels-naechster-Desktop-Prozessor-soll-riesigen-Cache-erhalten-11095779.html
00:28:49 Zen 6, current state of the rumors: 12 cores per CCD and low-power cores in the I/O die
00:30:42 Memory makers book margins of 30 percent; rumor has it Nvidia will no longer bundle GPUs with VRAM
https://www.computerbase.de/news/grafikkarten/nvidia-gpu-speicher-bundles-abzuschaffen-haette-immense-folgen.95243/
00:33:37 Stock prices and the entanglements of AI and tech companies
https://www.computerbase.de/news/grafikkarten/china-tech-giganten-trainieren-ki-auf-nvidia-gpus-im-ausland.95242/
https://www.wsj.com/tech/meta-ai-data-center-finances-d3a6b464
00:43:23 Differences in AMD's and Nvidia's marketing: ML-powered at AMD, AI at Nvidia
00:46:56 Production workflows at Technikquatsch, call for feedback
00:50:02 Mo tried Grok, Mike thinks genAI is rubbish, and Meep remains skeptical
01:00:50 Catnip-filled toys, and raccoons that are domesticating themselves
https://www.t-online.de/leben/aktuelles/id_101017672/studie-zeigt-moegliche-anzeichen-von-selbstdomestizierung-bei-waschbaeren.html
01:04:00 Games! Audioroids: mobile action for the blind
https://play.google.com/store/apps/details?id=com.CodedArt.audioroids&hl=de
https://apps.apple.com/de/app/audioroids-audio-shooter/id6740243875
01:16:27 Kingdom of Night: isometric action RPG in Stranger Things style (releases 02.12.2025)
https://store.steampowered.com/app/1094600/Kingdom_of_Night/
01:33:14 Listener comment and sign-off
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Black Friday secrets 2) Google may sell its TPUs to Meta and financial institutions 3) Nvidia sends an antsy tweet 4) How does Google's TPU stack up next to NVIDIA's GPUs 5) Could Google package the TPU with cloud services? 6) NVIDIA responds to the criticism 7) HSBC on how much OpenAI needs to earn to cover its investments 8) Thinking about OpenAI's advertising business 9) ChatGPT users lose touch with reality 10) Ilya Sutskever's mysterious product and revenue plans 11) X reveals our locations

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Learn more about your ad choices. Visit megaphone.fm/adchoices
Send us a text

In this episode of The Skinny on Wall Street, Kristen and Jen unpack the story stirring up markets: Michael Burry's latest warning that Big Tech is overstating earnings by extending the "useful life" assumptions on their GPUs. The conversation becomes a real-time teach-in on depreciation, useful life estimates, GAAP vs. tax depreciation, and how a small shift in an accounting estimate can meaningfully inflate EPS—especially for mega-cap tech stocks that trade heavily on P/E multiples. Kristen walks through exactly how depreciation affects valuation, and why some metrics (like EBITDA) and methodologies (like the DCF) are untouched by the choice of useful life. The big question the duo wrestle with: is Burry identifying a real risk, or is this a nothingburger amplified by market paranoia?

From there, Jen shifts to the fixed income landscape ahead of the December Fed meeting—one the central bank must navigate without key data (payrolls and CPI) that won't arrive until after the rate decision. She breaks down how Powell is managing optionality near the end of his term, how the market is pricing a December cut, and what a likely dovish successor (Kevin Hassett) could mean for rates in 2026. They also dig into credit markets: years of high coupons have fueled relentless reinvestment demand, but an uptick in issuance—especially from AI-heavy hyperscalers—may finally rebalance supply and demand. The duo look abroad as well, analyzing the UK's newly announced national property tax and what it signals about global fiscal stress.

The episode wraps with big updates from The Wall Street Skinny: the long-awaited launch of their Financial Modeling Course, the continued fixed income course presale, and new January 2026 office hours, plus the return date for HBO's Industry (January 11!).
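The useful-life mechanic the hosts walk through is easy to see in a toy calculation. All figures below are invented for illustration (fleet cost, income, share count, tax rate), with straight-line depreciation and no salvage value: stretching useful life spreads the same cost over more years, cutting the annual expense and lifting EPS with no change in cash flows.

```python
# Toy illustration of the useful-life effect on EPS (all numbers invented;
# straight-line depreciation, no salvage value).

gpu_fleet_cost = 30_000_000_000            # hypothetical $30B GPU fleet
pretax_income_before_dep = 20_000_000_000  # pre-tax, pre-depreciation income
shares_outstanding = 2_500_000_000
tax_rate = 0.21

def annual_depreciation(cost, useful_life_years):
    return cost / useful_life_years

def eps(useful_life_years):
    dep = annual_depreciation(gpu_fleet_cost, useful_life_years)
    net_income = (pretax_income_before_dep - dep) * (1 - tax_rate)
    return net_income / shares_outstanding

# Stretching useful life from 4 to 6 years cuts annual depreciation
# from $7.5B to $5.0B and lifts EPS by roughly 20%.
print(round(eps(4), 2), round(eps(6), 2))   # prints 3.95 4.74
```

This is also why EBITDA and a DCF are untouched by the choice, as the episode notes: EBITDA adds depreciation back, and a DCF is built on cash flows, which an estimate change does not alter.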
To get 25% off all our self paced courses, use code BLACKFRIDAY25 at checkout!
Learn more about 9fin HERE
Shop our Self Paced Courses:
Investment Banking & Private Equity Fundamentals HERE
Fixed Income Sales & Trading HERE
Wealthfront.com/wss. This is a paid endorsement for Wealthfront. May not reflect others' experiences. Similar outcomes not guaranteed. Wealthfront Brokerage is not a bank. Rate subject to change. Promo terms apply. If eligible for the boosted rate of 4.15% offered in connection with this promo, the boosted rate is also subject to change if base rate decreases during the 3 month promo period. The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC. Wealthfront Brokerage is not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 11/7/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable APY. Sources HERE.
This is a recap of the top 10 posts on Hacker News on November 27, 2025. This podcast was generated by wondercraft.ai
(00:30): Migrating the main Zig repository from GitHub to Codeberg
Original post: https://news.ycombinator.com/item?id=46064571&utm_source=wondercraft_ai
(01:52): Penpot: The Open-Source Figma
Original post: https://news.ycombinator.com/item?id=46064757&utm_source=wondercraft_ai
(03:14): Tell HN: Happy Thanksgiving
Original post: https://news.ycombinator.com/item?id=46065955&utm_source=wondercraft_ai
(04:36): Linux Kernel Explorer
Original post: https://news.ycombinator.com/item?id=46066280&utm_source=wondercraft_ai
(05:58): DIY NAS: 2026 Edition
Original post: https://news.ycombinator.com/item?id=46065034&utm_source=wondercraft_ai
(07:20): AI CEO – Replace your boss before they replace you
Original post: https://news.ycombinator.com/item?id=46072002&utm_source=wondercraft_ai
(08:42): Same-day upstream Linux support for Snapdragon 8 Elite Gen 5
Original post: https://news.ycombinator.com/item?id=46070668&utm_source=wondercraft_ai
(10:04): We're losing our voice to LLMs
Original post: https://news.ycombinator.com/item?id=46069771&utm_source=wondercraft_ai
(11:26): TPUs vs. GPUs and why Google is positioned to win AI race in the long term
Original post: https://news.ycombinator.com/item?id=46069048&utm_source=wondercraft_ai
(12:48): The Nerd Reich – Silicon Valley Fascism and the War on Democracy
Original post: https://news.ycombinator.com/item?id=46066482&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
At KubeCon + CloudNativeCon 2025 in Atlanta, the panel of experts - Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace and James Harmison of Red Hat - explored whether the cloud native era has evolved into an AI native era — and what that shift means for infrastructure, security and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon's Kate Goldenring highlighted WebAssembly as a way to bundle and securely deploy models directly to GPU-equipped hardware, reducing latency while adding sandboxed security. Dynatrace's Sean O'Dell noted that AI dramatically increases observability needs: integrating LLM-based intelligence adds value but also expands the challenge of filtering massive data streams to understand user behavior. Meanwhile, Mirantis CTO Shaun O'Meara emphasized a return to deeper infrastructure awareness. Unlike abstracted cloud native workloads, AI workloads running on GPUs require careful attention to hardware performance, orchestration, and energy constraints. Managing power-hungry data centers efficiently, he argued, will be a defining challenge of the AI native era.
Learn more from The New Stack about evolving the cloud native ecosystem to an AI native era:
Cloud Native and AI: Why Open Source Needs Standards Like MCP
A Decade of Cloud Native: From CNCF, to the Pandemic, to AI
Crossing the AI Chasm: Lessons From the Early Days of Cloud
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Love AWS Fargate, but occasionally hit the “I need more control” wall (GPUs, storage, network bandwidth, instance sizing)? In this episode of AWS Bites, Eoin and Luciano put the brand-new Amazon ECS Managed Instances (ECS MI) under the microscope as the “middle path” between Fargate simplicity and ECS on EC2 flexibility. We unpack what ECS MI actually is and where it fits in the ECS spectrum, especially how it changes the way you think about clusters and capacity providers. From there we get practical: we talk through the pricing model (EC2 pricing with an additional ECS MI fee that can be a bit counterintuitive if you rely heavily on Reserved Instances or Savings Plans), and we share what it feels like to finally get GPU support in an experience that's much closer to Fargate than to “full EC2 fleet management”. To make it real, we walk through what we built: a GPU-enabled worker that transcribes podcast audio using OpenAI Whisper, including the end-to-end setup in CDK (roles, capacity provider wiring, task definitions, and service configuration). Along the way we call out the rough edges we ran into, like configuration options that look like they might enable Spot-style behavior, and the operational realities you should expect, such as tasks taking roughly 3–4 minutes to start when ECS needs to provision fresh capacity. 
We close by mapping out the workloads where ECS MI shines (queue-driven GPU jobs, HPC-ish compute, tighter storage/network control) and the scenarios where it's probably the wrong choice, like when you need custom AMIs, SSH access, or stricter isolation guarantees. In this episode, we mentioned the following resources: Amazon ECS Managed Instances: https://aws.amazon.com/ecs/managed-instances/ ECS Managed Instances documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ManagedInstances.html Amazon Bottlerocket (what it is): https://aws.amazon.com/bottlerocket/ Our CDK ECS MI template: https://github.com/fourTheorem/cdk-ecs-mi-template Ep 42. How do you containerise and run your API with Fargate?: https://awsbites.com/42-how-do-you-containerise-and-run-your-api-with-fargate/ Ep 72. How do you save cost with ECS?: https://awsbites.com/72-how-do-you-save-cost-with-ecs/ Ep 10. Lambda or Fargate for containers?: https://awsbites.com/10-lambda-or-fargate-for-containers/ Ep 38. How do you choose the right compute service on AWS?: https://awsbites.com/38-how-do-you-choose-the-right-compute-service-on-aws/ Ep 143. Is App Runner better than Fargate?: https://awsbites.com/143-is-app-runner-better-than-fargate/ Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on X/Twitter, BlueSky or LinkedIn:
- https://twitter.com/eoins | https://bsky.app/profile/eoin.sh | https://www.linkedin.com/in/eoins/
- https://twitter.com/loige | https://bsky.app/profile/loige.co | https://www.linkedin.com/in/lucianomammino/
Kony Kwong is the CEO and Co-Founder of GAIB AI, a pioneering platform transforming physical GPUs into a new yield-bearing asset class. As AI drives exponential demand for computational infrastructure, GAIB stands at the forefront of a new financial frontier — where AI and DeFi converge to provide additional funding channels for cloud and data centers, while offering investors direct access to the explosive AI economy. With a background spanning quantitative trading, machine learning engineering at top-tier firms, and early bets on the AI + blockchain convergence, Kony brings a sharp operator-investor lens to one of the fastest-moving sectors in tech. In this episode, Kony unpacks WAGMI Ventures' thesis on why AI agents are the killer app for crypto in 2025 and beyond, how GAIB is transforming physical GPUs into a new yield-bearing asset class, designing autonomous onchain agents that actually own assets, trade, and compound value, and the firm's unique model of simultaneously building and investing behind its own convictions. He dives deep into the technical and economic breakthroughs needed for truly agentic crypto systems, the massive alpha in AI economies, and how GAIB is positioning itself to build the first economic layer for AI compute, bringing new investment possibilities into this surging sector.
A crowded server board with ten thousand parts doesn't forgive sloppy inspection—and neither do pricey GPUs and chiplets. From the floor of Productronica in Munich, we dig into how automated optical inspection keeps advanced packages honest once they hit the PCB line, where solder quality, coplanarity, and sheer component variety can make or break yield. Vidya Vijay from Nordson Test & Inspection joins us to unpack why AOI remains the fastest path to actionable insight, when X‑ray is the smarter choice, and how new sensor design changes the game for reflective, high‑mix assemblies. We explore the real pain points engineers face today: shiny dies that confuse cameras, BGAs packed with I/O where hidden defects hide under the body, and miniature passives that crowd tight keep‑outs. Vidya explains how three‑phase profilometry creates true 3D height maps by projecting fringe patterns and reading them from multiple angles, enabling precise checks for corner fill, underfill, and coplanarity. We also get into multi‑reflection suppression, Nordson's approach to filtering glare and ghost images so the system sees the joint, not the noise. With true RGB on side cameras and higher resolution, AOI can now pick out tiny solder balls and subtle surface issues at speed—fuel for stronger AI autoprogramming and more reliable defect classification. If throughput is king, data is queen. We talk about closing the loop from inspection back to the line to prevent bad lots—flagging stencil drift, placement offsets, and paste issues before they explode into scrap. Then we spotlight Nordson's newly launched SQ5000 Pro: faster cycle times, a wider field of view, and configurable 7 µm or 10 µm sensors designed for modern PCBA demands.
Whether you're chasing yield on high‑value GPUs or balancing AOI with AXI on dense boards, this conversation offers a practical roadmap for choosing the right tool, tackling reflectivity, and using insight to drive predictable quality.
Nordson Test and Inspection: Delivering best-in-class test, inspection, and metrology solutions for semiconductor applications.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show
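The three-phase profilometry Vidya describes can be sketched with the textbook three-step phase-shifting formula: project three fringe patterns offset by 120 degrees and recover, per pixel, the phase that encodes surface height. This is the generic math, assumed here for illustration; it is not Nordson's proprietary multi-angle implementation:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase from three 120-degree-shifted fringe images."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: build three fringe images from a known phase map,
# then recover that phase from the images alone.
true_phase = np.linspace(-np.pi / 2, np.pi / 2, 256).reshape(1, -1)
shifts = [-2 * np.pi / 3, 0.0, 2 * np.pi / 3]
i1, i2, i3 = (0.5 + 0.5 * np.cos(true_phase + s) for s in shifts)

recovered = wrapped_phase(i1, i2, i3)
print("max phase error:", np.max(np.abs(recovered - true_phase)))
```

In a real system the recovered phase is unwrapped and converted to height via the triangulation geometry; reading the fringes from multiple camera angles, as discussed in the episode, helps fill in pixels lost to glare or occlusion.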
On November 25th, 2025, Nvidia did something they've never done before: they publicly defended themselves. After reports broke that Meta is negotiating a multi-billion dollar deal to buy Google's AI chips instead of Nvidia's GPUs, Nvidia posted a defensive tweet claiming they're "a generation ahead" of ASICs like Google's TPUs. But if Nvidia is so far ahead, why are they tweeting about it? And why is Meta—their second-largest customer—trying to break free? In this deep dive, we break down the secret war for AI chips, analyze Nvidia's "panic tweet" line by line, and explain why Google is now racing toward a $4 trillion valuation while Nvidia's monopoly crumbles.
TIMESTAMPS
(0:00) Nvidia's Defensive Tweet
(0:45) The Meta Betrayal: Google TPU Deal Explained
(1:45) Google's $4 Trillion Comeback & Why TPUs Win
(2:44) Is Nvidia in Trouble?
(3:14) The Verdict: Confidence or Desperation?
KEY TAKEAWAYS
✅ Why Meta is negotiating to lease and buy Google TPUs starting in 2026
✅ The hidden weakness Nvidia accidentally revealed in their tweet
✅ How Google's "ASICs" are 30% faster and 60% more energy-efficient
✅ Why spending $50 billion/year makes efficiency matter more than versatility
✅ The end of Nvidia's monopoly pricing power
THE TWEET BREAKDOWN
We analyze Nvidia's November 25th response where they claim superiority over "ASICs" (Google's TPUs), why this is defensive PR, and what it reveals about the shifting power dynamics in AI hardware.
SUBSCRIBE FOR MORE VC & STARTUP STRATEGY
VC10X breaks down the most important stories in tech, startups, and investing every week. If you want actionable insights to help you build or invest in the next great company, subscribe now.
LET'S CONNECT
Website: https://VC10X.com
X / Twitter: https://x.com/choubeysahab
LinkedIn: https://linkedin.com/in/choubeysahab
COMMENT BELOW
Is Nvidia's tweet confident or desperate? Who wins the battle for Meta: Jensen Huang or Sundar Pichai?
Let us know in the comments.#Nvidia #Google #Meta #AIChips #TPU #JensenHuang #SundarPichai #TechNews #VentureCapital #Alphabet
This blog post is the best explanation I've seen of the increase in AI capabilities: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/

### Defining Market Bubbles
- Traditional definition: 20%+ share price decline with economic slowdown/recession
- Alternative perspective: hype/story not matching reality over time (dot-com example)
- Duncan's view: share prices ahead of future expectations
- Share prices predict future revenue/profit
- Decline when reality falls short of predictions

### Historical Bubble Context
- Recent cycles analyzed:
  - COVID (2020) - pandemic-led, quickly reversed with government intervention
  - GFC (2008) - housing bubble, financial crisis, deeper impact
  - Tech bubble (1999) - NASDAQ fell 80%, expectations vs reality mismatch
  - S&L crisis (1992) - mini financial crisis
  - Volcker era (1980s) - interest rates raised to break inflation

### Current AI Market Dynamics
- OpenAI: fastest growing startup ever, $20B revenue run rate in 2 years
- Anthropic: grew from $1B to $9B revenue run rate this year
- Big tech revenue acceleration through AI-improved ad platform ROI
- Key concern: if growth rates plateau, valuations become unsustainable

### Nvidia as Market Bellwether
- Central position providing GPUs for data center buildout
- Recent earnings beat analyst expectations but share price fell
- Market expectations vs analyst expectations are different metrics
- 80% of market money judged on 12-month performance vs long-term value creation

### AI Technology Scaling Laws
- Intelligence capability doubling every 7 months for 6 years
- Progress from 2-second tasks to 90-minute complex programming tasks
- Cost per token declining 100x annually on frontier models
- Current trajectory: potential for year-long human-equivalent tasks by 2028

### Investment Scale and Infrastructure
- $3 trillion committed to data center construction this year
- Power becoming primary bottleneck (not chip supply)
- 500-acre solar farms being built around data centers
- 7-year backlog on gas turbines, solar+battery fastest deployment option

### Bubble vs Boom Scenarios
- Bear case: scaling laws plateau, power constraints limit growth
  - Short-term revenue slowdown despite long-term potential
  - Circular investment dependencies create domino effect
- Bull case: scaling laws continue, GDP growth accelerates to 5%+
  - Current 100% GPU utilization indicates strong demand
  - Structural productivity gains justify investment levels

### Market Structure Risks
- Foundation model layer: 4 roughly equal competitors (OpenAI, Anthropic, Google, XAI)
- No clear “winner takes all” dynamic emerging
- Private company valuations hard to access for retail investors
- Application layer: less concentrated, easier to build sustainable businesses
- Chip layer: Nvidia dominance but Google TPUs showing competitive performance
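The task-length trajectory in these notes can be sanity-checked with back-of-the-envelope arithmetic. This sketch assumes a constant 7-month doubling time and a 2,000-hour "work year", both simplifications rather than a restatement of METR's methodology:

```python
import math

# Extrapolate the task-horizon trend: tasks AI can complete roughly double
# in length every 7 months (illustrative constants, not METR's exact model).
DOUBLING_MONTHS = 7
current_horizon_minutes = 90           # "90-minute complex programming tasks"
target_horizon_minutes = 2000 * 60     # ~one work-year (2,000 hours)

doublings = math.log2(target_horizon_minutes / current_horizon_minutes)
months = doublings * DOUBLING_MONTHS

print(f"doublings needed: {doublings:.1f}")
print(f"months at a 7-month doubling time: {months:.0f} (~{months / 12:.1f} years)")
```

At a constant doubling time the roughly ten doublings needed take about six years, landing in the early 2030s; reaching year-long tasks by 2028 implies the doubling time itself keeps shrinking, which is part of the bull case discussed above.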
ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI
They highlight transformer workloads as the clearest comparison. Nvidia claims this is where they shine most. We examine the workload differences.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Nvidia claims its latest designs leave Google's TPUs in the dust. The performance gap is widening, not shrinking. This episode examines why Nvidia thinks it's pulling away.
Nvidia wants the industry to see TPU limitations clearly. They're using bold messaging to reinforce it. We examine the strategy.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled (November 26th, 2025), your daily strategic briefing on the business impact of AI. Today, the fundamental laws of AI development are being questioned. We analyze Ilya Sutskever's shocking declaration that the "age of scaling" is ending, a pivot that could redefine capital allocation in the sector. We also track the escalating war of words between Nvidia and Google over chip dominance, and the labor market shockwave as Claude Opus 4.5 outscores human engineers on hiring exams while HP cuts 6,000 jobs.
Strategic Pillars & Key Takeaways:
Strategy & The Future of Compute: Ilya Sutskever says AI's 'age of scaling' is ending; Anthropic claims AI could double U.S. productivity growth; HP to cut about 6,000 jobs in AI push.
Hardware Wars: Nvidia says its GPUs are a 'generation ahead' of Google's AI chips (TPUs); Nvidia responds to concerns over Google's TPUs gaining a foothold, citing "greater fungibility."
Model Performance & Benchmarks: Anthropic tested Claude Opus 4.5 on a take-home exam, scoring “higher than any human candidate ever”; Google's Gemini 3 Pro set a new high score for AI models on Tracking AI's offline IQ test (130); Tencent's Hunyuan open-sources HunyuanOCR.
Media, Commerce & Applications: Warner Music partners with Suno after settling lawsuit; ChatGPT merges voice and text into one chat window; using ChatGPT and Perplexity shopping research to find the best deals; Black Forest Labs' Flux.2 image generation suite; Musk proposes Grok 5 match against best League of Legends team.
As Americans prep their Thanksgiving feasts, one hotline is bracing for its busiest day of the year. Nicole Johnson, director of the Butterball Turkey Talk-Line, explains the most common turkey questions. Then, Harvard professor Arthur Brooks shares advice for navigating family dynamics, handling holiday anxiety, and finding common ground at the dinner table. Plus, Nvidia says its GPUs are a generation ahead of Google's AI chips, and Campbell's Soup responds to leaked audio claiming its food is made for “poor people.”
Arthur Brooks 13:34
Nicole Johnson 21:14
In this episode:
Nicole Johnson, @butterball
Arthur Brooks, @arthurbrooks
Becky Quick, @BeckyQuick
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
In this episode, we break down Nvidia's bold statement that its latest GPUs outperform Google's AI chips by a full generation. We explore what this means for the AI hardware race and how it could shape future model development.
We're told that AI progress is slowing down, that pre-training has hit a wall, that scaling laws are running out of road. Yet we're releasing this episode in the middle of a wild couple of weeks that saw GPT-5.1, GPT-5.1 Codex Max, fresh reasoning modes and long-running agents ship from OpenAI — on top of a flood of new frontier models elsewhere. To make sense of what's actually happening at the edge of the field, I sat down with someone who has literally helped define both of the major AI paradigms of our time. Łukasz Kaiser is one of the co-authors of “Attention Is All You Need,” the paper that introduced the Transformer architecture behind modern LLMs, and is now a leading research scientist at OpenAI working on reasoning models like those behind GPT-5.1. In this conversation, he explains why AI progress still looks like a smooth exponential curve from inside the labs, why pre-training is very much alive even as reinforcement-learning-based reasoning models take over the spotlight, how chain-of-thought actually works under the hood, and what it really means to “train the thinking process” with RL on verifiable domains like math, code and science. We talk about the messy reality of low-hanging fruit in engineering and data, the economics of GPUs and distillation, interpretability work on circuits and sparsity, and why the best frontier models can still be stumped by a logic puzzle from his five-year-old's math book. We also go deep into Łukasz's personal journey — from logic and games in Poland and France, to Ray Kurzweil's team, Google Brain and the inside story of the Transformer, to joining OpenAI and helping drive the shift from chatbots to genuine reasoning engines.
Along the way we cover GPT-4 → GPT-5 → GPT-5.1, post-training and tone, GPT-5.1 Codex Max and long-running coding agents with compaction, alternative architectures beyond Transformers, whether foundation models will “eat” most agents and applications, what the translation industry can teach us about trust and human-in-the-loop, and why he thinks generalization, multimodal reasoning and robots in the home are where some of the most interesting challenges still lie.
OpenAI
Website - https://openai.com
X/Twitter - https://x.com/OpenAI
Łukasz Kaiser
LinkedIn - https://www.linkedin.com/in/lukaszkaiser/
X/Twitter - https://x.com/lukaszkaiser
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold open and intro
(01:29) – “AI slowdown” vs a wild week of new frontier models
(08:03) – Low-hanging fruit: infra, RL training and better data
(11:39) – What is a reasoning model, in plain language?
(17:02) – Chain-of-thought and training the thinking process with RL
(21:39) – Łukasz's path: from logic and France to Google and Kurzweil
(24:20) – Inside the Transformer story and what “attention” really means
(28:42) – From Google Brain to OpenAI: culture, scale and GPUs
(32:49) – What's next for pre-training, GPUs and distillation
(37:29) – Can we still understand these models? Circuits, sparsity and black boxes
(39:42) – GPT-4 → GPT-5 → GPT-5.1: what actually changed
(42:40) – Post-training, safety and teaching GPT-5.1 different tones
(46:16) – How long should GPT-5.1 think? Reasoning tokens and jagged abilities
(47:43) – The five-year-old's dot puzzle that still breaks frontier models
(52:22) – Generalization, child-like learning and whether reasoning is enough
(53:48) – Beyond Transformers: ARC, LeCun's ideas and multimodal bottlenecks
(56:10) – GPT-5.1 Codex Max, long-running agents and compaction
(1:00:06) – Will foundation models eat most apps? The translation analogy and trust
(1:02:34) – What still needs to be solved, and where AI might go next
Big thanks to Cisco for sponsoring this video and sponsoring my trip to Cisco Partner Summit San Diego 2025. This video is a deep dive with Jeetu Patel on why the real AI revolution is happening in infrastructure and networking, not just in GPUs or chatbots. Jeetu explains that we are massively underestimating how much AI infrastructure the world will need. Power becomes the core constraint, GPUs are the core asset, and networking is the force multiplier that lets thousands of GPUs act as one system. He walks through how we went from models on a single GPU → 4–8 GPUs in a server → racks of tightly networked GPUs (like the NVL72, which links 72 GPUs into a single domain) → clusters of racks, and now “scale across” between data centers when power and real estate are scattered across different regions. The conversation then shifts to edge AI and Cisco Unified Edge: instead of doing all token generation in big data centers, some inference and token generation must move to the edge (branches, factories, hospitals, stadiums, stores) where data is created. Jeetu explains why edge devices need to be plug-and-play, remotely managed, and integrate compute, networking, security and observability in a single platform. He also introduces the idea that AI is now constrained by three big bottlenecks:
• Infrastructure
• A trust deficit (people don't trust AI yet)
• A data gap (models are mostly trained on human internet data, not on rich machine data)
Jeetu explains how security becomes a prerequisite for productivity, not a trade-off, and describes Cisco's work with Splunk, open-sourced time-series models, and machine data (logs, metrics, traces) to close the data gap by correlating machine data with human-generated data for better insights. Globally, he talks about the “token generation race” – how every country now cares about having enough AI token generation capacity because it directly links to GDP and national security.
He cites huge infrastructure build-outs with partners like G42 in the Middle East, at gigawatt and trillions-of-dollars scale. Finally, Jeetu tackles the “AI will take my job” fear. He outlines three stages of thinking: 1. “AI will take my job.” 2. “Someone who uses AI better will take my job.” 3. “Without AI, I won't be able to do my job.” His message to younger viewers: be excited, adopt AI as a companion, own your learning, and learn fast because AI compresses the time it takes to build skills. // Jeetu Patel's SOCIALS // LinkedIn: / jeetupatel Website: https://www.cisco.com/ X: https://x.com/jpatel41 // David's SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal YouTube: / @davidbombal Spotify: open.spotify.com/show/3f6k6gE... SoundCloud: / davidbombal Apple Podcast: podcasts.apple.com/us/podcast... // MY STUFF // https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com // Menu // 0:00 - Coming up 0:33 - "Networking is sexy" 02:24 - Scale up, scale out and scale across explained 04:47 - Cisco and Nvidia partnership 05:55 - Cisco and G42 partnership // Addressing the AI bubble 08:11 - New Cisco Unified Edge 11:08 - Agentic AI in the future 13:05 - Huge demand for networking 13:57 - The three constraints 16:38 - AI in the real world 19:26 - How AI will take jobs away 21:38 - Conclusion Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only.
MRKT Matrix - Tuesday, November 25th
Dow rallies nearly 700 points on increased hope for a December rate cut (CNBC)
Consumer confidence hits lowest point since April as job worries grow (CNBC)
Market Volatility Underscores Epic Buildup of Global Risk (NYTimes)
Nvidia says its GPUs are a ‘generation ahead' of Google's AI chips (CNBC)
Google, the Sleeping Giant in Global AI Race, Now ‘Fully Awake' (Bloomberg)
OpenAI needs to raise at least $207bn by 2030 so it can continue to lose money, HSBC estimates (FT)
Oracle-Linked Borrowing Binge Worries Lenders (The Information)
Private Credit's Sketchy Marks Get Warning Shot From Wall Street's Top Cop (Bloomberg)
---
Subscribe to our newsletter: https://riskreversalmedia.beehiiv.com/subscribe
MRKT Matrix by RiskReversal Media is a daily AI powered podcast bringing you the top stories moving financial markets. Story curation by RiskReversal, scripts by Perplexity Pro, voice by ElevenLabs
Fei-Fei Li and Justin Johnson are cofounders of World Labs, who have recently launched Marble (https://marble.worldlabs.ai/), a new kind of generative “world model” that can create editable 3D environments from text, images, and other spatial inputs. Marble lets creators generate persistent 3D worlds, precisely control cameras, and interactively edit scenes, making it a powerful tool for games, film, VR, robotics simulation, and more. In this episode, Fei-Fei and Justin share how their journey from ImageNet and Stanford research led to World Labs, why spatial intelligence is the next frontier after LLMs, and how world models could change how machines see, understand, and build in 3D. We discuss:
- The massive compute scaling from AlexNet to today and why world models and spatial data are the most compelling way to “soak up” modern GPU clusters compared to language alone.
- What Marble actually is: a generative model of 3D worlds that turns text and images into editable scenes using Gaussian splats, supports precise camera control and recording, and runs interactively on phones, laptops, and VR headsets.
- Fei-Fei's essay (https://drfeifei.substack.com/p/from-words-to-worlds-spatial-intelligence) on spatial intelligence as a distinct form of intelligence from language: from picking up a mug to inferring the 3D structure of DNA, and why language is a lossy, low-bandwidth channel for describing the rich 3D/4D world we live in.
- Whether current models “understand” physics or just fit patterns: the gap between predicting orbits and discovering F=ma, and how attaching physical properties to splats and distilling physics engines into neural networks could lead to genuine causal reasoning.
- The changing role of academia in AI, why Fei-Fei worries more about under-resourced universities than “open vs closed,” and how initiatives like national AI compute clouds and open benchmarks can rebalance the ecosystem.
- Why transformers are fundamentally set models, not sequence models, and how that perspective opens up new architectures for world models, especially as hardware shifts from single GPUs to massive distributed clusters.
- Real use cases for Marble today: previsualization and VFX, game environments, virtual production, interior and architectural design (including kitchen remodels), and generating synthetic simulation worlds for training embodied agents and robots.
- How spatial intelligence and language intelligence will work together in multimodal systems, and why the goal isn't to throw away LLMs but to complement them with rich, embodied models of the world.
- Fei-Fei and Justin's long-term vision for spatial intelligence: from creative tools for artists and game devs to broader applications in science, medicine, and real-world decision-making.
—
Fei-Fei Li
X: https://x.com/drfeifei
LinkedIn: https://www.linkedin.com/in/fei-fei-li-4541247
Justin Johnson
X: https://x.com/jcjohnss
LinkedIn: https://www.linkedin.com/in/justin-johnson-41b43664
Where to find Latent Space
X: https://x.com/latentspacepod
Substack: https://www.latent.space/
Chapters
00:00:00 Introduction and the Fei-Fei Li & Justin Johnson Partnership
00:02:00 From ImageNet to World Models: The Evolution of Computer Vision
00:12:42 Dense Captioning and Early Vision-Language Work
00:19:57 Spatial Intelligence: Beyond Language Models
00:28:46 Introducing Marble: World Labs' First Spatial Intelligence Model
00:33:21 Gaussian Splats and the Technical Architecture of Marble
00:22:10 Physics, Dynamics, and the Future of World Models
00:41:09 Multimodality and the Interplay of Language and Space
00:37:37 Use Cases: From Creative Industries to Robotics and Embodied AI
00:56:58 Hiring, Research Directions, and the Future of World Labs
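As rough intuition for the Gaussian splats mentioned above: each splat can be thought of as a position, a covariance (shape and orientation), a color, and an opacity, with pixels accumulating color under a Gaussian falloff. A toy 2D sketch, purely illustrative and not Marble's actual renderer:

```python
import numpy as np

def splat_weight(pixel, center, cov, opacity):
    """Gaussian falloff of one 2D splat at a pixel location."""
    d = pixel - center
    return opacity * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

# One splat, elongated along the x axis.
center = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0], [0.0, 1.0]])  # anisotropic shape
opacity = 0.8

w_center = splat_weight(np.array([0.0, 0.0]), center, cov, opacity)
w_far = splat_weight(np.array([6.0, 0.0]), center, cov, opacity)
print("weight at center:", w_center)  # equals the opacity itself
print("weight 6 units away:", w_far)  # tiny: the splat has soft edges
```

A full renderer composites millions of such splats front to back with alpha blending; the soft, differentiable falloff is what makes the representation friendly to gradient-based fitting and, as discussed in the episode, to attaching physical properties per splat.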
In today's episode, Tyler Herriage dives into the latest market action after a strong start to the week, highlighting the impressive rebound in major indexes and the standout performance of individual stocks. He takes a close look at the current buzz around Google's new TPUs versus Nvidia's GPUs, breaking down what these innovations mean for investors and whether there's truly an AI bubble brewing.
This week Qualcomm is back, and maybe everything is terrible with Arduino. Valve has been funding more Open Source work, and we're reading those tea leaves. Blender is out, AMD is writing code for their next-gen GPUs, and there's finally a remote access solution for Wayland. For tips, we have LibrePods for better AirPod support on Linux, paru for an easier time with the Arch User Repository, and the Zork snap to celebrate this newly Open-Sourced game from yesteryear. You can find the show notes at https://bit.ly/49uSNCy and have a great week! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Much attention has been focused in the news on the useful life of GPUs. While the pervasive narrative suggests GPUs have a short lifespan and that operators are “cooking the books,” our research suggests that GPUs, like CPUs before them, have a significantly longer useful life than many claim.
Once again NVIDIA had a record earnings quarter (Q3FY26), but the strength of their ongoing success will depend on many factors that may or may not be within their control. Let's explore those broader factors.
SHOW: 978
SHOW TRANSCRIPT: The Cloudcast #978 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"
SHOW SPONSORS:
[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcast
[Mailtrap] Try Mailtrap for free
[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search “Interconnected by Equinix”.
SHOW NOTES:
NVIDIA Earnings (Q3FY2026 - November 2025)
WHAT WILL BE THE NEW METRICS AND MILESTONES TO TRACK?
Customer Revenues (e.g. CoreWeave, OpenAI)
“Alternatives” Revenues (e.g. Google/TPUs, AMD, China, etc.)
Customer Success Stories (%ROI, Business Differentiation, Business Acceleration)
Growth of Data Centers (e.g. buildouts, zoning approvals, etc.)
Electricity Buildouts (e.g. nuclear, coal, alternative, regulatory changes, municipality adoption)
Accounting Deep-Dives into NVIDIA (not fraud, but days receivables, inventory buybacks, etc.)
$500B in back orders (Oracle, Microsoft, OpenAI, GrokAI)
FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
Is the AI trade cooling off, or is this the calm before an even more explosive phase of the build-out? Zaid sits down with Doug O'Loughlin, President at SemiAnalysis, to break down Nvidia's earnings report, why AI spending might actually accelerate from here, how Google's Gemini 3 and in-house TPUs have suddenly shifted the competitive landscape, and why founders like Zuck are willing to pour billions into GPUs even as investors panic. They dive into the real useful life of Nvidia chips, whether OpenAI's shine is fading, how rate cuts fuel the “AI party,” and what happens when hyperscalers start borrowing real money to ramp capex. Plus: why Oracle might actually be betting the company, and Doug's call on when OpenAI could IPO.
Register here to join Founder University Japan's kickoff: https://luma.com/cm0x90mk
Today's show:
Google and Meta had their cases dismissed (or received a slap on the wrist)… Despite all the backlash and cynicism, AI companies continue making bank and releasing hot new products… What does it all mean? For Jason Calacanis, the signs point to a “major M&A moment,” with huge opportunities for increased efficiency and consolidation among America's favorite brands and largest companies. Who will it be? Join Jason and Alex for a round of hot speculation.
PLUS: why Jason thinks Michael Burry is both right and wrong about GPU depreciation, why NOTHING is certain about these OpenAI mega-deals, Google's Nano Banana Pro can make infographics and they're VERY impressive… and much more.
Timestamps:
(1:54) Jason's calling in from Vegas… He's doing a hot lap at F1!
(3:18) How restaurants are becoming the new Hot IP
(6:50) Founder University is heading to TOKYO!
(9:27) Why Jason thinks the future of startups is truly global
(10:06) Pipedrive - Bring your entire sales process into one elegant space. Get started with a 30 day free trial at pipedrive.com/twist
(11:39) Nvidia killed it on the numbers… but what are the vibes around AI? Jason sounds off.
(13:05) Why nothing is certain when it comes to the Nvidia/OpenAI deal
(19:40) Is Google now WINNING consumer adoption of AI? How did it get this close?
(19:57) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(26:07) Meanwhile, AI apps are still dominating the iOS Store
(27:09) Why Jason and Alex think Michael Burry's both right and wrong about GPU depreciation
(30:13) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes.
Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(37:46) We're testing out Nano Banana Pro on a BBQ infographic challenge
(43:42) What a week for AI models! It doesn't seem like things are slowing down…
(46:12) Kalshi is growing fast, but can it catch Polymarket?
(47:50) Is a rate cut coming? Jason and Alex read the tea leaves.
(50:13) Why Jason predicts a “major M&A moment” in the next six months
(52:09) VIEWER QUESTION: What should a software engineer be working on RIGHT NOW?
(54:02) Founder Friday is now… STARTUP SUPPER CLUB
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon:
X: https://x.com/lons
Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
(10:06) Pipedrive - Bring your entire sales process into one elegant space. Get started with a 30 day free trial at pipedrive.com/twist
(19:57) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(30:13) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes.
Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
ZKsync Airbender proves L1 blocks using two 5090 GPUs. Layerswap discloses a bridge bug. Trails introduces its Universal Intents Platform. And Sigma Prime releases Lighthouse hotfix v8.0.1. Read more: https://ethdaily.io/829 Sponsor: Arkiv is an Ethereum-aligned data layer for Web3. Arkiv brings the familiar concept of a traditional Web2 database into the Web3 ecosystem. Find out more at Arkiv.network Content is for informational purposes only, not endorsement or investment advice. The accuracy of information is not guaranteed.
In a world of Rust, Go, and Python, why does C++ still matter? Dr. Gabriel Dos Reis joins Scott to explain how C++ continues to shape everything from GPUs and browsers to AI infrastructure. They talk about performance, predictability, and the art of balancing power with safety...and how the language's constant evolution keeps it relevant four decades in.
The promise of agentic AI has been massive: autonomous systems that act, reason, and make business decisions. Yet most enterprises are still struggling to see results. In this episode, host Chris Brandt sits down with Sumeet Arora, Chief Product Officer at Teradata, to unpack why the gap exists between AI hype and actual impact, and what it takes to make AI scalable, explainable, and ROI-driven. From the shift toward “AI with ROI” to the new era of human + AI systems and data quality challenges, Sumeet shares how leading enterprises are moving from flashy demos to measurable value and trust in the next phase of AI.
CHAPTER MARKERS
00:00 The AI Hackathon Era
03:10 Hype vs Reality in Agentic AI
06:05 Redesigning the Human AI Interface
09:15 From Demos to Real Economic Outcomes
12:20 Why Scaling AI Still Fails
15:05 The Importance of AI Ready Knowledge
18:10 Data Quality and the Biggest Bottleneck
20:46 Building the Customer 360 Knowledge Layer
23:35 Push vs Pull Systems in Modern AI
26:15 Rethinking Enterprise Workflows
29:20 AI Agents and Outcome Driven Design
32:45 Where Agentic AI Works Today
36:10 What Enterprises Still Get Wrong
39:30 How AI Changes Engineering Priorities
55:49 The Future of GPUs and Efficiency Challenges
--
This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.
---
IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
What happens when a former NHL player who once faced Wayne Gretzky ends up running a global data company that sits at the center of the AI boom? That question kept coming back to me as I reconnected with Mike McKee, the CEO of Ataccama, seven years after our last conversation. So much has shifted in the world since then, yet the theme that shaped this discussion felt surprisingly grounded. None of the big promises of AI can take hold unless leaders can rely on the data sitting underneath every system they run. Mike brings a rare mix of stories and experience to this theme. His journey from the ice to the C-suite feels like its own lesson in discipline, teamwork, and patience, and he openly reflects on the way those early years influence how he leads today. But the heart of this conversation sits in the reality he sees inside global enterprises. Everyone is racing to build AI-powered services, yet the biggest blockers are messy records, inconsistent metadata, long-forgotten databases, and years of quality issues that were never addressed. It is a blunt problem, and Mike explains why the companies winning with AI right now are the ones treating data trust as a foundation rather than an afterthought. Across the discussion, he shares stories from organisations like T-Mobile and Prudential, where millions of records, thousands of systems, and vast volumes of structured and unstructured data must be monitored, understood, and governed in real time. Mike walks through how teams build confidence in their data again, why quality scores matter, and how automation now shapes everything from compliance to customer retention. What stood out most is how quickly the expectations have shifted. Boards and CEOs now treat data as a strategic asset rather than an operational chore, and entire roles have emerged above the chief data officer to steer these programmes. This episode is also a reminder that AI progress is never only about models or GPUs.
Mike pulls back the curtain on why organisations struggle to measure AI readiness, how they can avoid bottlenecks, and what it takes to prioritise the work that actually moves the needle. His point is simple. Without trustworthy data, AI remains a promise rather than a practical tool. With it, businesses can act with confidence, respond faster, and make decisions that genuinely improve outcomes for customers and employees. So as AI reaches deeper into systems everywhere, how should leaders rethink their approach to data trust, governance, and quality? And if you have been on your own journey with data challenges, where have you seen progress and where are you still stuck? I would love to hear your thoughts. Tech Talks Daily is Sponsored by NordLayer: Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.
Nvidia (NVDA) powered a strong after-hours rally for the A.I. trade, thanks to a notable beat on EPS and revenue. The company also issued guidance for the current quarter above Wall Street estimates. Marley Kayden, Sam Vadas, and George Tsilis take investors through the eye-watering numbers of Nvidia's market-moving report, from data center revenue, to expectations for Blackwell and cloud GPUs, to the outlook for China sales.
======== Schwab Network ========
Empowering every investor and trader, every market day.
Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6D
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – https://twitter.com/schwabnetwork
Follow us on Facebook – https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about
Keith Gangl says Nvidia (NVDA) "checked all the boxes" it needed for its earnings report, calling the numbers "fantastic." He points to Nvidia's "sold-out" Blackwell and cloud GPUs as showing no slowdown in A.I. demand, even if the trade is "overheated." Keith adds that competitors like AMD Inc. (AMD) and Broadcom (AVGO) also have plenty of room to grow alongside Nvidia.
On this episode of the Alpha Exchange, I'm pleased to welcome back Jordi Visser, CEO of Visser Labs and Head of AI Macro Research at 22V. Our conversation centers on one of the most consequential themes in markets today: the intersection of artificial intelligence, exponential innovation, and market structure. With Nvidia's historic rise as a backdrop and AI's increasing integration into every sector, Jordi pushes back on the tendency to label this cycle a “bubble,” arguing that AI is more akin to electricity — an enabling technology whose applications will permeate everyday life. Demand for compute remains effectively infinite, he notes, and the supply shortfalls in GPUs, data centers, and power capacity shape how investors should think about the buildout phase. Jordi also lays out a framework for navigating volatility in sectors tied to AI buildout — including how to handle 20–30% drawdowns — and why estimate revisions matter more than multiple expansion from here. Beyond markets, we explore the labor dynamics of exponential technology: the K-shaped economy, margin pressure at retailers, and why he believes labor participation will keep drifting lower even without mass layoffs. Finally, we examine the policy environment. Here Jordi asserts that the Fed's framework is backward looking and misses how humanoids, robotaxis, and accelerated drug discovery may drive deflationary pressures. I hope you enjoy this episode of the Alpha Exchange, my conversation with Jordi Visser.
In this episode, we welcome Lead Principal Technologist Hari Kannan to cut through the noise and tackle some of the biggest myths surrounding AI data management and the revolutionary FlashBlade//EXA platform. With GPU shipments now outstripping CPUs, the foundation of modern AI is shifting, and legacy storage architectures are struggling to keep up. Hari dives into the implications of this massive GPU consumption, setting the stage for why a new approach is desperately needed for companies driving serious AI initiatives. Hari dismantles three critical myths that hold IT leaders back. First, he discusses how traditional storage is ill-equipped for modern AI's millions of small, concurrent files, where metadata performance is the true bottleneck—a problem FlashBlade//EXA solves with its metadata-data separation and single namespace. Second, he addresses the outdated notion that high-performance AI is file-only, highlighting FlashBlade//EXA's unified, uncompromising delivery of both file and object storage at exabyte scale and peak efficiency. Finally, Hari explains that GPUs are only as good as the data they consume, countering the belief that only raw horsepower matters. FlashBlade//EXA addresses this by delivering reliable, scalable throughput, efficient DirectFlash Modules up to 300 TB, and the metadata performance required to keep expensive GPUs fully utilized and models training faster. Join us as we explore the blind spots in current AI data strategies during our "Hot Takes" segment and recount a favorite FlashBlade success story. Hari closes with a compelling summary of how Pure Storage's complete portfolio is perfectly suited to provide the complementary data management essential for scaling AI. Tune in to discover why FlashBlade//EXA is the non-compromise, exabyte-scale solution built to keep your AI infrastructure running at its full potential. 
For more information, visit: https://www.pure.ai/flashblade-exa.html
Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/
00:00 Intro and Welcome
04:30 Primer on FlashBlade
11:32 Stat of the Episode on GPU Shipments
13:25 What is FlashBlade//EXA
18:58 Myth #1: Traditional Storage Challenges for AI Data
22:01 Myth #2: AI Workloads are not just File-based
26:42 Myth #3: AI Needs more than just GPUs
31:35 Hot Takes Segment
Neal Stephenson—legendary sci-fi author who coined "metaverse" in his 1992 novel Snow Crash—and Rebecca Barkin, co-founder of Lamina1, return to the AI XR Podcast for a wide-ranging conversation about building a decentralized creator economy, launching their dystopian AI world-building project Artifact, and why blockchain might finally free creators from Big Tech's chokehold. Joined by Charlie, Ted, and Rony, the discussion spans Neal's lost Magic Leap project, the resurrection of the open metaverse dream, and how decentralized platforms could flip Hollywood's power structure on its head.
Rebecca details Lamina1's journey from blockchain currency for the open metaverse to Spaces, a multimedia creator platform built on Ethereum that allows IP owners to retain control, set royalties, and build direct relationships with fans. Think YouTube meets Discord, but on decentralized rails. The goal isn't socialism—it's a creative meritocracy where artists get equity in platforms they help build, instead of just one-time payouts while Netflix captures all the value.
Neal unpacks Artifact, Lamina1's first creative test case: a post-Singularity world where 12 competing mega-AIs fight over energy, copper, water, and GPUs while humans live in the interstices. Co-created with Weta Workshop using AI tools like World Labs' Marble splats, the project invites fans to co-create lore, not just consume it.
It's a living experiment in collaborative IP development—and proof that small teams with AI amplifiers can build Grand Theft Auto-scale worlds.
Guest Highlights
Neal Stephenson coined "metaverse" in Snow Crash; former Magic Leap creative lead with lost IP still trapped at the company.
Rebecca Barkin pivoted Lamina1 from metaverse currency to Spaces: a decentralized platform for multimedia creators retaining IP rights and earning equity.
Artifact launches as Lamina1's test case—collaborative world-building in a dystopian post-AI Singularity where fans shape the narrative.
Built on Ethereum with Consensus Network backing; uses blockchain to solve micro-transaction volatility and give creators sustainable economics.
Signed Bob's Burgers team (Ghosted Media) and other Hollywood refugees seeking autonomy from studio gatekeepers.
News Highlights
Valve launches PC cube + wireless Index headset—sub-$1000 system to compete with Xbox/PlayStation and revive PCVR market, but will enthusiasts bite?
Meta adds real-time computer vision to AI glasses—Ray-Ban smart glasses gain live AI interpretation, pushing toward inflection point for wearables.
Google Maps integrates Gemini AI—natural language directions and real-world context awareness transform navigation into conversational copilot.
11 Labs launches voice marketplace—Michael Caine licenses voice cloning; Matthew McConaughey invests but won't sell his own likeness.
Disney announces AI user-generated content strategy—Bob Iger teases platforms for fans to create with Disney IP, following Lego's remix culture playbook.
Big thanks to our sponsor Zappar. Subscribe for weekly insider perspectives from veterans who aren't afraid to challenge Big Tech. New episodes every Tuesday. Watch full episodes on YouTube. Hosted on Acast. See acast.com/privacy for more information.
Is the current level of AI funding and investment rational or irrational? Is it possible that it's both at the same time? Let's look at some numbers and the thought process behind them.
SHOW: 976
SHOW TRANSCRIPT: The Cloudcast #976 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"
SHOW SPONSORS:
[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcast
[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search “Interconnected by Equinix”.
SHOW NOTES:
A whole bunch of AI-related stats
Sam Altman on BG2 podcast
DO WE HAVE ANY IDEA HOW TO MEASURE THE IMPACT OF AI?
How much is one model better than another (e.g. Gemini vs. CoPilot)?
How much improvement should a software developer get?
How much improvement should a knowledge worker get?
How much cost savings should a chatbot provide?
How long should it take to make a model understand a company's data?
How many workers can a company displace with AI?
OpenAI in 2030 - 26 gigawatts could power between 3.7 million and 17.3 million modern GPU servers
OpenAI in 2035 - 50 gigawatts could power between 37 million and 173 million modern GPU servers
FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
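As a back-of-envelope check, the 26-gigawatt range above is consistent with assuming each modern GPU server draws somewhere between roughly 1.5 kW and 7 kW; the wattage band in this sketch is our own illustrative assumption, not a figure from the episode:

```python
# Back-of-envelope: how many GPU servers a given power budget could feed.
# The 1.5 kW / 7 kW per-server draws are illustrative assumptions spanning
# lighter and denser modern GPU server configurations.

def servers_millions(gigawatts: float, watts_per_server: float) -> float:
    """Millions of servers a power budget could supply at a given draw."""
    return gigawatts * 1e9 / watts_per_server / 1e6

dense = servers_millions(26, 7_000)   # ~7 kW dense servers
light = servers_millions(26, 1_500)   # ~1.5 kW lighter servers
print(f"26 GW -> {dense:.1f}M to {light:.1f}M servers")  # 3.7M to 17.3M
```

Note this simple division ignores cooling and facility overhead (PUE), which would reduce the usable server count further.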
Live from Morgan Stanley's European Tech, Media and Telecom conference in Barcelona, our roundtable of analysts discuss artificial intelligence in Europe, and how the region could enable the Agentic AI wave. Read more insights from Morgan Stanley.
----- Transcript -----
Paul Walsh: Welcome to Thoughts on the Market. I'm Paul Walsh, Morgan Stanley's European head of research product. We are bringing you a special episode today live from Morgan Stanley's 25th European TMT Conference, currently underway. The central theme we're focused on: Can Europe keep up from a technology development perspective? It's Wednesday, November the 12th at 8:00 AM in Barcelona. Earlier this morning I was live on stage with my colleagues: Adam Wood, Head of European Technology and Payments; Emmet Kelly, Head of European Telco and Data Centers; and Lee Simpson, Head of European Technology Hardware. The larger context of our conversation was tech diffusion, one of our four key themes that we've identified at Morgan Stanley Research for 2025. For the panel, we wanted to focus further on agentic AI in Europe, AI disruption as well as adoption, and data centers. We started off with my question to Adam: I asked him to frame our conversation around how Europe is enabling the Agentic AI wave.
Adam Wood: I mean, I think obviously the debate around GenAI, and particularly enterprise software, my space, has changed quite a lot over the last three to four months. Maybe it's good if we do go back a little bit to the period before that – when everything was more positive in the world. And I think it is important to think about, you know, why we were excited, before we started to debate the outcomes. And the reason we were excited was we've obviously done a lot of work with enterprise software to automate business processes. That's ultimately what software is about: automating and standardizing business processes so they can be done more efficiently and more repeatably.
We'd done work in the past on RPA vendors who tried to take the automation further. And we were getting numbers that, you know, 30 – 40 percent of enterprise processes had been automated in this way. But I think the feeling was it was still the minority. And the reason for that was it was quite difficult with traditional coding techniques to go a lot further. You know, if you take the call center as a classic example, it's very difficult to code what every response is going to be to human interaction with a call center worker. It's practically impossible. And so, you know, what we did for a long time was, where we got into those situations where it was difficult to code every outcome, we'd leave it with labor. And we'd do the labor arbitrage often, where we'd move from onshore workers to offshore workers, but we'd still leave it as a relatively manual process with human intervention in it. I think the really exciting thing about GenAI is it completely transforms that equation, because if the computers can understand natural human language, again to our call center example, we can train the models on every call center interaction. And then first of all, we can help the call center worker predict what the responses are going to be to incoming queries. And then maybe over time we can even automate that role. I think it goes a lot further than, you know, call center workers. We can go into finance, where a lot of work is still either manual data re-entry or a remediation of errors. And again, we can automate a lot more of those tasks. That's obviously where SAP's involved. But basically what I'm trying to say is if we expand massively the capabilities of what software can automate, surely that has to be good for the software sector; that has to expand the addressable markets of what software companies are going to be able to do. Now we can have a secondary debate around: Is it going to be the incumbents, is it going to be corporates that do more themselves?
Is it going to be new entrants that that benefit from this? But I think it's very hard to argue that if you expand dramatically the capabilities of what software can do, you don't get a benefit from that in the sector. Now we're a little bit more consumer today in terms of spending, and the enterprises are lagging a little bit. But I think for us, that's just a question of timing. And we think we'll see that come through.I'll leave it there. But I think there's lots of opportunities in software. We're probably yet to see them come through in numbers, but that shouldn't mean we get, you know, kind of, we don't think they're going to happen. Paul Walsh: Yeah. We're going to talk separately about AI disruption as we go through this morning's discussion. But what's the pushback you get, Adam, to this notion of, you know, the addressable market expanding? Adam Wood: It's one of a number of things. It's that… And we get onto the kind of the multiple bear cases that come up on enterprise software. It would be some combination of, well, if coding becomes dramatically cheaper and we can set up, you know, user interfaces on the fly in the morning, that can query data sets; and we can access those data sets almost in an automated way. Well, maybe companies just do this themselves and we move from a world where we've been outsourcing software to third party software vendors; we do more of it in-house. That would be one. The other one would be the barriers to entry of software have just come down dramatically. It's so much easier to write the code, to build a software company and to get out into the market. That it's going to be new entrants that challenge the incumbents. And that will just bring price pressure on the whole market and bring… So, although what we automate gets bigger, the price we charge to do it comes down. The third one would be the seat-based pricing issue that a lot of software vendors to date have expressed the value they deliver to customers through. 
How many seats of the software do you have in-house? Well, if we take out 10 to 20 percent of your HR department because we make them 10, 20, 30 percent more efficient, does that mean we pay the software vendor 10, 20, 30 percent less? And so again, we're delivering more value, we're automating more and making companies more efficient. But the value doesn't accrue to the software vendors. It's some combination of those themes, I think, that people would worry about. Paul Walsh: And Lee, let's bring you into the conversation here as well, because around this theme of enabling the agentic AI wave, we sort of identified three main enabler sectors. Obviously, Adam's with the software side. Cap goods being the other one that we mentioned in the work that we've done. But obviously semis is also an important piece of this puzzle. Walk us through your thoughts, please. Lee Simpson: Sure. I think from a sort of hardware perspective – and really we're talking about semiconductors here, and possibly even just the equipment guys, specifically when seeing things through a European lens – it's been a bonanza. We've seen quite a big build out, obviously, for GPUs. We've seen incredible new server architectures going into the cloud. And now we're at the point where we're changing things a little bit. Does the power architecture need to be changed? Does the nature of the compute need to change? And with that, the development and the supply need to move as well. So, we're now seeing the mantle being picked up by the AI guys at the very leading edge of logic. Someone has to put the equipment in the ground, and the equipment guys are being leaned into. And you're starting to see that change in the order book now. Now, I labor this point largely because, you know, we'd been seen as laggards, frankly, in the last couple of years. It'd been a U.S. story, a GPU-heavy story.
But I think for us now we're starting to see a flipping of that, and it's like, hold on, these are beneficiaries. And I really think it's because that bow wave has changed in logic. Paul Walsh: And Lee, you talked there in your opening remarks about the extent to which obviously the focus has been predominantly on the U.S. ways to play, which is totally understandable for global investors. And obviously this has been an extraordinary year of ups and downs as it relates to the tech space. What's your sense in terms of what you are getting back from clients? Is the focus maybe shifting from some of those U.S. ways to play to Europe? Are you sensing that shift taking place? How are clients interacting with you as it relates to the focus between the opportunities in the U.S. and Asia, frankly, versus Europe? Lee Simpson: Yeah. I mean, Europe's coming more into the debate. It's more that people are willing to talk to some of the players. We've got other players in the analog space playing into that as well. But I think for me, if we take a step back and keep this at the global level, there's a huge debate now around: What is the size of the build out that we need for AI? What is the nature of the compute? What is the power pool? What are the power budgets going to look like in data centers? And Emmet will talk to that as well. So, some of that argument's coming now and centering on Europe – how do they play into this? But for me, most of what we're finding people debate about is: Is a 20-25 gigawatt year feasible for [20]27? Is a 30-35 gigawatt year feasible for [20]28? And so, I think that's the debate line at this point – not so much Europe in the debate; it's more, what is that global pool going to look like? Paul Walsh: Yeah. This whole infrastructure rollout's got significant implications for your coverage universe… Lee Simpson: It does. Yeah.
Paul Walsh: Emmet, it may be a bit tangential for the telco space, but was there anything you wanted to add there as it relates to this sort of agentic wave piece from a telco's perspective? Emmet Kelly: Yeah, there's a consensus view out there that telcos are not really that tuned into the AI wave at the moment – just from a stock market perspective. I think it's fair to say some telcos have been a source of funds for AI, and we've seen that in a stock market context, especially in the U.S. telco space versus U.S. tech over the last three to six months. So, there are a lot of question marks about the telco exposure to AI. And I think the telcos have kind of struggled to put their case forward about how they can benefit from AI. They talked 18 months ago about using chatbots. They talked about smart networks, et cetera. But they haven't really advanced their case since then. And we don't see telcos involved much in the data center space. And that's understandable, because investing in data centers, as we've written, is extremely expensive. So, if I rewind the clock two years, a good-sized data center was 1 megawatt in size. A year ago, that number was somewhere around 50 to 100 megawatts. And today a big data center is a gigawatt. Now, if you want to roll out a 100 megawatt data center – which is a decent-sized data center, but not huge – that will cost roughly 3 billion euros. So, telcos have yet to really prove that they've got much positive exposure to AI. Paul Walsh: That was an edited excerpt from my conversation with Adam, Emmet and Lee. Many thanks to them for taking the time out for that discussion, and to the live audience for hearing us out. We will have a concluding episode tomorrow, where we dig into tech disruption and data center investments. So please do come back for that very topical conversation. As always, thanks for listening.
Let us know what you think about this and other episodes by leaving us a review wherever you get your podcasts. And if you enjoy Thoughts on the Market, please tell a friend or colleague to tune in today.