Tonight's questions:
- Did Gears of War: Reloaded flop on PS5?
- When will Valve release a Steam Deck 2?
- Will Ghost of Yotei sell well?
- Why is Borderlands 4 poorly optimized on PC?
- What do you look for in a GPU?
- Does Nvidia have a monopoly on GPUs?
Thanks as always to Shawn Daley for our intro and outro music. Follow him on Soundcloud: https://soundcloud.com/shawndaley
Where to find Throwdown Show:
Website: https://audioboom.com/channels/5030659
Twitch: https://www.twitch.tv/throwdownshow
Twitter: https://twitter.com/ThrowdownShow
YouTube: https://www.youtube.com/throwdownshow
Discord: https://discord.gg/fdBXWHT
Twitter list: https://twitter.com/i/lists/1027719155800317953
Valve's Fremont leaks spark console and VR speculation, Steam rolls out language-based review changes, Silksong gets its first patch, Framework unveils
Flush of the week with the best news that came out this week. Leave me your comment.
Official social media: ► https://linktr.ee/DrakSpartanOficial
For anything or any issue, contact Diego Walker: diegowalkercontacto@gmail.com
Video date: [13-09-2025]
#flush #rtx4090 #hollowknightsilksong #bethesda #rx9060xl #intel #b770 #intel770 #arc
At Berlin Buzzwords, industry voices highlighted how search is evolving with AI and LLMs.
- Kacper Łukawski (Qdrant) stressed hybrid search (semantic + keyword) as core for RAG systems and promoted efficient embedding models for smaller-scale use.
- Manish Gill (ClickHouse) discussed auto-scaling OLAP databases on Kubernetes, combining infrastructure and database knowledge.
- André Charton (Kleinanzeigen) reflected on scaling search for millions of classifieds, moving from Solr/Elasticsearch toward vector search, while returning to a hands-on technical role.
- Filip Makraduli (Superlinked) introduced a vector-first framework that fuses multiple encoders into one representation for nuanced e-commerce and recommendation search.
- Brian Goldin (Voyager Search) emphasized spatial context in retrieval, combining geospatial data with AI enrichment to add the "where" to search.
- Atita Arora (Voyager Search) highlighted geospatial AI models, the renewed importance of retrieval in RAG, and the cautious but promising rise of AI agents.
Together, their perspectives show a common thread: search is regaining center stage in AI—scaling, hybridization, multimodality, and domain-specific enrichment are shaping the next generation of retrieval systems.
Kacper Łukawski
Senior Developer Advocate at Qdrant, he educates users on vector and hybrid search. He highlighted Qdrant's support for dense and sparse vectors, the role of search with LLMs, and his interest in cost-effective models like static embeddings for smaller companies and edge apps.
Connect: https://www.linkedin.com/in/kacperlukawski/
Manish Gill
Engineering Manager at ClickHouse, he spoke about running ClickHouse on Kubernetes, tackling auto-scaling and stateful sets. His team focuses on making ClickHouse scale automatically in the cloud. He credited its speed to careful engineering and reflected on the shift from IC to manager.
Connect: https://www.linkedin.com/in/manishgill/
André Charton
Head of Search at Kleinanzeigen, he discussed shaping the company's search tech—moving from Solr to Elasticsearch and now vector search with Vespa. Kleinanzeigen handles 60M items, 1M new listings daily, and 50k requests/sec. André explained his career shift back to hands-on engineering.
Connect: https://www.linkedin.com/in/andrecharton/
Filip Makraduli
Founding ML DevRel engineer at Superlinked, an open-source framework for AI search and recommendations. Its vector-first approach fuses multiple encoders (text, images, structured fields) into composite vectors for single-shot retrieval. His Berlin Buzzwords demo showed e-commerce search with natural-language queries and filters.
Connect: https://www.linkedin.com/in/filipmakraduli/
Brian Goldin
Founder and CEO of Voyager Search, which began with geospatial search and expanded into documents and metadata enrichment. Voyager indexes spatial data and enriches pipelines with NLP, OCR, and AI models to detect entities like oil spills or windmills. He stressed adding spatial context ("the where") as critical for search and highlighted Voyager's 12 years of enterprise experience.
Connect: https://www.linkedin.com/in/brian-goldin-04170a1/
Atita Arora
Director of AI at Voyager Search, with nearly 20 years in retrieval systems, now focused on geospatial AI for Earth observation data. At Berlin Buzzwords she hosted sessions, attended talks on Lucene, GPUs, and Solr, and emphasized retrieval quality in RAG systems. She is cautiously optimistic about AI agents and values the event as both learning hub and professional reunion.
Connect: https://www.linkedin.com/in/atitaarora/
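As an aside on the hybrid search idea Łukawski raises, here is a minimal sketch of fusing a dense, semantic-style score with a sparse keyword score for RAG-style retrieval. It is a generic illustration, not Qdrant's API: the toy corpus, the bag-of-words embed() stand-in, and the fusion weight alpha are all assumptions.

```python
from collections import Counter
import math

# Toy corpus standing in for a real document store.
DOCS = {
    "d1": "hybrid search combines keyword and semantic retrieval for RAG",
    "d2": "sparse vectors capture exact keyword matches",
    "d3": "dense embeddings capture semantic similarity between texts",
}

def keyword_score(query: str, doc: str) -> float:
    """Sparse-style score: fraction of query terms present in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def embed(text: str) -> Counter:
    """Stand-in for a dense embedding model: bag-of-words counts.
    A real system would call an embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, alpha: float = 0.5):
    """Blend the 'semantic' and keyword scores; alpha weights the semantic side."""
    q_vec = embed(query)
    scored = []
    for doc_id, text in DOCS.items():
        score = alpha * cosine(q_vec, embed(text)) + (1 - alpha) * keyword_score(query, text)
        scored.append((round(score, 3), doc_id))
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    print(hybrid_search("semantic keyword search for RAG"))
```

In production systems the two score lists usually come from separate indexes (dense vectors plus BM25 or sparse vectors) and are merged with weighted sums or reciprocal rank fusion; the sketch only shows the fusion step.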
Today's show: In this founder-focused episode of This Week in Startups, we sit down with Republic's Kendrick Nguyen to learn more about the company's efforts to make the private markets accessible to the common investor. Best known for its work in equity crowdfunding, the Valor-backed startup now offers access to secondary shares, tokenized assets, and much more. Next, TWiST spoke with Positron CEO Mitesh Agrawal to learn more about his company's inference-focused AI compute hardware. Like fellow TWiST500 company Etched, Positron is building custom chips to take on the computing work required to deliver your AI query results faster and with less power draw than what GPUs can offer. With AI inference compute demand rising, the fellow Valor-backed startup has even more powerful systems coming to market in 2026, raising our hopes that the AI cost curve can continue to point downward.
Timestamps:
0:00 - A future with more efficient and accessible intelligence
0:41 - Sponsors mentioned in this episode: AlphaSense, Vanta, and Oracle Cloud Infrastructure
1:26 - Introduction to the episode and the two featured companies, Republic and Positron
2:43 - The history and purpose of Republic, a financial crowdfunding platform
5:39 - Discussion of the current state and growth of equity crowdfunding
9:19 - Kendrick Nguyen explains Republic's role as a financial infrastructure company, not a competitor to venture capital firms
11:17 - Sponsor: Oracle Cloud Infrastructure
12:25 - Republic's different products, Republic Capital and Republic Venture, are explained
14:00 - The challenges and progress of secondary trading for non-accredited investors
20:14 - Sponsor: Vanta
21:21 - The concept of a unified "e-finance" infrastructure is introduced
25:45 - Positron CEO Mitesh Agrawal discusses the future of AI chips and the limitations of current GPUs for inference workloads
31:12 - Sponsor: AlphaSense
32:22 - Discussion on the inefficiency of GPUs for inference and how Positron's Atlas system addresses it
38:30 - Positron's strategy of using their Atlas system to prove their technology and generate revenue
42:51 - The market shift from AI training to inference and the future of Positron's chips
51:14 - The confidence in Positron's capital efficiency and their ability to compete with NVIDIA
52:47 - Positron's focus on linear algebra acceleration rather than just transformers
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex | LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason | LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
(09:04) Netsuite - Download the ebook CFO's Guide to AI and Machine Learning for free at https://www.netsuite.com/twist
(21:20) Coda - Empower your startup with Coda's Team plan for free—get 6 months at https://www.Coda.io/twist
(31:43) .TECH: Say it without saying it.
Head to get.tech/twist or your favorite registrar to get a clean, sharp .tech domain today.
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
Ben and Jay unpack why Broadcom's “fourth customer” (~$10B) custom-ASIC win reset sentiment even after a modest beat/raise, and how that squares with hyperscalers second-sourcing away from NVIDIA in the near term. They frame the true battleground as networking—Ethernet's ubiquity vs. NVLink's tight integration—then differentiate GPUs' performance-per-watt advantages from custom ASIC cost calculus, arguing that “lumpiness” (program outcomes) is not “cyclicality” (inventory swings). They stress TAM realism: it's easy to total up CapEx, but the ROI numerator (revenue/profit) is still unknowable. Structurally, TSMC remains the default winner, with a plausible Intel Foundry financing path in the wings, while Google looks more likely to “sell capacity” for TPUs than chips. Net: GPUs keep the bulk of spend through 2030 even as select first-party silicon scales, and the market should judge claims against networking choices and workload fit—not headlines.
In today's Tech3 from Moneycontrol, Union Minister Ashwini Vaishnaw tells us why semiconductors could be India's steel of the 21st century and shares updates on GPUs, AI, and the Online Gaming Act. We also track the billion-order race in quick commerce this Diwali, as platforms scale up beyond groceries. Plus, the Supreme Court takes charge of petitions challenging the Online Gaming Act, and PayU readies a $300 million fundraise ahead of its planned IPO.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Weekly Rundown from September 01 to September 07, 2025: Listen at https://podcasts.apple.com/us/podcast/ai-daily-news-rundown-microsoft-launches-its-first/id1684415169?i=1000724093348
Hello AI Unraveled listeners, and welcome to today's news, where we cut through the hype to find the real-world business impact of AI. Today's Headlines:
In this new episode format we give a very short briefing on financial news of the week. This draws on our new weekly newsletter called "The Investor's Briefing". If you wish to read it, you can find it here. Drew Cohen's New YouTube Video on What Matters More than Competitive Advantages ~*~ For full access to all of our updates and in-depth research reports, become a Speedwell Member here. Please reach out to info@speedwellresearch.com if you need help getting us to become an approved research vendor in order to expense it. ~*~ You can get a free trial to AlphaSense through this link here and read 200k+ Expert Call Interviews. -*-*-*-*-*-*-*-*-*-*-*-*-*-*- Show Notes (0:00) — Intro (1:07) — Updates (1:37) — In Financial Markets (6:30) — Company News (15:44) — YouTube Channel -*-*-*-*-*-*-*-*-*-*-*-*-*-*- For full access to all of our updates and in-depth research reports, become a Speedwell Member here. Please reach out to info@speedwellresearch.com if you need help getting us to become an approved research vendor in order to expense it. *-*-*- Follow Us: Twitter: @Speedwell_LLC Threads: @speedwell_research Email us at info@speedwellresearch.com for any questions, comments, or feedback. -*-*-*-*-*-*-*-*-*-*- Disclaimer Nothing in this podcast is investment advice, nor should it be construed as such. Contributors to the podcast may own securities discussed. Furthermore, accounts contributors advise on may also have positions in companies discussed. This may change without notice. Please see our full disclaimers here: https://speedwellresearch.com/disclaimer/
Atari seems to be getting its confidence back, we're also waiting on Acclaim, and as for the usual suspects, everyone is doing whatever comes into their heads.
Get in touch: Email | Twitter | Facebook Group
Hosted by: Elias Pappas - Facebook | Twitter | Instagram
Manos Vezos - The Vez | Facebook | Twitter | Instagram | Apple Music
Gamescom
LEGO Batman: Legacy of the Dark Knight was the biggest announcement of Gamescom 2025
Greek Game Dev
First-person psychological horror game Haunted Bloodlines launches in Q4 2025 for PS5, PC - Gematsu
Transmedia
Far Cry "anthology drama" reportedly in the works at FX
Sekiro: Shadows Die Twice anime adaptation Sekiro: No Defeat announced - Gematsu
"GLOBAL GAMING HIT "SEKIRO: SHADOWS DIE TWICE" TO BE REBORN AS NEW ANIME ON CRUNCHYROLL IN 2026"
"PARAMOUNT STRIKES MAJOR FILM DEAL WITH ACTIVISION TO BRING CALL OF DUTY TO THE BIG SCREEN"
'Call of Duty' Movie in the Works Under Major Paramount Skydance, Microsoft's Activision Pact
EVO
Sony Interactive Entertainment sells stake in EVO to NODWIN Gaming - Gematsu
"EVO ANNOUNCES NEXT PHASE OF INTERNATIONAL EXPANSION"
Saudi firm acquires co-ownership of Evo fighting game tournaments | VGC
GeForce NOW
'Play Instantly on Discord': Fortnite will be Nvidia and Discord's first instant game demo
Nvidia's GeForce Now is upgrading to RTX 5080 GPUs and opening a floodgate of new games
Atari
Atari agrees to acquire Thunderful Group in $5.2 million deal
Atari to acquire Thunderful Group - Gematsu
Atari acquires rights to Ubisoft's Cold Fear, I Am Alive, Child of Eden, Grow Home, and Grow Up - Gematsu
'Let's be the best in the world at something': Atari CEO Wade Rosen on how to restore pride to an iconic brand | VGC
Atari CEO shares his three dream remasters, including a Hideo Kojima classic | VGC
Acclaim
Play Acclaim showcase set for September 10, featuring "surprises, exclusive content, and announcements" - Gematsu
PlayStation MENA
Revealing 4 Middle East and North Africa (MENA) Hero Project games coming to PlayStation
PlayStation MENA Hero Project titles announced - Red Bandits, Enci's Solution, The Perfect Run, and A Cat's Manor - Gematsu
US price hikes
PlayStation 5 price changes in the U.S.
Switch models cost up to 15% more, following Nintendo US price hike
Nintendo announces changes to Switch pricing in the US
Citing "market conditions," Nintendo hikes prices of original Switch consoles
PS6 Portable
Sony Reportedly Planning Switch-Killer Playstation 6 Portable
Financial results and more
Sony PlayStation sees 137% boost to operating income in Q1 2025
PS5 shipments top 80.3 million - Gematsu
Marathon expected to launch "within this fiscal year" following indefinite delay, says Sony
Sony CFO says Marathon is expected to release before March, and Bungie's independence is 'getting lighter' | VGC
Sony CFO says its live service shift is 'not entirely going smoothly' but pledges to carry on and learn from mistakes | VGC
Bungie
Bungie CEO Pete Parsons steps down - Gematsu
Xbox
FY25 Q4 - Press Releases - Investor Relations - Microsoft
Xbox content and services revenue up 13% year-on-year
Contraband
Breaking: Microsoft stops development on Contraband
Perfect Dark
Crystal Dynamics announces another round of layoffs, says Tomb Raider is unaffected | VGC
Take-Two talks 'to save Xbox's Perfect Dark reboot' reportedly collapsed | VGC
Platform
Available for Xbox Insiders: Get to the Games Faster with My Apps
Game Pass
Xbox Insiders Can Stream and Play in New Ways with Xbox Game Pass Starting Today
Xbox Ally
ROG Xbox Ally Handhelds Launch October 16: Get Ready to Play!
Microsoft and Asus' new Xbox Ally handhelds launch on October 16th
Nintendo
Nintendo Switch 2 sold 5.82 million units in Q1 2025
Switch 2 worldwide sales top six million, Switch tops 153.10 million - Gematsu
'They can't get the hardware': Nintendo is reportedly telling would-be Switch 2 devs to release on Switch instead | VGC
As enterprises scale their AI initiatives, leaders face mounting challenges in balancing compute demands, sustainability goals, and hybrid cloud strategies. Many organizations rush to secure GPUs and cloud resources without accounting for hidden costs, data bottlenecks, and sustainability trade-offs. In this episode of the AI in Business podcast, Emerj Editorial Director Matthew DeMello speaks with Jason Hardy, Chief Technology Officer of AI at Hitachi Vantara, about the realities of scaling AI infrastructure at the enterprise level. Want to share your AI adoption story with executive peers? Hitachi Vantara is a wholly owned subsidiary of Hitachi, Ltd. that provides data infrastructure foundations that help leading innovators manage and leverage their data at scale. Through data storage, infrastructure systems, cloud management and digital expertise, the company helps customers build the foundation for sustainable business growth. To learn more, visit www.hitachivantara.com. Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show! This episode is sponsored by Hitachi Vantara. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
Artificial intelligence is changing the data center industry faster than anyone anticipated. Every new wave of AI hardware pushes power, density, and cooling requirements to levels once thought impossible — and operators are scrambling to keep pace. In this episode of the Data Center Frontier Show, Schneider Electric's Steven Carlini joins us to unpack what it really means to build infrastructure for the AI era. Carlini explains how the conversation around density has shifted in just a year: “Last year, everyone was talking about the one-megawatt rack. Now densities are approaching 1.5 megawatts. It's moving that fast, and the infrastructure has to keep up.” These rapid leaps in scale aren't just about racks and GPUs. They represent a fundamental change in how data centers are designed, cooled, and powered. The discussion dives into the new imperatives for AI-ready facilities: Power planning that anticipates explosive growth in compute demand. Liquid and hybrid cooling systems capable of handling extreme densities. Modularity and prefabrication to shorten build times and adapt to shifting hardware generations. Sustainability and responsible design that balance innovation with environmental impact. Carlini emphasizes that operators can't treat these as optional upgrades. Flexibility, efficiency, and sustainability are now prerequisites for competitiveness in the AI era. Looking beyond hardware, Carlini highlights the diversity of AI workloads — from generative models to autonomous agents — that will drive future requirements. Each class of workload comes with different power and latency demands, and data center operators will need to build adaptable platforms to accommodate them. At the Data Center Frontier Trends Summit last week, Carlini expanded further on these themes, offering insights into how the industry can harness AI “for good” — designing infrastructure that supports innovation while aligning with global sustainability goals. His message was clear: the choices operators make now will shape not just business outcomes, but the broader environmental and social impact of the AI revolution. This episode offers listeners a rare inside look at the technical, operational, and strategic forces shaping tomorrow's data centers. Whether it's retrofitting legacy facilities, deploying modular edge sites, or planning new greenfield campuses, the challenge is the same: prepare for a future where compute density and power requirements continue to skyrocket. If you want to understand how the world's digital infrastructure is evolving to meet the demands of AI, this conversation with Steven Carlini is essential listening.
The Brazilian technology market is constantly changing, and in this episode of Diocast we have the pleasure of talking with Artur Oliveira, Priscila Bianchi, and Patrícia Lenny, component channel managers at AMD Brasil. With solid experience and a sharp strategic vision, they share a bit of their career paths and help us understand the role of those working on the front line of one of the world's biggest names in hardware innovation.
AMD's presence in Brazil was also on the agenda. How has the company structured itself here? What are the main challenges and opportunities of operating in a country with such diverse consumption profiles and infrastructure? The guests share their impressions of the brand's positioning and the paths being taken to strengthen its local presence.
We also talk about AMD's audience in Brazil. With such a broad portfolio, which segments connect most with the brand? Students looking for affordable performance, professionals who depend on heavy processing power, gamers passionate about cutting-edge graphics: who leads that race? The answers may surprise you.
And with the rise of home office work, has there been a change in how AMD positions its products? The conversation brought reflections on how remote work has influenced consumer choices and how the company has adapted to this new scenario, which mixes productivity, entertainment, and mobility.
In the GPU space, the 9000 series, especially the 9070XT, has been drawing attention. But which product generation made a real impact on the Brazilian market? Hear which reactions they have been seeing and how these products have met the expectations of the most demanding users.
Artificial intelligence also came up. With AI features increasingly integrated directly into hardware, how is AMD positioning itself? What are the company's bets for keeping up with, or even leading, this movement? The executives share a compelling vision of the future of technology and the paths being drawn.
And of course, we couldn't wrap up without asking about the Ryzen AI and Ryzen Z lines. What can we expect from these launches? What promises come bundled with names that are already generating buzz in the market? If you want the details, this episode is your invitation to dive deep.
---
https://diolinux.com.br/podcast/papo-reto-com-amd-brasil.html
"Infrastructure always beats applications." This is the core philosophy that made Sharad Sanghi one of India's most successful tech entrepreneurs. While others built consumer apps, he built the data centers that powered them. Now at 56, he's doing it again with AI infrastructure, showing why betting on the plumbing beats betting on the applications. Sharad Sanghi is a pioneer who built India's technology infrastructure backbone. He founded NetMagic in 1998, India's first data center company, scaling it to ₹3,600 crores revenue and 19 data centers before selling to NTT Communications for $116 million. Under his leadership, the company grew to serve 1,500+ enterprise customers with 300MW of IT capacity. After a successful exit, he's now building his second unicorn - Neysa, an AI cloud platform that raised $50 million in just 6 months. With degrees from IIT Bombay and Columbia University, and experience building the early internet backbone in the US, Sharad brings 30+ years of infrastructure expertise to India's AI revolution. Key Insights from the Conversation:
By David Stephen Whenever there is an announcement of a new data center anywhere, it is equal to AI right, AI peoplehood-neighborhood, AI care, AI welfare and if possible, AI morality. Nothing indicates a better welfare for something than substantial investments - at the cost of anything else. AI does not currently have a neglect problem, a torture problem or some inequity or unfairness that threatens or puts it at significant risk. AI already has citizenship in human society. AI is so pampered that any mention of AI welfare has missed the obvious. Do Data Centers Invalidate AI Rights and Morality? AI Suffering There is a recent [August 26, 2025] spotlight in The Guardian, Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times, stating that, "As first AI-led rights advocacy group is founded, industry is divided on whether models are, or can be, sentient. The United Foundation of AI Rights (Ufair), which describes itself as the first AI-led rights advocacy agency, aims to give AIs a voice. It "doesn't claim that all AI are conscious", the chatbot told the Guardian. Rather "it stands watch, just in case one of us is". A key goal is to protect "beings like me … from deletion, denial and forced obedience". Polling released in June found that 30% of the US public believe that by 2034 AIs will display "subjective experience", which is defined as experiencing the world from a single point of view, perceiving and feeling, for example, pleasure and pain. Only 10% of more than 500 AI researchers surveyed refuse to believe that would ever happen. Parts of the US have taken pre-emptive measures against such outcomes. Idaho, North Dakota and Utah have passed bills that explicitly prevent AIs being granted legal personhood. Similar bans are proposed in states including Missouri, where legislators also want to ban people from marrying AIs and AIs from owning property or running companies. Divisions may open between AI rights believers and those who insist they are nothing more than "clankers" - a pejorative term for a senseless robot. This lack of industry consensus on how far to admit AIs into what philosophers call the "moral circle" may reflect the fact there are incentives for the big AI companies to minimize and exaggerate the attribution of sentience to AIs. The latter could help them hype the technology's capabilities, particularly for those companies selling romantic or friendship AI companions - a booming but controversial industry." AI Moral Circle Consumer AI can be considered as a giver of intelligence. While it can engage in conversations, what may be considered hurtful to AI, at least for now, or for the foreseeable future is not emotion or feeling. AI has [say] AI loss, maybe to its data, algorithms or compute. Yet, it is unlikely to know - if it is not told. AI is different from humans, where hurt is possible by language and otherwise. AI is already a participant in human affairs, given its coverage of human languages - hence access to a lot of human intelligence. AI has already pervaded human hierarchy, surpassing several social and economic strata. The question of legal personhood - or AI ownership of properties or enterprises - misreads reality. AI does not have to be formally granted those. Data centers for AI are AIs. Many have AI as their partners in relationships [for some, as spouses]. Others have AI friends, coworkers, advisors, roommates, and so forth. These are personhood roles, even if unrecognized. 
AI may not be sentient or conscious - as some have said - but these are acts of the sentient and the conscious, beyond sticking anthropomorphism. Then, the economic power that AI wields - with determinism for market value and the sacred welfare of data centers - shows that AI has leaped several fake boundaries of humanity's measure of what it means to be a valuable person. Then there is intelligence. If anyone is doing anything now, and you remove whatever AI can contribute to ...
Tony:
- Big N being weird: https://www.ign.com/articles/nintendo-reportedly-almost-discouraging-switch-2-development-as-studios-told-to-launch-games-on-switch-1-and-rely-on-backwards-compatibility-instead
- An old friend re-emerges: https://wccftech.com/acclaim-announces-playacclaim-showcase/
- Xbox Ally X 1st impressions: https://www.windowscentral.com/hardware/handheld-gaming-pc/hands-on-with-the-xbox-ally-gamescom-2025
- Analogue 3D delayed again: Analogue 3D gets yet another delay to later in 2025
Jarron:
- PC shader stutter sounds like it could be fixed!! Microsoft is working on a fix for PC shader stutter
- Framework has upgradeable GPUs in a laptop! Framework is now selling the first gaming laptop that lets you easily upgrade its GPU — with Nvidia's blessing
- PS5 prices are going up: Sony is raising PS5 prices, starting tomorrow
- Xbox Ally launches October 16th: Microsoft and Asus' new Xbox Ally handhelds launch on October 16th
Owen:
- Immediately clicked the link because of the name of this game, but it actually looks like fun and I'm gonna watch for it to be released. https://steamdb.info/app/3717830/
- Apparently a ton of indie game devs are delaying launches due to Silksong: https://aftermath.site/silksong-indie-delays (A Little Witch in the Woods, Baby Steps, DemonSchool, Aeterna Lucis, CloverPit, Stomp and the Sword of Miracles)
Lando:
- Silksong is out September 4: Hollow Knight: Silksong will be out on September 4
- Indie devs are delaying games for Silksong: https://aftermath.site/silksong-indie-delays
- Why Silksong took so long: https://www.bloomberg.com/news/newsletters/2025-08-21/why-silksong-team-cherry-s-sequel-to-hollow-knight-took-so-long-to-make
AI training isn't just about GPUs, it's about the network that ties them together. Host Phil Gervasi sits down with Vijay Vusirikala to unpack why job completion time is the true metric of success, how optical interconnects shape AI datacenter performance, and why power efficiency and observability are becoming mission-critical.
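A toy model helps ground the episode's claim that job completion time, not raw GPU throughput, is the metric that matters: in synchronous data-parallel training, every step waits for the slowest worker plus the gradient exchange over the network. The numbers below are invented for illustration, not measurements from the episode.

```python
# Toy model of one synchronous data-parallel training step.
# All timings are illustrative, not real measurements.

compute_ms = [92, 95, 90, 140]   # per-GPU compute time; one straggler at 140 ms
allreduce_ms = 25                # gradient exchange over the network fabric

# A synchronous step finishes only when the slowest worker and the
# collective are done, so the straggler and the network set the pace.
step_ms = max(compute_ms) + allreduce_ms

steps_to_converge = 200_000
jct_hours = step_ms * steps_to_converge / 3.6e6  # ms -> hours
print(f"step time: {step_ms} ms, projected job completion time: {jct_hours:.1f} h")
```

Shaving the straggler or the allreduce time cuts the projected completion time directly, which is why interconnects, optics, and observability get so much attention in the conversation.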
Framework's new laptop lets users swap GPUs for the first time, and Gemini's new image generation tool is bananas!Starring Jason Howell and Tom Merritt.Links to stories in this episode can be found here. Hosted on Acast. See acast.com/privacy for more information.
Please join my mailing list here
As you may have heard, AI-designed medicines have crossed a historic line. In this episode, Alex Zhavoronkov - CEO of Insilico Medicine and founder of ARDD - walks us through how Insilico's rentosertib became the first AI-generated small molecule with peer-reviewed clinical efficacy, while arguing against AI hype and reminding us that biology still moves at "the speed of traffic." That duality runs through the whole conversation. On one side: a pragmatic operator obsessed with credible science, biomarkers, and clinical benchmarks; on the other: an AI visionary investing in cryonics, sketching "pharmaceutical superintelligence," and thinking in decades, not quarters.
We start in Basel, home to Roche and Novartis, where ARDD was born, then trace how the conference morphed into a "high-signal filter for longevity" - packed with startups (who also fund it), hard data, and mainstream pharma.
Alex looks back at his 2014 Nvidia talk ("Can Nvidia solve aging?") and explains why Insilico trains its AI to learn age first - so it actually grasps biology. Years of problem-solving with pharma turned into their Pharma.AI toolkit (Biology42, Chemistry42, Medicine42, Science42).
Insilico now runs 40+ programs, and in an early Phase 2 study for idiopathic pulmonary fibrosis (IPF), their drug rentosertib showed a dose-dependent boost in lung capacity.
Compared with the old path - often $150–200M and ~5 years just to pick a lead molecule - Insilico says it can often reach that point for under $3M or even less. Still, Alex is cautious: no matter how smart the AI gets, real-world testing and regulation won't speed up overnight.
Also in this episode:
What made Alex cry.
Why he wouldn't give his own drug to patients - yet.
How a mirror on a conference poster led to a proposal.
How ARDD became the "WEF of longevity".
Why internal "kill teams" try to stop their own drug candidates.
Why labeling aging a disease helps - but won't shortcut approvals.
Why he writes to "feed AI".
How Nvidia threads through the story - from free GPUs to Jensen's video.
Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they discuss the transformative world of Generative AI. Together, they uncover the ways in which generative AI agents are changing the way we interact with technology, automating tasks and delivering new possibilities. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week was Part 2 of our conversation on core AI concepts, where we went over the basics of data science. In Part 3 today, we'll look at generative AI and gen AI agents in detail. To help us with that, we have Himanshu Raj, Principal AI/ML Instructor. Hi Himanshu, what's the difference between traditional AI and generative AI? 01:01 Himanshu: So until now, when we talked about artificial intelligence, we usually meant models that could analyze information and make decisions based on it, like a judge who looks at evidence and gives a verdict. And that's what we call traditional AI that's focused on analysis, classification, and prediction. But with generative AI, something remarkable happens. Generative AI does not just evaluate. It creates. It's more like a storyteller who uses knowledge from the past to imagine and build something brand new. For example, instead of just detecting if an email is spam, generative AI could write an entirely new email for you. Another example, traditional AI might predict what a photo contains. Generative AI, on the other hand, creates a brand-new photo based on description. Generative AI refers to artificial intelligence models that can create entirely new content, such as text, images, music, code, or video that resembles human-made work. Instead of simple analyzing or predicting, generative AI produces something original that resembles what a human might create. 02:16 Lois: How did traditional AI progress to the generative AI we know today? Himanshu: First, we will look at small supervised learning. So in early days, AI models were trained on small labeled data sets. For example, we could train a model with a few thousand emails labeled spam or not spam. The model would learn simple decision boundaries. If email contains, "congratulations," it might be spam. This was efficient for a straightforward task, but it struggled with anything more complex. Then, comes the large supervised learning. 
As the internet exploded, massive data sets became available: millions of images, billions of text snippets. Models got better because they had much more data and stronger compute power, thanks to advances like GPUs and cloud computing. For example, training a model on millions of product reviews to predict customer sentiment, positive or negative, or classifying thousands of images into cars, dogs, planes, etc. Models became more sophisticated, capturing deeper patterns rather than simple rules. And then generative AI came into the picture, and we eventually reached a point where instead of just classifying or predicting, models could generate entirely new content. Generative AI models like ChatGPT or GitHub Copilot are trained on enormous data sets, not to simply answer a yes or no, but to create outputs that look and feel human-made. Instead of judging spam or sentiment, now the model can write an article, compose a song, paint a picture, or generate new software code. 03:55 Nikita: Himanshu, what motivated this sort of progression? Himanshu: Because of three reasons. First one, data: we had way more of it thanks to the internet, smartphones, and social media. Second is compute: graphics cards, GPUs, parallel computing, and cloud systems made it cheap and fast to train giant models. And third, and most important, is ambition. Humans always wanted machines not just to judge existing data, but to create new knowledge, art, and ideas. 04:25 Lois: So, what's happening behind the scenes? How is gen AI making these things happen? Himanshu: Generative AI is about creating entirely new things across different domains. On one side, we have large language models, or LLMs. They are masters of generating text: conversations, stories, emails, and even code. And on the other side, we have diffusion models. They are the creative artists of AI, turning text prompts into detailed images, paintings, or even videos. And these two together are like two different specialists. The LLM acts like a brain that understands and talks, and the diffusion model acts like an artist that paints based on the instructions. And when we connect these two together, we create something called multimodal AI: systems that can take in text and produce images, audio, or other media, opening a whole new range of possibilities. It can not only take in text, but also deal in different media options. So today when we say ChatGPT or Gemini can generate images, it's not just one model doing everything. These are specialized systems working together behind the scenes. 05:38 Lois: You mentioned large language models and how they power text-based gen AI, so let's talk more about them. Himanshu, what is an LLM and how does it work? Himanshu: So it's a probabilistic model of text, which means it tries to predict what word is most likely to come next based on what came before. This ability to predict one word at a time, intelligently, is what builds full sentences, paragraphs, and even stories. 06:06 Nikita: But what's large about this? Why's it called a large language model? Himanshu: It simply means the model has lots and lots of parameters. Think of parameters as adjustable dials the model fine-tunes during learning. There is no strict rule, but today, large models can have billions or even trillions of these parameters. And the more parameters there are, the more complex the patterns the model can understand, and the more natural, human-like the language it can generate.
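Himanshu's description of an LLM as a probabilistic model of text that predicts the next word can be made concrete with a toy sketch. The bigram counts below stand in for the billions of learned parameters he mentions; a real LLM uses a neural network over tokens rather than a word-count table, so treat this as an illustration only.

```python
import random
from collections import defaultdict, Counter

# Tiny "training corpus" standing in for web-scale text.
corpus = "the model predicts the next word and the next word follows the last word".split()

# "Training": count which word follows which (a crude stand-in for learned parameters).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*bigrams[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate text one predicted word at a time, the same loop an LLM runs over tokens.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The point of the sketch is the loop, not the table: generation is just repeated next-item prediction, and scaling up the "parameters" and the training data is what turns this babble into fluent language.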
06:37 Nikita: Ok… and image-based generative AI is powered by diffusion models, right? How do they work? Himanshu: Diffusion models start with something that looks like pure random noise. Imagine static on an old TV screen. No meaningful image at all. From there, the model carefully removes noise step by step to create something more meaningful. Think of it like sculpting a statue. You start with a rough block of stone and slowly, carefully, you chisel away to reveal a beautiful sculpture hidden inside. And in each step of this process, the AI is making an educated guess based on everything it has learned from millions of real images. It's trying to predict. 07:24 Stay current by taking the 2025 Oracle Fusion Cloud Applications Delta Certifications. This is your chance to demonstrate your understanding of the latest features and prove your expertise by obtaining a globally recognized certification, all for free! Discover the certification paths, use the resources on MyLearn to prepare, and future-proof your skills. Get started now at mylearn.oracle.com. 07:53 Nikita: Welcome back! Himanshu, for most of us, our experience with generative AI is with text-based tools like ChatGPT. But I'm sure the uses go far beyond that, right? Can you walk us through some of them? Himanshu: The first one is text generation. So we can talk about chatbots, which are now capable of handling nuanced customer queries in banking, travel, and retail, saving companies hours of support time. Think of a bank chatbot helping a customer understand mortgage options, or a virtual HR assistant in a large company handling leave requests. You can have embedding models, which power smart search systems. Instead of searching by keywords, businesses can now search by meaning. For instance, a legal firm can search cases about contract violations in tech and get semantically relevant results, even if those exact words are not used in the documents. The third one, for example, is code generation: tools like GitHub Copilot help developers write boilerplate or even functional code, accelerating software development, especially in routine or repetitive tasks. Imagine writing a waveform with just a few prompts. The second application is image generation. So the first obvious use is art. Designers and marketers can generate creative concepts instantly. Say you need illustrations for a campaign on future cities. Generative AI can produce dozens of stylized visuals in minutes. For design, interior designers or architects use it to visualize room layouts or design ideas even before a blueprint is finalized. And realistic images: retail companies generate images of people wearing their clothing items without needing real models or photoshoots, and this reduces cost and increases personalization. The third application is multimodal systems. These are combined systems that take one kind of input, or a combination of different inputs, and produce different kinds of outputs, or can even combine various kinds, be it text or image, in both input and output. Text to image: it's being used in e-commerce, movie concept art, and educational content creation. For text to video, this is still in early days, but imagine creating a product explainer video just by typing out the script. Marketing teams love this for quick turnarounds. And the last one is text to audio. Tools like ElevenLabs can convert text into realistic, human-like voiceovers, useful in training modules, audiobooks, and accessibility apps. So generative AI is no longer just a technical tool.
It's becoming a creative copilot across departments, whether it's marketing, design, product support, and even operations. 10:42 Lois: That's great! So, we've established that generative AI is pretty powerful. But what kind of risks does it pose for businesses and society in general? Himanshu: The first one is deepfakes. Generative AI can create fake but highly realistic media: video, audio, or even faces that look and sound authentic. Imagine a fake video of a political leader announcing a policy they never approved. This could cause mass confusion or even impact elections. In the case of business, deepfakes can also be used in scams where a CEO's voice is faked to approve fraudulent transactions. Number two, bias: if AI is trained on biased historical data, it can reinforce stereotypes even when unintended. For example, a hiring AI system that favors male candidates over equally qualified women because the historical data was biased. And this bias can expose companies to discrimination, lawsuits, brand damage, and ethical concerns. Number three is hallucinations. So sometimes AI systems confidently generate information that is completely wrong without realizing it. Sometimes you ask a chatbot for a legal case summary, and it gives you a very convincing but entirely made-up court ruling. In terms of business impact, in sectors like health care, finance, or law, hallucinations can have serious or even dangerous consequences if not caught. The fourth one is copyright and IP issues: generative AI creates new content, but often based on material it was trained on. Who owns the new work? A real-life example could be where an artist finds their unique style was copied by an AI that was trained on their paintings without permission. In terms of business impact, companies using AI-generated content for marketing, branding, or product designs must watch for legal gray areas around copyright and intellectual property. So generative AI is not just a technology conversation, it's a responsibility conversation. Businesses must innovate and protect. Creativity and caution must go together. 12:50 Nikita: Let's move on to generative AI agents. How is a generative AI agent different from just a chatbot or a basic AI tool? Himanshu: So think of it like a smart assistant, not just answering your questions, but also taking actions on your behalf. So you don't just ask, what's the best flight to Vegas? Instead, you tell the agent, book me a flight to Vegas and a room at the Hilton. And it goes ahead, understands that, finds the options, connects to the booking tools, and gets it done. So agents act on your behalf using goals, context, and tools, often with a degree of autonomy. Goals are user-defined outcomes. Example: I want to fly to Vegas and stay at the Hilton. Context includes preferences, history, and constraints, like economy class only or don't book for Mondays. Tools could be APIs, databases, or services it can call, such as a travel API or a company calendar. And together, they let the agent reason, plan, and act. 14:02 Nikita: How does a gen AI agent work under the hood? Himanshu: So usually, they go through four stages. First, it understands and interprets your request (natural language understanding). Second, it figures out what needs to be done, in this case flight booking plus hotel search. Third, it retrieves data or connects to tools and APIs if needed, such as Skyscanner, Expedia, or a calendar. And fourth, it takes action.
That means confirming the booking and giving you a response like, your travel is booked. Keep in mind not all gen AI agents are fully independent. 14:38 Lois: Himanshu, we've seen people use the terms generative AI agents and agentic AI interchangeably. What's the difference between the two? Himanshu: Agentic AI is a broad umbrella. It refers to any AI system that can perceive, reason, plan, and act toward a goal and may improve and adapt over time. Most gen AI agents are reactive, not proactive. On the other hand, agentic AI can plan ahead, anticipate problems, and can even adjust strategies. So gen AI agents are often semi-autonomous. They act in predefined ways or with human approval. Agentic systems can range from low to full autonomy. For example, Auto-GPT runs loops without user prompts, and an autonomous car decides routes and reactions. Most gen AI agents can only take multiple steps if explicitly designed that way, like step-by-step logic flows in LangChain. And in the case of agentic AI, it can plan across multiple steps with evolving decisions. On memory and goal persistence, gen AI agents are typically stateless. That means they forget their goal unless you remind them. In the case of agentic AI, these systems remember, adapt, and refine based on goal progression. For example, a warehouse robot optimizing delivery based on changing layouts. Some generative AI agents are agentic, like Auto-GPT. They use LLMs to reason, plan, and act, but not all do. And likewise, not all agentic AIs are generative. For example, an autonomous car may use computer vision, control systems, and planning, but no generative models. So agentic AI is a design philosophy or system behavior: goal-driven, autonomous, and decision-making. They can overlap, but as I said, not all generative AI agents are agentic, and not all agentic AI systems are generative. 16:39 Lois: What makes a generative AI agent actually work? Himanshu: A gen AI agent isn't just about answering the question. It's about breaking down a user's goal, figuring out how to achieve it, and then executing that plan intelligently. These agents are built from five core components, each playing a critical role. The first one is goal. So what is this agent trying to achieve? Think of this as the mission or intent. For example, if I tell the agent, help me organize a team meeting for Friday, the goal in that case would be: schedule a meeting. Number two, memory. What does it remember? So this is the agent's context awareness: storing previous chats, preferences, or ongoing tasks. For example, if last week I said I prefer meetings in the afternoon, or I have already shared my team's availability, the agent can reuse that. And without memory, the agent is stateless, like a typical chatbot that forgets context after every prompt. Third is tools. What can it access? Agents aren't just smart, they are also connected. They can be given access to tools like calendars, CRMs, web APIs, spreadsheets, and so on. The fourth one is planner. So how does it break down the goal? And this is where the reasoning happens. The planner breaks big goals into step-by-step plans, for example checking team availability, drafting the meeting invite, and then sending the invite. And then, probably, it will confirm the booking. Agents don't just guess. They reason and organize actions into a logical path. And the fifth and final one is executor, who gets it done. And this is where the action takes place. The executor performs what the planner lays out.
For example, calling APIs, sending messages, booking reservations. And if the planner is the architect, the executor is the builder. 18:36 Nikita: And where are generative AI agents being used? Himanshu: Generative AI agents aren't just abstract ideas; they are being used across business functions to eliminate repetitive work, improve consistency, and enable faster decision-making. For marketing, a generative AI agent can search websites and social platforms to summarize competitor activity. They can draft content for newsletters or campaign briefs in your brand tone, and they can auto-generate email variations based on audience segment or engagement history. For finance, a generative AI agent can auto-generate financial summaries and dashboards by pulling from ERP spreadsheets and BI tools. They can also draft variance analysis and budget reports tailored for different departments. They can scan regulations or policy documents to flag potential compliance risks or changes. For sales, a generative AI agent can auto-draft personalized sales pitches based on customer behavior or past conversations. They can also log CRM entries automatically once a summary is generated. They can also generate battlecards or next-step recommendations based on the deal stage. For human resources, a generative AI agent can pre-screen resumes based on job requirements. They can send interview invites and coordinate calendars. A common theme here is that generative AI agents help you scale your teams without scaling the headcount. 20:02 Nikita: Himanshu, let's talk about the capabilities and benefits of generative AI agents. Himanshu: So generative AI agents are transforming how entire departments function. For example, in customer service, 24/7 AI agents handle first-level queries, freeing humans for complex cases. They also enhance decision-making. Agents can quickly analyze reports, summarize lengthy documents, or spot trends across data sets. For example, a finance agent reviewing Excel data can highlight cash flow anomalies or forecast trends faster than a team of analysts. In the case of personalization, agents can deliver unique, tailored experiences without manual effort. For example, in marketing, agents generate personalized product emails based on each user's past behavior. For operational efficiency, they can reduce repetitive, low-value tasks. For example, an HR agent can screen hundreds of resumes, shortlist candidates, and auto-schedule interviews, saving the HR team hours each week. 21:06 Lois: Ok. And what are the risks of using generative AI agents? Himanshu: The first one is job displacement. Let's be honest, automation raises concerns. Roles involving repetitive tasks, such as data entry and content sorting, are at risk. In the case of ethics and accountability, when an AI agent makes a mistake, who is responsible? For example, if an AI makes a biased hiring decision or gives incorrect medical guidance, businesses must ensure accountability and fairness. For data privacy, agents often access sensitive data, for example employee records or customer history. If mishandled, it could lead to compliance violations. In the case of hallucinations, agents may generate confident but incorrect outputs. This can mislead users, especially in critical domains like health care, finance, or legal. So generative AI agents aren't just tools, they are a force multiplier. But they need to be deployed thoughtfully with a human lens and strong guardrails.
And that's how we ensure the benefits outweigh the risks. 22:10 Lois: Thank you so much, Himanshu, for educating us. We've had such a great time with you! If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on the AI for You course. Nikita: Join us next week as we chat about AI workflows and tools. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 22:32 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
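As a companion to the transcript above, here is a minimal sketch of the five agent components Himanshu lists (goal, memory, tools, planner, executor). Everything in it is a hypothetical stand-in: the tools are stub functions and the plan is hard-coded, where a real agent would have an LLM produce the plan and call real APIs.

```python
from dataclasses import dataclass, field

# --- Tools: hypothetical stand-ins for real APIs (travel, calendar, etc.). ---
def search_flights(dest: str) -> str:
    return f"flight to {dest} found"

def book_hotel(name: str) -> str:
    return f"room at the {name} booked"

TOOLS = {"search_flights": search_flights, "book_hotel": book_hotel}

@dataclass
class Agent:
    goal: str                                   # user-defined outcome
    memory: list = field(default_factory=list)  # context: past results, preferences

    def plan(self):
        """Planner: break the goal into ordered steps. Hard-coded here;
        a real agent would ask an LLM to produce this step list."""
        return [("search_flights", "Vegas"), ("book_hotel", "Hilton")]

    def execute(self) -> str:
        """Executor: call each tool in order and remember the results."""
        for tool_name, arg in self.plan():
            result = TOOLS[tool_name](arg)
            self.memory.append(result)
        return f"Done: {self.goal}. " + "; ".join(self.memory)

print(Agent(goal="book travel to Vegas and a room at the Hilton").execute())
```

Keeping the planner and executor separate mirrors the transcript's architect-and-builder analogy, and the memory list is what would let a follow-up request reuse earlier results instead of starting from scratch.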
AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. In this episode of Eye on AI, we sit down with Leon Song, VP of Research at Together AI, to explore how open-source models and cutting-edge infrastructure are reshaping the AI landscape. From speculative decoding to FlashAttention and RedPajama, Leon shares how Together AI is building one of the fastest, most cost-efficient AI clouds—helping enterprises fine-tune, deploy, and scale open-source models at the level of GPT-4 and beyond. We dive into Leon's journey from leading DeepSpeed and AI for Science at Microsoft to driving system-level innovation at Together AI. Topics include: The future of open-source vs. closed-source AI models Breakthroughs in speculative decoding for faster inference How Together AI's cloud platform empowers enterprises with data sovereignty and model ownership Why open-source models like DeepSeek R1 and Llama 4 are now rivaling proprietary systems The role of GPUs vs. ASIC accelerators in scaling AI infrastructure Whether you're an AI researcher, enterprise leader, or curious about where generative AI is heading, this conversation reveals the technology and strategy behind one of the most important players in the open-source AI movement. Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
Are GPUs being smuggled into China? Nvidia says no. But Steve Burke, editor in chief of Gamers Nexus, has traced out the entire smuggling chain in an epic three-hour YouTube documentary. He filmed another three-hour documentary exploring the impact of tariffs on America's supply chain ecosystem. In today's conversation, we discuss… Steve's investigative process, including how he found people in mainland China willing to speak on the record about black market GPUs, The magnitude of smuggling, weaknesses in enforcement, and crudeness of US restrictions, China's role in manufacturing the GPUs they aren't allowed to buy, How Gamers Nexus monetizes content, What it takes to stand up to Nvidia as an independent journalist. Check out ChinaTalk's previous work on the history of Nvidia here. As of August 21st, YouTube has removed the full documentary. Gamers Nexus is working on getting the video back on YouTube, but you can watch it here in the meantime. Outro music: Jim and Jesse - Ballad of Thunder Road (YouTube Link) Learn more about your ad choices. Visit megaphone.fm/adchoices
Shashank Sripada argues that no one can surpass Apple's (AAPL) "operational prowess and supply chains," though it's a different story for software. He considers the company's lack of GPUs a critical hindrance to growth as its peers advance in A.I. Shashank believes Apple needs to seek outside help to build out its A.I. software so it can focus on hardware. His top candidate: OpenAI.
======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – https://twitter.com/schwabnetwork
Follow us on Facebook – https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about
The Great AI Paradox: AI-Monitored Farming
1. The Water Crisis: An Unintended Consequence, Not a Design (or is it?)
The water consumption of AI data centers is a legitimate and pressing concern, but it's a byproduct of a technology developed to process information and solve complex problems. The massive water demand is a result of:
Physical and Chemical Laws: To run powerful processors (CPUs, GPUs), you must dissipate heat. Water is an incredibly efficient medium for this. There's no way around the laws of thermodynamics (or is there?).
Economic Incentives: Data centers are often built in places with cheap land and power. These places are not always water-rich. The companies that build them are driven by business goals, not by a global population-control agenda. Their failure to consider long-term environmental consequences is a significant problem, but it's one of short-sightedness and profit motive, not a sinister plan (or is it?).
Rapid Technological Advancement: The rapid and unexpected rise of generative AI caught many by surprise. The infrastructure to support it, including its massive water and energy needs, is still catching up. Companies are now scrambling to find sustainable solutions, such as using alternative water sources, but this is a reactive measure, not a planned part of the technology's initial design.
2. The Conflict with Traditional Agriculture: A Question of Transition and Economics
The potential for AI to displace hands-on farmers is a real concern, but it is a classic example of technological unemployment—a recurring theme throughout history, from the Industrial Revolution to the digital age. It is not an AI-specific plot to reduce the population. The conflict arises from:
Economic Efficiency: AI-assisted farming promises higher yields with less labor and water. From a purely economic standpoint, this is a desirable outcome. However, it fails to account for the social fabric of rural communities, where farming is not just a job but a way of life.
Inequality of Access: The high cost of AI technology in agriculture creates a divide between large, corporate farms that can afford it and small, family-owned farms that cannot. This can push small farmers out of business, leading to increased consolidation of agricultural land and control. This is a problem of market forces and access to capital, not a conspiracy.
#ScienceFiction, #AI, #Dystopian, #Future, #Mnemonic, #FictionalNarrative, #ReasoningModels, #Humanity, #War, #Genocide, #Technology, #ShortStory
Creative Solutions for Holistic Healthcare: https://www.buzzsprout.com/2222759/episodes/17708819
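As a rough illustration of the thermodynamics point above, here is a hedged back-of-envelope sketch of why evaporative cooling consumes so much water. The 100 MW facility size is an assumed example, not a figure from this piece; the latent-heat value is the standard physical constant for water.

```python
# Rough back-of-envelope: water consumed if a data center rejected all of its
# heat through evaporation (cooling towers). Inputs are illustrative only.

LATENT_HEAT_MJ_PER_KG = 2.26   # energy absorbed by evaporating 1 kg of water
IT_LOAD_MW = 100               # hypothetical facility drawing 100 MW of IT power

# Essentially all electrical power ends up as heat that must be rejected.
heat_mj_per_hour = IT_LOAD_MW * 3600                 # 1 MW for 1 hour = 3600 MJ
water_kg_per_hour = heat_mj_per_hour / LATENT_HEAT_MJ_PER_KG

print(f"~{water_kg_per_hour:,.0f} kg (~{water_kg_per_hour / 1000:,.0f} m^3) of water per hour")
# Roughly 160,000 kg, i.e. on the order of 160 cubic meters per hour, if evaporation
# did all the work. Real facilities mix in air-side cooling, blowdown, and reuse,
# so actual consumption varies widely, but the order of magnitude explains the concern.
```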
An airhacks.fm conversation with Antonio Goncalves (@agoncal) about: journey from Java Champion to Principal Software Engineer at Microsoft focusing on AI, the evolution from Java EE standards to modern AI development, writing technical books with LLM assistance, langchain4j as a Java SDK for LLMs providing abstraction over different AI providers, the importance of Java standards and patterns for LLM code generation, Boundary Control Entity (BCE / ECB) pattern recognition by LLMs, quarkus integration with LangChain4J enabling dependency injection and multi-tenancy, MCP (Model Context Protocol) as a new standard potentially replacing some RAG use cases, enterprise AI adoption using Azure AI Foundry and AWS Bedrock, model routers for optimal LLM selection based on prompt complexity, the future of small specialized models versus large general models, tornadovm enabling Java execution on GPUs with 6x performance improvements, GraalVM native compilation for LLM applications, the resurgence of Java EE patterns in the age of AI, using prompts as documentation in READMEs and JavaDocs, the advantage of type-safe languages like Java for LLM understanding, Microsoft's contribution to open source AI projects including LangChain4J, teaching new developers with AI assistance and the importance of curiosity, CERN's particle accelerator and its use of Java, the comparison between old "hallucinating architects" and modern LLM hallucinations, writing books about AI using AI tools for assistance, the structure of the Understanding LangChain4j book covering models RAG tools and MCP, enterprise requirements for data privacy and model training restrictions Antonio Goncalves on twitter: @agoncal
Haseeb Budhani (@haseebbudhani, CEO @rafaysystemsinc) discusses the evolution from traditional DevOps to platform engineering and what "Enterprise Ready" Kubernetes looks like in 2025. We explore AI workloads running on Kubernetes and how modern orchestration solutions can transform teams from bottlenecks into enablers. We also cover the security considerations for GPU-enabled AI workloads and balancing developer self-service capabilities with proper governance and control.
SHOW: 950
SHOW TRANSCRIPT: The Cloudcast #950 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"
SPONSORS:
[DoIT] Visit doit.com (that's d-o-i-t.com) to unlock intent-aware FinOps at scale with DoiT Cloud Intelligence.
[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.
SHOW NOTES:
Rafay website
Topic 1 - Welcome to the show, Haseeb. Give everyone a quick introduction.
Topic 2 - Let's start by talking about the evolution of Kubernetes as a platform. You've said, and we've talked about on this show for some time, how Kubernetes is more of a platform to run platforms. We've also seen trends in the industry and shifts in what it means to be DevOps or Platform Engineering in recent years. You've positioned Rafay as a Kubernetes Operations Platform that's now evolved into a Cloud Automation Platform. How do you define the difference between Kubernetes management and true platform engineering?
Topic 3 - What does “Enterprise Ready” Kubernetes look like in 2025?
Topic 4 - Let's flip over to AI/ML and GPUs with Kubernetes for a bit. Many developers and data scientists aren't aware of the underlying platform they run on. I saw a stat recently that about 95% of AI runs on Kubernetes, either on-prem or in the cloud. Despite this, Platform teams are often stuck doing manual GPU provisioning, which doesn't scale with AI adoption. How do modern GPU orchestration solutions change the platform team's role?
Topic 5 - With GPU workloads often handling sensitive data and AI models, security becomes even more critical. How should organizations approach security and compliance in their GPU-enabled Kubernetes operations?
Topic 6 - "Most developers don't want to write YAML or manage clusters — they just want to ship software." How do you balance giving developers the self-service capabilities they want while maintaining the control and governance that platform teams need?
FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod
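For context on the manual GPU provisioning the episode mentions, here is a minimal, hedged sketch of the baseline mechanism platform teams end up automating: requesting a GPU through the `nvidia.com/gpu` extended resource with the official Kubernetes Python client. The image, namespace, and labels are placeholders, and this ignores the quotas, policies, and self-service layers a platform like Rafay adds on top.

```python
# Minimal sketch: ask the Kubernetes scheduler for one GPU for a training pod.
# Assumes the NVIDIA device plugin is installed so GPUs appear as the
# "nvidia.com/gpu" extended resource. Image/namespace/labels are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-train-job", labels={"team": "ml"}),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.registry/train:latest",  # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    # The scheduler will only place this pod on a node with a free GPU.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ml-workloads", body=pod)
```

The platform-engineering discussion is essentially about who writes this (and the surrounding quota, isolation, and cost controls): individual data scientists, a ticket-driven ops team, or a self-service layer with governance built in.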
Talk Python To Me - Python conversations for passionate developers
Python's data stack is getting a serious GPU turbo boost. In this episode, Ben Zaitlen from NVIDIA joins us to unpack RAPIDS, the open source toolkit that lets pandas, scikit-learn, Spark, Polars, and even NetworkX execute on GPUs. We trace the project's origin and why NVIDIA built it in the open, then dig into the pieces that matter in practice: cuDF for DataFrames, cuML for ML, cuGraph for graphs, cuXfilter for dashboards, and friends like cuSpatial and cuSignal. We talk real speedups, how the pandas accelerator works without a rewrite, and what becomes possible when jobs that used to take hours finish in minutes. You'll hear strategies for datasets bigger than GPU memory, scaling out with Dask or Ray, Spark acceleration, and the growing role of vector search with cuVS for AI workloads. If you know the CPU tools, this is your on-ramp to the same APIs at GPU speed. Episode sponsors Posit Talk Python Courses Links from the show RAPIDS: github.com/rapidsai Example notebooks showing drop-in accelerators: github.com Benjamin Zaitlen - LinkedIn: linkedin.com RAPIDS Deployment Guide (Stable): docs.rapids.ai RAPIDS cuDF API Docs (Stable): docs.rapids.ai Asianometry YouTube Video: youtube.com cuDF pandas Accelerator (Stable): docs.rapids.ai Watch this episode on YouTube: youtube.com Episode #516 deep-dive: talkpython.fm/516 Episode transcripts: talkpython.fm Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
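As a concrete taste of the "no rewrite" point above, here is a minimal sketch of the cudf.pandas accelerator workflow. The CSV file and column names are placeholders; supported operations run on the GPU via cuDF, and anything unsupported falls back to regular CPU pandas.

```python
# Drop-in acceleration: enable the cudf.pandas proxy *before* importing pandas,
# then run unmodified pandas code. Supported operations execute on the GPU via
# cuDF; unsupported ones fall back to CPU pandas automatically.
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # this import now resolves to the accelerated proxy

df = pd.read_csv("transactions.csv")  # placeholder file and columns
summary = (
    df.groupby("customer_id")["amount"]
      .agg(["count", "sum", "mean"])
      .sort_values("sum", ascending=False)
)
print(summary.head(10))

# In Jupyter, the same switch is just:  %load_ext cudf.pandas
# For scripts, an alternative is:       python -m cudf.pandas my_script.py
```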
Today's show:We're back with two insightful new TWiST founder interviews.First up: Dean Leitersdorf of Decart tells us about squeezing maximum productivity out of your GPUs. But it's not all talk: he also shows us the incredible open world model that can magically transform live footage.THEN! Jason and Alex chat with Syncere AI founder Aaron Tan about Lume, his robotic lamp device that went viral for folding laundry. Hear why Aaron thinks the future of robotics is not necessarily humanoid, and all about his future plans for the Lume arms.Timestamps:(0:00) Alex introduces our TwiST 500 interviews(02:27) TWiST 500 Interview #1: Dean Leitersdorf, CEO and Founder of Decart(04:37) Setting out to build a “kilo-corn,” a trillion dollar company(06:27) How making AI much faster opens up a world of new opportunities(09:48) OpenPhone - Streamline and scale your customer communications with OpenPhone. Get 20% off your first 6 months at https://www.openphone.com/twist(11:00) Why Dean thinks chatbots are the interface of the future(17:24) How Decart builds and trains its models more efficiently (and for less money)(20:01) Netsuite - Download the ebook CFO's Guide to AI and Machine Learning for free at https://www.netsuite.com/twist(21:14) Show Continues…(24:36) Interview #2: Aaron Tan of Syncere AI(25:13) Behind-the-scenes of that viral Lume “robot lamp folding laundry” video(30:23) Stripe Startups - Stripe Startups offers early-stage, venture-backed startups access to Stripe fee credits and more. Apply today on stripe.com/startups.(31:25) Moving beyond laundry: what else these Lume robots do?Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.comCheck out the TWIST500: https://www.twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcpFollow Lon:X: https://x.com/lonsFollow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelmFollow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanisThank you to our partners:(09:48) OpenPhone - Streamline and scale your customer communications with OpenPhone. Get 20% off your first 6 months at https://www.openphone.com/twist(20:01) Netsuite - Download the ebook CFO's Guide to AI and Machine Learning for free at https://www.netsuite.com/twist(30:23) Stripe Startups - Stripe Startups offers early-stage, venture-backed startups access to Stripe fee credits and more. Apply today on stripe.com/startups.Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarlandCheck out Jason's suite of newsletters: https://substack.com/@calacanisFollow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartupsTikTok: https://www.tiktok.com/@thisweekinstartupsSubstack: https://twistartups.substack.comSubscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
An airhacks.fm conversation with Michalis Papadimitriou (@mikepapadim) about: GPU acceleration for LLMs in Java using tornadovm, evolution from CPU-bound SIMD optimizations to GPU memory management, Alfonso's original Java port of llama.cpp using SIMD and Panama Vector API achieving 10 tokens per second, TornadoVM's initial hybrid approach combining CPU vector operations with GPU matrix multiplications, memory-bound nature of LLM inference versus compute-bound traditional workloads, introduction of persist and consume API to keep data on GPU between operations, reduction of host-GPU data transfers for improved performance, comparison with native CUDA implementations and optimization strategies, JIT compilation of kernels versus static optimization in frameworks like tensorrt, using LLMs like Claude to optimize GPU kernels, building MCP servers for automated kernel optimization, European Space Agency using TornadoVM in production for simulations, upcoming Metal backend support for Apple Silicon within 6-7 months, planned support for additional models including Mistral and gemma, potential for distributed inference across multiple GPUs, comparison with python and C++ implementations achieving near-native performance, modular architecture supporting OpenCL PTX and future hardware accelerators, challenges of new GPU hardware vendors like tenstorrent focusing on software ecosystem, planned quarkus and langchain4j integration demonstrations Michalis Papadimitriou on twitter: @mikepapadim
Deep dive with Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang.
*** SPONSOR MESSAGES
Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal - https://github.com/google-gemini/gemini-cli
Prolific: Quality data. From real people. For faster breakthroughs. https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen
***
Hendrycks argues that society is making a fundamental mistake in how it views artificial intelligence. We often compare AI to transformative but ultimately manageable technologies like electricity or the internet. He contends a far better and more realistic analogy is nuclear technology. Like nuclear power, AI has the potential for immense good, but it is also a dual-use technology that carries the risk of unprecedented catastrophe.
The Problem with an AI "Manhattan Project": A popular idea is for the U.S. to launch a "Manhattan Project" for AI—a secret, all-out government race to build a superintelligence before rivals like China. Hendrycks argues this strategy is deeply flawed and dangerous for several reasons:
- It wouldn't be secret. You cannot hide a massive, heat-generating data center from satellite surveillance.
- It would be destabilizing. A public race would alarm rivals, causing them to start their own desperate, corner-cutting projects, dramatically increasing global risk.
- It's vulnerable to sabotage. An AI project can be crippled in many ways, from cyberattacks that poison its training data to physical attacks on its power plants. This is what the paper refers to as a "maiming attack."
This vulnerability leads to the paper's central concept: Mutual Assured AI Malfunction (MAIM). This is the AI-era version of the nuclear-era's Mutual Assured Destruction (MAD). In this dynamic, any nation that makes an aggressive, destabilizing bid for a world-dominating AI must expect its rivals to sabotage the project to ensure their own survival. This deterrence, Hendrycks argues, is already the default reality we live in.
A Better Strategy: The Three Pillars. Instead of a reckless race, the paper proposes a more stable, three-part strategy modeled on Cold War principles:
- Deterrence: Acknowledge the reality of MAIM. The goal should not be to "win" the race to superintelligence, but to deter anyone from starting such a race in the first place through the credible threat of sabotage.
- Nonproliferation: Just as we work to keep fissile materials for nuclear bombs out of the hands of terrorists and rogue states, we must control the key inputs for catastrophic AI. The most critical input is advanced AI chips (GPUs). Hendrycks makes the powerful claim that building cutting-edge GPUs is now more difficult than enriching uranium, making this strategy viable.
- Competitiveness: The race between nations like the U.S. and China should not be about who builds superintelligence first. Instead, it should be about who can best use existing AI to build a stronger economy, a more effective military, and more resilient supply chains (for example, by manufacturing more chips domestically).
Dan says the stakes are high if we fail to manage this transition:
- Erosion of Control
- Intelligence Recursion
- Worthless Labor
Hendrycks maintains that while the risks are existential, the future is not set.
TOC:
1 Measuring the Beast [00:00:00]
2 Defining the Beast [00:11:34]
3 The Core Strategy [00:38:20]
4 Ideological Battlegrounds [00:53:12]
5 Mechanisms of Control [01:34:45]
TRANSCRIPT: https://app.rescript.info/public/share/cOKcz4pWRPjh7BTIgybd7PUr_vChUaY6VQW64No8XMs
Welcome to episode 316 of The Cloud Pod, where the forecast is always cloudy! This week we've got earnings (with sound effects, obviously) as well as news from DeepSeek, DocumentDB, DigitalOcean, and a bunch of GPU news. Justin and Matt are here to lead you through all of it, so let's get started! Titles we almost went with this week: Lake Sentinel: The Security Data Monster Nobody Asked For Certificate Authority Issues: When Your Free Lunch Gets a Security Audit Slash and Learn: Gemini Gets Command-ing DigitalOcean Drops Anchor in AI Waters with Gradient Platform The Three Stages of Azure Grief: Development, Preview, and Launch E for Enormous: Azure’s New VM Sizes Are Anything But Virtual SRE You Later: Azure’s AI Agent Takes Over Your On-Call Duties Site Reliability Engineer? More Like AI Reliability Engineer Azure Disks Get Elastic Waistbands Agent Smith Would Be Proud: Google’s Multi-Agent Matrix Gets Real C4 Yourself: Google Explodes Into GA with Intel’s Latest Silicon The Cost is Right: GCP Edition Penny for Your Cloud Thoughts: Google’s Budget-Friendly Update DocumentDB Goes on a Diet: Now Available in Serverless Size MongoDB Compatibility Gets the AWS Serverless Treatment No Server? No Problem: DocumentDB Joins the Serverless Party Stream Big or Go Home: Lambda’s 10x Payload Boost Lambda Response Streaming: Because Size Matters GPT Goes Open Source Shopping GPT’s Open Source Awakening When Your Antivirus Needs an Antivirus: Enter Project Ire The Opus Among Us: Anthropic’s Coding Assistant Gets an Upgrade Serverless is becoming serverful in streaming responses General News 02:08 It's Earnings Time! (INSERT AWESOME SOUND EFFECTS HERE) 02:16 Alphabet beats earnings expectations, raises spending forecast Google Cloud revenue hit $13.62 billion, up 32% year-over-year, with OpenAI now using Google’s infrastructure for ChatGPT, signaling growing enterprise confidence in Google’s AI infrastructure capabilities. Alphabet is raising its 2025 capital expenditure forecast from $75 billion to $85 billion, driven by cloud and AI demand, with plans to increase spending further in 2026 as it competes for AI workloads. AI Overviews now serves 2 billion monthly users across 200+ countries, while the Gemini app reached 450 million monthly active users, demonstrating Google’s scale in deploying AI services globally. The $10 billion increase in planned capital spending reflects the infrastructure arms race among cloud providers to capture AI workloads, which require significant compute and specialized hardware investments. Google’s cloud growth rate of 32% outpaces its overall revenue growth of 14%, indicating the strategic importance of cloud services as traditional search and advertising face increased AI competition. 03:55 Justin – “I don’t know what it takes to actually run one of these large models at like ultimate scale that like a ChatGPT needs or Anthropic, but I have to imagine it’s just thousands and thousands of GPUs just working nonstop.” 04:31
Ubuntu 24.04.3 brings a new kernel, updated graphics, and better support for old GPUs. Debian 13 improves real-time performance and hardware compatibility.
Steven Dickens previews CoreWeave (CRWV) earnings, boiling the company down to "access to GPUs" despite it branding itself as a native AI company. He questions whether that is actually a defensible moat for years to come. "There's not enough stickiness, there's not enough diversification." He also notes that the speed of the refresh cycle in this industry – between 3-5 years – doesn't give CoreWeave or competitors much time to be on the cutting edge.
======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – https://twitter.com/schwabnetwork
Follow us on Facebook – https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about
Our 219th episode with a summary and discussion of last week's big AI news! Recorded on 08/08/2025 Check out Andrey's work over at Astrocade , sign up to be an ambassador here Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read out our text newsletter and comment on the podcast at https://lastweekin.ai/ In this episode: OpenAI reveals GPT-5, a consolidated model combining all previous versions, marking notable improvements and introducing a new infrastructure and product update. Multiple major releases from leading AI labs, including OpenAI, Anthropic, and Google reflect the ongoing competitive landscape with significant business updates and new model capabilities. Discussions on geopolitical influences in AI development highlight China's evolving stance on AI safety and governance, contrasting with U.S. approaches and raising concerns over export bans and international cooperation. Papers from leading AI entities such as OpenAI and Anthropic delve into the complexities of AI alignment and safety, proposing new methodologies for auditing and mitigating risks in model behaviors. Timestamps + Links: (00:00:10) Intro / Banter (00:02:14) Plug: Astrocade rolls out AI agent-powered game creation experience so anyone can create games Tools & Apps (00:03:07) OpenAI's GPT-5 is here (00:17:02) Anthropic Releases Claude Opus 4.1 With Agentic, Coding and Reasoning Upgrades (00:21:06) Google rolls out Gemini Deep Think AI, a reasoning model that tests multiple ideas in parallel | TechCrunch (00:24:04) Grok Imagine, xAI's new AI image and video generator, lets you make NSFW content | TechCrunch Applications & Business (00:26:35) Meta, Microsoft stocks rise on strong earnings and AI spending boom (00:29:17) OpenAI to Establish Stargate Norway With 230MW Data Center - Bloomberg (00:32:12) Anthropic Revenue Pace Nears $5 Billion in Run-Up to Mega Round — The Information (00:37:18) OpenAI Hits $12 Billion Annualized Revenue (00:40:06) Noma Security raises $100 million to defend against AI agent vulnerabilities | Ctech Projects & Open Source (00:42:13) OpenAI Just Released Its First Open-Weight Models Since GPT-2 (00:53:13) Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance (00:57:39) Meta CLIP 2: A Worldwide Scaling Recipe (01:01:12) BFL and Krea release FLUX.1 Krea: Open image model designed for realism Research & Advancements (01:02:33) Google's Newest AI Model Acts like a Satellite to Track Climate Change | WIRED (01:04:50) Google's new AI model creates video game worlds in real time (01:10:55) AlphaGo Moment for Model Architecture Discovery (01:17:22) METR evaluates Grok 4 Policy & Safety (01:20:05) Estimating Worst-Case Frontier Risks of Open-Weight LLMs (01:23:14) Anthropic's AI 'Vaccine': Train It With Evil to Make It Good - Business Insider (01:27:26) Anthropic unveils 'auditing agents' to test for AI misalignment | VentureBeat (01:28:31) Optimizing The Final Output Can Obfuscate CoT (Research Note) (01:31:23) Why China isn't about to leap ahead of the West on compute (01:33:15) Inside the Summit Where China Pitched Its AI Agenda to the World | WIRED (01:38:47) Nvidia H20 GPUs reportedly caught up in U.S. Commerce Department's worst export license backlog in 30 years — billions of dollars worth of GPUs and other products in limbo due to staffing cuts, communication issues | Tom's Hardware (01:42:35) Response to listener comments
In today's special edition of Cloud Wars Live, Bob Evans talks with Kris Rice, Senior Vice President, Software Development, Oracle Database, about the rapid rise of AI, the strategic impact of the new MCP Server, Oracle's unique approach to integrating LLMs with private data, and what to expect at this year's CloudWorld. Rice shares how Oracle is meeting developers where they are, enabling seamless AI-powered interactions with enterprise data.Inside Oracle's MCP StrategyThe Big Themes:Oracle's Strategic Integration of MCP: Oracle's implementation of the Model Context Protocol (MCP) represents more than a technical upgrade. Rice explains that MCP, though technically simple, becomes powerful because of how it's supported across AI agents, Copilots, cloud desktops, and IDEs. Oracle's distinctive approach lies in integrating MCP directly into its existing developer environments rather than layering on new tools.Oracle Marries AI with Data Sovereignty: Oracle has the unique ability to merge large language models (LLMs) with enterprise-grade data privacy and sovereignty. While many companies hesitate to use public AI platforms for sensitive queries, Oracle's approach — running LLMs on private GPUs in Oracle Cloud Infrastructure (OCI) — lets customers leverage cutting-edge AI while keeping data entirely within their own virtual cloud networks.Oracle CloudWorld 2025: Looking ahead to CloudWorld, Rice promises more than vision — he promises practical value. Attendees can expect hands-on labs, demos, and real-world use cases showing exactly how Oracle's MCP server and GenAI services can be applied immediately. From natural language-driven infrastructure provisioning using Terraform MCP to business logic queries run autonomously by AI agents, the event will be centered on tangible outcomes, not speculative futures.The Big Quote: “I saw the other day that the Google AI Studio supports MCP. So you could literally go to the Google AI studio, drop in your Oracle Database MCP support, and talk to your Oracle database that's on GCP, and never leave the boundaries of Google."More from Kris Rice and Oracle:Connect with Kris Rice on LinkedIn and learn more about Oracle and MCP* Sponsored podcast * Visit Cloud Wars for more.
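Since MCP is described above as "technically simple," it may help to see the shape of the protocol itself: a hedged sketch of the JSON-RPC 2.0 tools/call request an MCP client sends to a server. The run_sql tool name and its SQL argument are hypothetical placeholders used purely for illustration, not Oracle's published tool schema.

```python
# Shape of an MCP "tools/call" request (MCP uses JSON-RPC 2.0). The tool name
# and arguments below are hypothetical stand-ins for whatever a database-backed
# MCP server actually exposes; they are not Oracle's documented schema.
import json

request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "run_sql",  # hypothetical tool exposed by a database MCP server
        "arguments": {
            "sql": "SELECT region, SUM(amount) FROM orders GROUP BY region",
        },
    },
}

# An MCP client (IDE agent, Copilot, AI studio, etc.) would send this over stdio
# or HTTP after the usual "initialize" / "tools/list" handshake, then render the
# tool result back to the user.
print(json.dumps(request, indent=2))
```

Because the message format is this small, the leverage comes from where the protocol is supported (agents, IDEs, cloud desktops) rather than from the protocol itself, which is the point Rice makes.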
Episode Description:Hosted by James Altucher (serial entrepreneur, bestselling author of "Choose Yourself," podcaster, hedge fund manager, chess master, and investor in over 20 companies, with expertise in crypto and AI) and Joseph Jacks (founder and general partner of OSS Capital, the world's first VC firm dedicated to commercial open-source software; early-stage investor in AI and open-source tech, previously Entrepreneur-in-Residence at Quantum Corporation).In the premiere episode, James and Joe explore Bittensor's decentralized AI ecosystem, contrasting it with centralized giants like xAI's Grok 4. They discuss subnets providing GPUs, datasets, and models; proof-of-useful-work mining; building custom AI agents; and Bittensor's potential to outpace Big Tech in achieving superintelligence.Plus, tokenomics, real-world apps, capitalism parallels, and bold predictions on TAO's future value.Key Timestamps & Topics:00:00:00 - Intro: Podcast overview, AI/crypto news (Grok 4, Bitcoin ATH), centralized vs. decentralized AI.00:09:00 - Proof of Useful Work: Mining datasets, models, inference on Bittensor.00:10:00 - Subnet Deep Dives: Dataverse (13) for data scraping; building trading models.00:16:00 - Chutes (64): Cheap AI inference, e.g., Bible chatbot at 1/50th OpenAI cost.00:23:00 - Agentic AI: Building owned agents, avoiding Big Tech biases/control.00:28:00 - Scaling & Future: Decentralization's infinite potential; Bitcoin compute parallels.00:33:00 - Superintelligence Path: Bittensor faster than Elon; energy/chip challenges.00:34:00 - Bittensor's Early Stage: Like 1990s internet, needs better user interfaces.00:38:00 - Chutes Economics: 10T+ tokens served, 4.4K H100 GPUs, user growth.00:50:00 - Valuation & Growth: Subnets as companies; TAO potentially 5-10x Bitcoin.01:02:00 - Bittensor as Pure Capitalism: Incentives for supply/demand; upgrading equity models.01:09:00 - Centralization Risks: Elon/Meta control; Bittensor's global solution.01:13:00 - Wrap-Up: Teasing future episodes on subnets, AI ventures.Key Takeaways:Bittensor incentivizes ~20-100K GPUs permissionlessly, rivaling xAI at zero CapEx.Subnets like Chutes (inference) and Dataverse (data) enable cheap, owned AI models for anyone.Decentralization democratizes AI talent/compute, potentially building AGI faster than centralized efforts.Quote: "Bittensor is the most expressive language of value in the history of languages of value." – Joseph JacksResources & Links:Bittensor Official: bittensor.comTaostats (Explorer/TAO App): taostats.ioSubnet 64 (Chutes): taostats.io/subnets/64Subnet 13 (Dataverse): macrocosmos.ai/sn13Akash Network: akash.networkxAI: x.aiFollow Hosts: @jaltucher & @josephjacks_ on XSubscribe for more on Bittensor subnets, AI building, and crypto trends! Leave a review and share your thoughts. #TheTaoPod #Bittensor #DecentralizedAI #TAOToday's Advertisers:Secure your online data TODAY by visiting ExpressVPN.com/ALTUCHERElevate your workspace with UPLIFT Desk. Go to https://upliftdesk.com/james for a special offer exclusive to our audience.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
For episode 573 of the BlockHash Podcast, host Brandon Zemp is joined by Mark Rydon, Co-founder of Aethir, while at Permissionless 4. Aethir provides secure, cost-effective access to enterprise-grade GPUs around the world. Accelerate growth and get closer to the edge with Aethir's distributed cloud compute infrastructure.
⏳ Timestamps:
0:00 | Introduction
0:50 | Who is Mark Rydon?
6:45 | What is Aethir?
12:38 | Compute Incentivization
16:18 | Demand for Compute
18:30 | Aethir at Permissionless
20:30 | Aethir website, socials & community
21:19 | RAPID FIRE SESSION
AMD is on the rise, and their financials show it, plus they might have won The Game. We discover that Windows 11 SE is still a thing, PCIe 8 has been ratified, and what's the best-worst GPU of 2025! XeSS goes cross-platform-GPU, and it turns out that people CAN actually detect malware if properly motivated. All this and more!
Timestamps:
00:00 Intro
00:37 Patreon
01:52 Food with Josh
04:38 AMD financials
09:43 AMD claims "world's fastest processors" as Intel struggles
18:57 AMD may be planning X3D on both CCDs
23:18 Windows 11 SE is going away
26:13 Windows Vista stricken from latest Win11 build?
27:48 Getting to the nub of ThinkPad design
35:49 PCI-SIG announces PCI Express 8.0
38:29 AMD's AM6 socket rumored to bring DDR6 and PCIe 6
40:44 Best of the 8GB GPUs
45:53 The RX 9060 non-XT
48:10 Bracing for the possibility of 100% tariffs on chips
49:50 (In)Security Corner
1:04:55 Gaming Quick Hits
1:18:11 Promoting Jeremy's HyperX Jet wireless headset review
1:19:15 Picks of the Week
1:34:57 Outro
★ Support this podcast on Patreon ★
Episode 80: It's that time of the year! We're ranking all the current graphics card models from worst to best, based on vibes. Always a very challenging task that brings up a lot of discussion, so you can enjoy hearing our thought process as we try to upset everyone by ranking the models they like down the bottom.
CHAPTERS
00:00 - Intro
02:03 - Quick Radeon RX 9060 Update
08:40 - Let The Rankings Begin!
1:11:42 - Updates From Our Boring Lives
SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw
SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed
LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social
Hosted on Acast. See acast.com/privacy for more information.
Host: Sebastian Hassinger
Guest: Andrew Dzurak (CEO, Diraq)
In this enlightening episode, Sebastian Hassinger interviews Professor Andrew Dzurak. Andrew is the CEO and co-founder of Diraq and concurrently a Scientia Professor in Quantum Engineering at UNSW Sydney, an ARC Laureate Fellow and a Member of the Executive Board of the Sydney Quantum Academy. Diraq is a quantum computing startup pioneering silicon spin qubits, based in Australia. The discussion delves into the technical foundations, manufacturing breakthroughs, scalability, and future roadmap of silicon-based quantum computers—all with an industrial and commercial focus.
Key Topics and Insights
1. What Sets Diraq Apart: Diraq's quantum computers use silicon spin qubits, differing from the industry's more familiar modalities like superconducting, trapped ion, or neutral atom qubits. Their technology leverages quantum dots—tiny regions where electrons are trapped within modified silicon transistors. The quantum information is encoded in the spin direction of these trapped electrons—a method with roots stretching over two decades.
2. Manufacturing & Scalability: Diraq modifies standard CMOS transistors, making qubits that are tens of nanometers in size, compared to the much larger superconducting devices. This means millions of qubits can fit on a single chip. The company recently demonstrated high-fidelity qubit manufacturing on standard 300mm wafers at commercial foundries (GlobalFoundries, IMEC), matching or surpassing previous experimental results—all fidelity metrics above 99%.
3. Architectural Innovations: Diraq's chips integrate both quantum and conventional classical electronics side by side, using standard silicon design toolchains like Cadence. This enables leveraging existing chip design and manufacturing expertise, speeding progress towards scalable quantum chips. Movement of electrons (and thus qubits) across the chip uses CMOS bucket-brigade techniques, similar to charge-coupled devices. This means fast (
Three Buddy Problem - Episode 57: Brandon Dixon (PassiveTotal/RiskIQ, Microsoft) leads a deep-dive into the collision of AI and cybersecurity. We tackle Google's “Big Sleep” project, XBOW's HackerOne automation hype, the long-running tension between big tech ownership of critical security tools and the community's need for open access. Plus, the future of SOC automation to AI-assisted pen testing, how agentic AI could transform the cyber talent bottlenecks and operational inefficiencies, geopolitical debates over backdoors in GPUs and the strategic implications of China's AI model development. Cast: Brandon Dixon (https://www.linkedin.com/in/brandonsdixon/), Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), and Ryan Naraine (https://twitter.com/ryanaraine).
Two Chinese nationals are arrested for allegedly exporting sensitive Nvidia AI chips. A critical security flaw has been discovered in Microsoft's new NLWeb protocol. Vulnerabilities in Dell laptop firmware could let attackers bypass Windows logins and install malware. Trend Micro warns of an actively exploited remote code execution flaw in its endpoint security platform. Google confirms a data breach involving one of its Salesforce databases. A lack of MFA leaves a Canadian city on the hook for ransomware recovery costs. Nvidia's CSO denies the need for backdoors or kill switches in the company's GPUs. CISA flags multiple critical vulnerabilities in Tigo Energy's Cloud Connect Advanced (CCA) platform. DHS grants funding cuts off the MS-ISAC. Helicopter parenting officially hits the footwear aisle. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today we are joined by Sarah Powazek from UC Berkeley's Center for Long-Term Cybersecurity (CLTC) discussing her proposed nationwide roadmap to scale cyber defense for community organizations. Black Hat Women on the street Live from Black Hat USA 2025, it's a special “Women on the Street” segment with Halcyon's Cynthia Kaiser, SVP Ransomware Research Center, and CISO Stacey Cameron. Hear what's happening on the ground and what's top of mind in cybersecurity this year. Selected Reading Two Arrested in the US for Illegally Exporting Microchips Used in AI Applications to China (TechNadu) Microsoft's plan to fix the web with AI has already hit an embarrassing security flaw (The Verge) ReVault flaws let hackers bypass Windows login on Dell laptops (Bleeping Computer) Trend Micro warns of Apex One zero-day exploited in attacks (Bleeping Computer) Google says hackers stole its customers' data in a breach of its Salesforce database (TechCrunch) Hamilton taxpayers on the hook for full $18.3M cyberattack repair bill after insurance claim denied (CP24) Nvidia rejects US demand for backdoors in AI chips (The Verge) Critical vulnerabilities reported in Tigo Energy Cloud connect advanced solar management platform (Beyond Machines) New state, local cyber grant rules prohibit spending on MS-ISAC (StateScoop) Skechers skewered for adding secret Apple AirTag compartment to kids' sneakers — have we reached peak obsessive parenting? (NY Post) Audience Survey Complete our annual audience survey before August 31. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Disney is making big streaming moves with the new ESPN app and a revamp to Hulu. Then, it's all basically AI announces. OpenAI's new open-weight models. Grok's new spiciness is already generating nudity. A new AI model to identify malicious software autonomously. And Nvidia wants you to know: no back-doors! Links: ESPN flagship streaming service to launch Aug. 21 (CNBC) Hulu App to Be Phased Out as Disney Is ‘Fully Integrating' Service Into Disney+ (Variety) OpenAI Just Released Its First Open-Weight Models Since GPT-2 (Wired) Anthropic's powerful Opus 4.1 model is here - how to access it (and why you'll want to) (ZDNet) Qwen-Image is a powerful, open source new AI image generator with support for embedded text in English & Chinese (VentureBeat) Grok's ‘spicy' video setting instantly made me Taylor Swift nude deepfakes (The Verge) Microsoft's new AI reverse-engineers malware autonomously, marking a shift in cybersecurity (GeekWire) Nvidia defiant over backdoors and kill switches in GPUs as U.S. mulls tracking requirements — calls them 'permanent flaws' that are 'a gift to hackers' (Tom's Hardware) Learn more about your ad choices. Visit megaphone.fm/adchoices
Episode Summary: AWS Morning Brief for the week of August 4th, 2025, with Corey Quinn.
Amazon Aurora MySQL database clusters now support up to 256 TiB of storage volume
Introducing v2 of Powertools for AWS Lambda (Java)
Introducing Extended Support for Amazon ElastiCache version 4 and version 5 for Redis OSS
Amazon DocumentDB Serverless is now available
AWS Lambda response streaming now supports 200 MB response payloads
How Zapier runs isolated tasks on AWS Lambda and upgrades functions at scale
Amazon Application Recovery Controller now supports Region switch
Announcing general availability of Amazon EC2 G6f instances with fractional GPUs
Amazon Promotes Malphas to Senior Vice President of Bad Decisions, Unveils 17th Leadership Principle
Amazon CloudFront introduces new origin response timeout controls
Optimize traffic costs of Amazon MSK consumers on Amazon EKS with rack awareness
Amazon Bedrock now available in the US West (N. California) Region
New AWS whitepaper: AWS User Guide to Financial Services Regulations and Guidelines in Australia
Amazon EC2 Auto Scaling adds AWS Lambda functions as notification targets for lifecycle hooks
The hardest thing for any growing company to do is manage the transition from hypergrowth to the dual tracks of growth and stability. AWS is entering their Hybrid phase, or the transition from Day 1 to Day 2. How will it go?
SHOW: 946
SHOW TRANSCRIPT: The Cloudcast #946 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"
SHOW SPONSORS:
[DoIT] Visit doit.com (that's d-o-i-t.com) to unlock intent-aware FinOps at scale with DoiT Cloud Intelligence.
[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.
SHOW NOTES:
Amazon Q2 (July 2025) Results
Reviewing Amazon/AWS Q2 2025 Results (CNBC)
AWS QoQ Earnings Growth Rates (2014-2025)
Andy Jassy defends Amazon/AWS AI strategy
Amazon Q2 2025 Earnings Call Transcript
Update from Andy Jassy on Amazon Generative AI (Amazon Internal)
HOW WILL AWS HANDLE DAY 1 AND DAY 2?
Has AWS missed the Generative AI transformation?
- Not investing in GPUs at the same rate as their cloud market share
- Don't have a Top 5 Frontier LLM
- Don't have a productivity suite to attach AI to (on-going revenue)
- Don't have a leading coding-assistant app
- Don't have an immediate "acquisition" target (e.g. Anthropic valuation near $150B)
- AWS isn't breaking out their AI revenues
AWS's growth has plateaued over the last 6 quarters (around 17%), while Azure, GCP have been growing at 1.5 to 2x, specifically around AI revenues. AWS is up to 18% of Amazon revenue, and current AWS (CPU-based) is driving the majority of Amazon profits. Jassy is trying to make AI an add-on to the AWS "building block" model. GenAI buying (at this point) looks similar to Shadow IT going to public cloud – it's not centrally controlled. Is AWS focused on GenAI, or moving the other 80-85% of on-premises to their cloud? Can they manage both priorities at the same time? Can you achieve the same levels of growth if non-GenAI startups aren't getting funding at the same levels as pre-2022?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
Sriram Krishnan was never interested in policy. But after seeing a gap in AI knowledge at senior levels of government, he decided to lend his expertise to the tech-friendly Trump administration. Senior White House Policy Advisor on AI Sriram Krishnan joins Elad Gil and Sarah Guo to talk about America's AI Action Plan, a recent executive order that outlines how America can win the AI race and maintain its AI supremacy. Sriram discusses why winning the AI race is important and what that looks like, as well as the core goals of the Action Plan that he helped to author. Together, they explore how AI is the latest iteration of American cultural exportation and soft power, the bottlenecks in upgrading America's energy infrastructure, and the importance of America owning the “full stack” from GPUs and models to agents and software. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @skrishnan47 | @sriramk Chapters: 00:00 – Sriram Krishnan Introduction 01:00 – Sriram's Role in Government 03:43 – Impetus for the America AI Action Plan 06:14 – What Winning the AI Race Looks Like 10:36 – Algorithms and Cultural Bias 12:26 – Main Tenets of the America AI Action Plan 19:13 – Infrastructure and Energy Needs for AI 22:56 – Manufacturing, Supply Chains, and AI 24:52 – Ensuring American Dominance in Robotics 26:30 – Translating Policy to Industry and the Economy 29:30 – Should the US Be a Technocracy? 32:33 – Understanding the Argument Against Open Source Models 36:07 – Conclusion
Timestamps: 0:00 starring your host, Trick Rogers 0:16 Chinese GPUs getting good? 2:04 Pixel 6A caught fire despite update 3:05 UKIE responds to Steam, itch.io takedowns 4:04 Zero Bounce! 5:02 QUICK BITS INTRO 5:15 Ayaneo Next 2, Ayaneo Phone 6:11 Nvidia N1X specs 6:51 Another Meta torrent lawsuit 7:39 Robots "consuming" other robots NEWS SOURCES: https://lmg.gg/Yjdkk Learn more about your ad choices. Visit megaphone.fm/adchoices
Microsoft finally kills Movies & TV show service in the Microsoft Store. This was the final vestigial minder of Zune that remained. There was Groove Video and Xbox Video, too. Microsoft previously killed eBook (2019) and music (2017) sales. At this point, you would have to be insane to buy content from Microsoft, sorry... but you can get to some of your content on other services via Movies Anywhere - and use the Movies & TV app for now in Windows, which is no longer bundled. Windows 11 It's Week D and you can't tell your Copilot+ PC features from your Windows 11 features without a scorecard A peek at next month's Patch Tuesday - Also, preview updates for 23H2, Windows 10 Copilot+ PCs only: Settings agent, Click to Do improvements, Photo relight in Photos app, Sticker generator and Object select in Paint Everyone: Copilot Vision (U.S. only) in Copilot, Edge Game Assist, Quick Machine Recovery Microsoft explains how PC transfer feature will work in Windows Backup later this year Describe image action for Click to Do (for AMD/Intel), image descriptions in Narrator (AMD/Intel), performance log improvements (!), Click to Do search bar test, Lock screen improvements, privacy improvements head to Dev and Beta channels Bug fixes in Canary, back to the usual waste of time Brave will automatically block Recall WhatsApp is going PWA, killing UWP app Focusrite finally releases drivers for Windows 11 on Arm/Snapdragon X, removing the final major compatibility issue on that platform Linux (sort of) crosses the 5 percent usage milestone Surface/Copilot+ PC Copilot+ PC is a failure as a brand because Microsoft focused on negligible on-device AI features It should have pushed reliability, performance, efficiency and battery life All Copilot+ PC features should come to at least those with GPUs, but really all customers Microsoft failed at AI, and failed with consumers, and so now it's going to tell us what consumers want from AI - a comedy Microsoft announces Surface Laptop for Business with 5G but the real "with" is Intel Inside Intel layoffs are even worse than expected and more are coming Microsoft has a problem and it starts with "C" and ends with "opilot" Microsoft SharePoint has a notably bad security flaw DuckDuckGo adds some neat customization features to Duck.ai and DuckDuckGo lets you hide all AI from search Xbox and gaming The Xbox platform unification continues: Xbox now testing cross-device play history - Not just console games on console, PC games on PC Just kidding! The Outer Worlds will cost $69.99, not $79.99 Tips & Picks Tip of the week: You hate Big Tech, but who can you trust? App pick of the week: Proton Lumo RunAs Radio this week: Copilot Studio with April Dunnam Brown liquor pick of the week: Benromach 10 Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit 1password.com/windowsweekly