Matching up the audio this week for a change of pace! That Snapdragon X2 Elite Extreme sometimes compares favorably, there's a new Kindle Scribe, and you will never guess who's coming to the Intel investment party. Also, Microsoft extends security updates for Windows 10 if you live in the right places, and they're also looking into micro-channel cooling? All this and so much more!

00:00 Intro
00:44 Patreon
02:33 Food with Josh
05:58 Snapdragon X2 Elite Extreme benchmarks
11:32 Qualcomm wins final battle with Arm over Oryon
14:23 Amazon Kindle Scribe lineup now bigger, offers first color model
18:10 LG has world's first 6K TB5 display
21:54 Apple might invest in Intel?
26:48 Intel 13th and 14th Gen price hike
32:03 Microsoft gives in on Windows 10 at the 11th hour - sort of
36:42 Microsoft also exploring tiny channels on CPUs for microfluidic cooling
42:16 Podcast sponsor Zapier
43:36 (In)Security Corner
53:52 Gaming Quick Hits
1:06:25 Picks of the Week
1:23:57 Outro

★ Support this podcast on Patreon ★
News and Updates:

Apple iOS 26 delivers one of the biggest iPhone upgrades in years. The new Liquid Glass interface adds a translucent, holographic look, while Spatial Scenes uses AI to turn photos into dynamic 3D wallpapers. Major app redesigns include a cleaner Camera for one-handed use, a simplified Photos layout, customizable Messages with polls and chat backgrounds, and an upgraded Lock Screen. New Battery Settings now estimate charging times and debut Adaptive Power Mode (on iPhone 15 Pro+). But the flashy Liquid Glass design has drawn complaints of eye strain, dizziness, and legibility issues, with Apple offering accessibility tweaks as workarounds.

Intel + Nvidia struck a $5B partnership that could reshape PCs. Nvidia bought a 4–5% stake in Intel, and the two are co-developing hybrid CPUs with Nvidia GPU chiplets connected via NVLink. These SoCs could boost AI PCs, power slimmer gaming laptops, and bring workstation-level performance to mini desktops — potentially blurring the line between integrated and discrete graphics.

Nvidia + OpenAI announced a massive $100B investment deal. Nvidia will fund the buildout of 10 gigawatts of AI data centers using its upcoming Vera Rubin chips, more than doubling today's top AI hardware. The arrangement lets Nvidia recycle investment into chip sales while giving OpenAI infrastructure to push toward “superintelligence.” The deal lifted Nvidia's market cap to nearly $4.5T, the largest in the world.

SpaceX Starlink filed to launch up to 15,000 new satellites to supercharge its direct-to-cell service. The move follows a $17B spectrum deal with EchoStar and will boost capacity 20-fold, enabling LTE-like performance for calls and messaging in dead zones. T-Mobile remains the US launch partner, but CEO Elon Musk hinted SpaceX could eventually sell mobile service directly, competing with carriers.
Microsoft is injecting Copilot into all Microsoft 365 accounts unless you manually use the Customization feature to stop the automatic installation.
When we think about what separates winning traders from those who struggle, we usually picture strategies, indicators, or a bit of insider know-how. But what if the biggest edge has been sitting on your desk all along? In this episode, I sit down with Eddie Z, also known as Russ Hazelcorn, the founder of EZ Trading Computers and EZBreakouts. With more than 37 years of experience as a trader, stockbroker, technologist, and educator, Eddie has built his career around one mission: helping traders cut through noise, avoid expensive mistakes, and get the tools they need to stay competitive in a fast-moving market. Eddie breaks down the specs that actually matter when building a trading setup, from RAM to CPUs to data feeds, and exposes which so-called “upgrades” are nothing more than overpriced fluff. We also dig into the rise of AI-powered trading platforms and bots, and what traders can do today to prepare their machines for the next wave. As Eddie points out, a lagging system or a missed feed isn't just an inconvenience—it can be the difference between a profitable trade and a costly loss. Beyond the hardware, we explore the broader picture. Rising tariffs and global supply chain disruptions are already reshaping the way traders access technology, and Eddie shares practical steps to avoid being caught short. He also explains why many experienced traders overlook their machines as a “secret weapon” and how quick, targeted fixes can transform reliability and performance in under an hour. This conversation goes deeper than specs and gadgets. Eddie opens up about the philosophy behind the EZ-Factor, his unique approach that blends decades of Wall Street expertise with cutting-edge technology to simplify trading and help people succeed. We talk about his ventures, including EZ Trading Computers, trusted by over 12,000 traders, and EZBreakouts, which delivers actionable daily and weekly picks backed by years of experience. 
For traders looking to level up—whether you're just starting out or managing multiple screens in a professional setting—this episode is packed with insights that can help you sharpen your edge. Eddie's perspective is clear: the right machine, the right mindset, and the right knowledge can make trading not only more profitable, but, as he likes to put it, as “EZ” as possible. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months with the Software QA Engineering Bootcamp from Careerist https://crst.co/OGCLA
This is a free preview of a paid episode. To hear more, visit sub.thursdai.news

Hola AI aficionados, it's yet another ThursdAI, and yet another week FULL of AI news, spanning Open Source LLMs, multimodal video and audio creation and more! Shiptember as they call it does seem to deliver, and it was hard even for me to follow up on all the news, not to mention we had like 3-4 breaking news during the show today! This week was yet another Qwen-mas, with Alibaba absolutely dominating across open source, but also NVIDIA promising to invest up to $100 Billion into OpenAI. So let's dive right in! As a reminder, all the show notes are posted at the end of the article for your convenience.

ThursdAI - Because weeks are getting denser, but we're still here, weekly, sending you the top AI content! Don't miss out

Table of Contents

* Open Source AI
* Qwen3-VL Announcement (Qwen3-VL-235B-A22B-Thinking)
* Qwen3-Omni-30B-A3B: end-to-end SOTA omni-modal AI unifying text, image, audio, and video
* DeepSeek V3.1 Terminus: a surgical bugfix that matters for agents
* Evals & Benchmarks: agents, deception, and code at scale
* Big Companies, Bigger Bets!
* OpenAI: ChatGPT Pulse: Proactive AI news cards for your day
* xAI Grok 4 Fast - 2M context, 40% fewer thinking tokens, shockingly cheap
* Alibaba Qwen-Max and plans for scaling
* This Week's Buzz: W&B Fully Connected is coming to London and Tokyo & another hackathon in SF
* Vision & Video: Wan 2.2 Animate, Kling 2.5, and Wan 4.5 preview
* Moondream-3 Preview - Interview with co-founders Vik & Jay
* Wan open sourced Wan 2.2 Animate (aka “Wan Animate”): motion transfer and lip sync
* Kling 2.5 Turbo: cinematic motion, cheaper and with audio
* Wan 4.5 preview: native multimodality, 1080p 10s, and lip-synced speech
* Voice & Audio
* ThursdAI - Sep 25, 2025 - TL;DR & Show notes

Open Source AI

This was a Qwen-and-friends week. I joked on stream that I should just count how many times “Alibaba” appears in our show notes.
It's a lot.

Qwen3-VL Announcement (Qwen3-VL-235B-A22B-Thinking): (X, HF, Blog, Demo)

Qwen 3 launched earlier as a text-only family; the vision-enabled variant just arrived, and it's not timid. The “thinking” version is effectively a reasoner with eyes, built on a 235B-parameter backbone with around 22B active (their mixture-of-experts trick). What jumped out is the breadth of evaluation coverage: MMMU, video understanding (Video-MME, LVBench), 2D/3D grounding, doc VQA, chart/table reasoning—pages of it. They're showing wins against models like Gemini 2.5 Pro and GPT‑5 on some of those reports, and doc VQA is flirting with “nearly solved” territory in their numbers.

Two caveats. First, whenever scores get that high on imperfect benchmarks, you should expect healthy skepticism; known label issues can inflate numbers. Second, the model is big. Incredible for server-side grounding and long-form reasoning with vision (they're talking about scaling context to 1M tokens for two-hour video and long PDFs), but not something you throw on a phone.

Still, if your workload smells like “reasoning + grounding + long context,” Qwen 3 VL looks like one of the strongest open-weight choices right now.

Qwen3-Omni-30B-A3B: end-to-end SOTA omni-modal AI unifying text, image, audio, and video (HF, GitHub, Qwen Chat, Demo, API)

Omni is their end-to-end multimodal chat model that unites text, image, and audio—and crucially, it streams audio responses in real time while thinking separately in the background. Architecturally, it's a 30B MoE with around 3B active parameters at inference, which is the secret to why it feels snappy on consumer GPUs.

In practice, that means you can talk to Omni, have it see what you see, and get sub-250 ms replies in nine speaker languages while it quietly plans. It claims to understand 119 languages.
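A quick aside on why that MoE shape matters: you still pay memory for every parameter, but per-token compute only for the active ones. Here's a back-of-envelope sketch; the 30B-total/3B-active figures are from the release, while the 2-bytes-per-weight (fp16) and 2-FLOPs-per-parameter-per-token rules of thumb are my own simplifying assumptions:

```python
# Rough arithmetic for why a mixture-of-experts model can feel "snappy":
# weight memory scales with TOTAL parameters, but per-token compute
# scales only with ACTIVE parameters.

def moe_profile(total_params: float, active_params: float,
                bytes_per_param: int = 2):
    """Return (weight memory in GB, fraction of a dense model's per-token compute)."""
    mem_gb = total_params * bytes_per_param / 1e9
    # A transformer forward pass costs ~2 FLOPs per parameter per token,
    # so the MoE-vs-dense compute ratio is simply active/total.
    compute_fraction = active_params / total_params
    return mem_gb, compute_fraction

mem, frac = moe_profile(30e9, 3e9)
print(f"weights: ~{mem:.0f} GB at fp16, per-token compute: {frac:.0%} of dense")
```

So you still need room for all 30B weights (less with quantization), but each generated token costs roughly what a dense 3B model would — which is where the responsiveness comes from.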
When I pushed it in multilingual conversational settings it still code-switched unexpectedly (Chinese suddenly appeared mid-flow), and it occasionally suffered the classic “stuck in thought” behavior we've been seeing in agentic voice modes across labs. But the responsiveness is real, and the footprint is exciting for local speech streaming scenarios. I wouldn't replace a top-tier text reasoner with this for hard problems, yet being able to keep speech native is a real UX upgrade.

Qwen Image Edit, Qwen TTS Flash, and Qwen‑Guard

Qwen's image stack got a handy upgrade with multi-image reference editing for more consistent edits across shots—useful for brand assets and style-tight workflows. TTS Flash (API-only for now) is their fast speech synth line, and Q‑Guard is a new safety/moderation model from the same team. It's notable because Qwen hasn't really played in the moderation-model space before; historically Meta's Llama Guard led that conversation.

DeepSeek V3.1 Terminus: a surgical bugfix that matters for agents (X, HF)

DeepSeek whale resurfaced to push a small 0.1 update to V3.1 that reads like a “quality and stability” release—but those matter if you're building on top. It fixes a code-switching bug (the “sudden Chinese” syndrome you'll also see in some Qwen variants), improves tool-use and browser execution, and—importantly—makes agentic flows less likely to overthink and stall. On the numbers, Humanity's Last Exam jumped from 15 to 21.7, while LiveCodeBench dipped slightly. That's the story here: they traded a few raw points on coding for more stable, less dithery behavior in end-to-end tasks. If you've invested in their tool harness, this may be a net win.

Liquid Nanos: small models that extract like they're big (X, HF)

Liquid Foundation Models released “Liquid Nanos,” a set of open models from roughly 350M to 2.6B parameters, including “extract” variants that pull structure (JSON/XML/YAML) from messy documents.
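The “small enough for CPUs” appeal of models in this size class comes down to simple arithmetic: weight memory is roughly parameter count times bytes per parameter. A sketch, using the 350M–2.6B sizes mentioned here and ignoring runtime overheads like activations and KV cache:

```python
# Back-of-envelope weight footprint: parameter count × bytes per parameter.
# fp16 = 2 bytes/param; 8-bit quantization = 1 byte/param.

def weight_mb(params: float, bits: int) -> float:
    """Approximate weight memory in megabytes."""
    return params * (bits / 8) / 1e6

for name, params in [("350M nano", 350e6), ("2.6B nano", 2.6e9)]:
    print(f"{name}: ~{weight_mb(params, 8):.0f} MB at 8-bit, "
          f"~{weight_mb(params, 16):.0f} MB at fp16")
```

A 350M model in 8-bit is in the ~350 MB range — comfortably inside commodity-CPU territory — while even the 2.6B variant quantized fits on very modest GPUs.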
The pitch is cost-efficiency with surprisingly competitive performance on information extraction tasks versus models 10× their size. If you're doing at-scale doc ingestion on CPUs or small GPUs, these look worth a try.

Tiny IBM OCR model that blew up the charts (HF)

We also saw a tiny IBM model (about 250M parameters) for image-to-text document parsing trending on Hugging Face. Run in 8-bit, it squeezes into roughly 250 MB, which means Raspberry Pi and “toaster” deployments suddenly get decent OCR/transcription against scanned docs. It's the kind of tiny-but-useful release that tends to quietly power entire products.

Meta's 32B Code World Model (CWM) released for agentic code reasoning (X, HF)

Nisten got really excited about this one, and once he explained it, I understood why. Meta released a 32B code world model that doesn't just generate code - it understands code the way a compiler does. It's thinking about state, types, and the actual execution context of your entire codebase.

This isn't just another coding model - it's a fundamentally different approach that could change how all future coding models are built. Instead of treating code as fancy text completion, it's actually modeling the program from the ground up. If this works out, expect everyone to copy this approach.

Quick note, this one was released with a research license only!

Evals & Benchmarks: agents, deception, and code at scale

A big theme this week was “move beyond single-turn Q&A and test how these things behave in the wild,” with a bunch of new evals released. I wanted to cover them all in a separate segment.

OpenAI's GDP Eval: “economically valuable tasks” as a bar (X, Blog)

OpenAI introduced GDP Eval to measure model performance against real-world, economically valuable work. The design is closer to how I think about “AGI as useful work”: 44 occupations across nine sectors, with tasks judged against what an industry professional would produce.

Two details stood out.
First, OpenAI's own models didn't top the chart in their published screenshot—Anthropic's Claude Opus 4.1 led with roughly a 47.6% win rate against human professionals, while GPT‑5-high clocked in around 38%. Releasing a benchmark where you're not on top earns respect. Second, the tasks are legit. One example was a manufacturing engineer flow where the output required an overall design with an exploded view of components—the kind of deliverable a human would actually make.

What I like here isn't the precise percent; it's the direction. If we anchor progress to tasks an economy cares about, we move past “trivia with citations” and toward “did this thing actually help do the work?”

GAIA 2 (Meta Super Intelligence Labs + Hugging Face): agents that execute (X, HF)

MSL and HF refreshed GAIA, the agent benchmark, with a thousand new human-authored scenarios that test execution, search, ambiguity handling, temporal reasoning, and adaptability—plus a smartphone-like execution environment. GPT‑5-high led across execution and search; Kimi's K2 was tops among open-weight entries. I like that GAIA 2 bakes in time and budget constraints and forces agents to chain steps, not just spew plans. We need more of these.

Scale AI's “SWE-Bench Pro” for coding in the large (HF)

Scale dropped a stronger coding benchmark focused on multi-file edits, 100+ line changes, and large dependency graphs. On the public set, GPT‑5 (not Codex) and Claude Opus 4.1 took the top two slots; on a commercial set, Opus edged ahead. The broader takeaway: the action has clearly moved to test-time compute, persistent memory, and program-synthesis outer loops to get through larger codebases with fewer invalid edits. This aligns with what we're seeing across ARC‑AGI and SWE‑bench Verified.

The “Among Us” deception test (X)

One more that's fun but not frivolous: a group benchmarked models on the social deception game Among Us.
OpenAI's latest systems reportedly did the best job both lying convincingly and detecting others' lies. This line of work matters because social inference and adversarial reasoning show up in real agent deployments—security, procurement, negotiations, even internal assistant safety.

Big Companies, Bigger Bets!

Nvidia's $100B pledge to OpenAI for 10GW of compute

Let's say that number again: one hundred billion dollars. Nvidia announced plans to invest up to $100B into OpenAI's infrastructure build-out, targeting roughly 10 gigawatts of compute and power. Jensen called it the biggest infrastructure project in history. Pair that with OpenAI's Stargate-related announcements—five new datacenters with Oracle and SoftBank and a flagship site in Abilene, Texas—and you get to wild territory fast.

Internal notes circulating say OpenAI started the year around 230MW and could exit 2025 north of 2GW operational, while aiming at 20GW in the near term and a staggering 250GW by 2033. Even if those numbers shift, the directional picture is clear: the GPU supply and power curves are going vertical.

Two reactions. First, yes, the “infinite money loop” memes wrote themselves—OpenAI spends on Nvidia GPUs, Nvidia invests in OpenAI, the market adds another $100B to Nvidia's cap for good measure. But second, the underlying demand is real. If we need 1–8 GPUs per “full-time agent” and there are 3+ billion working adults, we are orders of magnitude away from compute saturation. The power story is the real constraint—and that's now being tackled in parallel.

OpenAI: ChatGPT Pulse: Proactive AI news cards for your day (X, OpenAI Blog)

In a #BreakingNews segment, we got an update from OpenAI on a feature that currently works only for Pro users but will come to everyone soon: proactive AI that learns from your chats, email, and calendar and will show you a new “feed” of interesting things every morning based on your likes and feedback!
Pulse marks OpenAI's first step toward an AI assistant that brings the right info before you ask, tuning itself with every thumbs-up, topic request, or app connection. I've tuned mine for today, we'll see what tomorrow brings! P.S. - Huxe is a free app from the creators of NotebookLM (Raiza was on our podcast!) that does a similar thing, so if you don't have Pro, check out Huxe, they just launched!

xAI Grok 4 Fast - 2M context, 40% fewer thinking tokens, shockingly cheap (X, Blog)

xAI launched Grok‑4 Fast, and the name fits. Think “top-left” on the speed-to-cost chart: up to 2 million tokens of context, a reported 40% reduction in reasoning token usage, and a price tag that's roughly 1% of some frontier models on common workloads. On LiveCodeBench, Grok‑4 Fast even beat Grok‑4 itself. It's not the most capable brain on earth, but as a high-throughput assistant that can fan out web searches and stitch answers in something close to real time, it's compelling.

Alibaba Qwen-Max and plans for scaling (X, Blog, API)

Back in the Alibaba camp, they also released their flagship API model, Qwen 3 Max, and showed off their future roadmap. Qwen-Max is over 1T parameters, a MoE that gets 69.6 on SWE-bench Verified and outperforms GPT-5 on LMArena! And their plan is simple: scale. They're planning to go from 1 million to 100 million token context windows and scale their models into the terabytes of parameters. It culminated in a hilarious moment on the show where we all put on sunglasses to salute a slide from their presentation that literally said, “Scaling is all you need.” AGI is coming, and it looks like Alibaba is one of the labs determined to scale their way there. Their release schedule lately (as documented by Swyx from Latent.space) is insane.

This Week's Buzz: W&B Fully Connected is coming to London and Tokyo & another hackathon in SF

Weights & Biases (now part of the CoreWeave family) is bringing Fully Connected to London on Nov 4–5, with another event in Tokyo on Oct 31.
If you're in Europe or Japan and want two days of dense talks and hands-on conversations with teams actually shipping agents, evals, and production ML, come hang out. Readers got a code on stream; if you need help getting a seat, ping me directly. Links: fullyconnected.com

We are also opening up registrations to our second WeaveHacks hackathon in SF, October 11-12. Yours truly will be there, come hack with us on self-improving agents! Register HERE

Vision & Video: Wan 2.2 Animate, Kling 2.5, and Wan 4.5 preview

This is the most exciting space in AI week-to-week for me right now. The progress is visible. Literally.

Moondream-3 Preview - Interview with co-founders Vik & Jay

While I've already reported on Moondream-3 in last week's newsletter, this week we got the pleasure of hosting Vik Korrapati and Jay Allen, the co-founders of Moondream, to tell us all about it. Tune in for that conversation on the pod starting at 00:33:00.

Wan open sourced Wan 2.2 Animate (aka “Wan Animate”): motion transfer and lip sync

Tongyi's Wan team shipped an open-source release that the community quickly dubbed “Wanimate.” It's a character-swap/motion transfer system: provide a single image for a character and a reference video (your own motion), and it maps your movement onto the character with surprisingly strong hair/cloth dynamics and lip sync. If you've used Runway's Act One, you'll recognize the vibe—except this is open, and the fidelity is rising fast.

The practical uses are broader than “make me a deepfake.” Think onboarding presenters with perfect backgrounds, branded avatars that reliably say what you need, or precise action blocking without guessing at how an AI will move your subject. You act it; it follows.

Kling 2.5 Turbo: cinematic motion, cheaper and with audio

Kling quietly rolled out a 2.5 Turbo tier that's 30% cheaper and finally brings audio into the loop for more complete clips.
Prompts adhere better, physics look more coherent (acrobatics stop breaking bones across frames), and the cinematic look has moved from “YouTube short” to “film-school final.” They seeded access to creators and re-shared the strongest results; the consistency is the headline. (Source X: @StevieMac03)

I chatted with my kiddos today over FaceTime, and they were building Minecraft creepers. I took a screenshot, sent it to Nano Banana to make their creepers into actual Minecraft ones, and then with Kling, animated the explosions for them. They LOVED it! Animations were clear, and while Veo refused to even let me upload their images, Kling didn't care haha

Wan 4.5 preview: native multimodality, 1080p 10s, and lip-synced speech

Wan also teased a 4.5 preview that unifies understanding and generation across text, image, video, and audio. The eye-catching bit: generate a 1080p, 10-second clip with synced speech from just a script. Or supply your own audio and have it lip-sync the shot. I ran my usual “interview a polar bear dressed like me” test and got one of the better results I've seen from any model. We're not at “dialogue scene” quality, but “talking character shot” is getting… good. The audio generation (not only text + lip sync) is one of the best besides Veo; it's really great to see how strongly this improves, sad that this wasn't open sourced! And apparently it supports “draw text to animate” (Source: X)

Voice & Audio

Suno V5: we've entered the “I can't tell anymore” era

Suno calls V5 a redefinition of audio quality. I'll be honest, I'm at the edge of my subjective hearing on this. I've caught myself listening to Suno streams instead of Spotify and forgetting anything is synthetic. The vocals feel more human, the mixes cleaner, and the remastering path (including upgrading V4 tracks) is useful.
The last 10% to “you fooled a producer” is going to be long, but the distance between V4 and V5 already makes me feel like I should re-cut our ThursdAI opener.

MiMI Audio: a small omni-chat demo that hints at the floor

We tried a MiMI Audio demo live—a 7B-ish model with speech in/out. It was responsive but stumbled on singing and natural prosody. I'm leaving it in here because it's a good reminder that the open floor for “real-time voice” is rising quickly even for small models. And the moment you pipe a stronger text brain behind a capable, native speech front-end, the UX leap is immediate.

OK, another DENSE week that finishes up Shiptember: tons of open source, Qwen (Tongyi) shines, and video is getting so, so good. This is all converging, folks, and honestly, I'm just happy to be along for the ride!

This week was also Rosh Hashanah, the Jewish new year, and I shared on the pod that I found my X post from 3 years ago, using the state-of-the-art AI models of the time. WHAT A DIFFERENCE 3 years makes, just take a look, I had to scale down the 4K one from this year just to fit into the pic! Shana Tova to everyone who's reading this, and we'll see you next week.
Rate cut - rates up? Diet Stocks - losing weight. Good news/bad news - all good for markets. Bessent for Fed Chair and Treasury Secretary? PLUS we are now on Spotify and Amazon Music/Podcasts!

Click HERE for Show Notes and Links

DHUnplugged is now streaming live - with listener chat. Click on link on the right sidebar.
Love the Show? Then how about a Donation?
Follow John C. Dvorak on Twitter
Follow Andrew Horowitz on Twitter

Warm-Up
- BRAND new server - all provisioned - much faster DH site
- Need a new CTP stock!
- New Clear Stocks! - To the Sky
- Money Tree Market
- TikTok news

Markets
- Rate cut - rates up
- Diet Stocks - losing weight
- Good news/bad news - all good for markets
- StubHub IPO Update

SELL Rosh Hashanah - Buy Yom Kippur?

Vanguard Issues? Got a call this morning... Gent in NY...

NEW CLEAR - On Fire!
- Have you seen the returns on some of these stocks? YTD:
- URA (Uranium ETF) up 75%
- SMR (NuScale) up 164%
- OKLO (Oklo) up 518%
- CCJ (Cameco) up 65%

TikTok Nonsense
- President Donald Trump said in an interview that aired Sunday that conservative media baron Rupert Murdoch and his son Lachlan are likely to be involved in the proposal to save TikTok in the United States.
- Trump also said that Oracle executive chairman Larry Ellison and Dell Technologies CEO Michael Dell are also likely to be involved in the TikTok deal.

More TikTok
- White House Press Secretary Karoline Leavitt says TikTok's algorithm will be secured, retrained, and operated in the U.S. outside of ByteDance's control; Oracle (ORCL) will serve as TikTok's security provider; President Trump will sign the TikTok deal later this week.
- What does that mean, and will it be the same TikTok?
- Who is doing the retraining??????? SO MANY QUESTIONS

MEME ALERT!
- Eric Jackson, a hedge fund manager who partly contributed to the trading explosion in Opendoor, unveiled his new pick Monday — Better Home & Finance Holding Co.
- Jackson said his firm holds a position in Better Home but didn't disclose its size.
- Shares of Better Home soared 46.6% on Monday after Jackson touted the stock on X. At one point during the session, the stock more than doubled in price.
- The New York-based mortgage lender jumped more than 36% last week.

Intel
- INTC getting even more money. Now NVDA pouring in $5B.
- Nvidia and Intel announced a partnership to jointly develop multiple generations of custom data center and PC products. Intel will manufacture new x86 CPUs customized for Nvidia's AI infrastructure, and also build system-on-chips (SoCs) for PCs that integrate Nvidia's RTX GPU chiplets.
- Both the US Government and NVDA got BELOW market pricing on their shares.

NVDA $$
- Nvidia is investing in OpenAI. On September 22, 2025, Nvidia announced a strategic partnership with OpenAI, which includes an investment of up to $100 billion.
- The agreement will help deploy at least 10 gigawatts of Nvidia systems, which will include millions of its GPUs. The first phase is scheduled to launch in the second half of 2026, using Nvidia's Vera Rubin platform.

Autism Link
- Shares of Kenvue (KVUE) are trading lower largely due to reports from the White House and HHS suggesting a forthcoming warning linking prenatal use of acetaminophen (Tylenol's active ingredient) to autism risk.
- Investors are concerned that such a warning could lead to regulatory action, changes in labeling requirements, litigation risk, or reduced demand for one of KVUE's key products. It's estimated that Tylenol accounts for approximately 7-9% of KVUE's total revenue.
- The company has strongly denied any scientific basis for the link, but the uncertainty itself is hurting sentiment.
- Finally, this also comes on top of recent weak financial performance: KVUE posted a Q2 revenue decline of 4% and cut its full-year guidance on August 7.
- Lawsuits to follow...

Pfizer
Elizabeth Figura is a Wine developer at CodeWeavers. We discuss how Wine and Proton make it possible to run Windows applications on other operating systems. Related links: WineHQ, Proton, CrossOver, Direct3D, MoltenVK, XAudio2, Mesa 3D Graphics Library. Transcript: You can help correct transcripts on GitHub.

Intro

[00:00:00] Jeremy: Today I am talking to Elizabeth Figura. She's a Wine developer at CodeWeavers. And today we're gonna talk about what that is and, uh, all the work that goes into it.

[00:00:09] Elizabeth: Thank you Jeremy. I'm glad to be here.

What's Wine

[00:00:13] Jeremy: I think the first thing we should talk about is maybe saying what Wine is, because I think a lot of people aren't familiar with the project.

[00:00:20] Elizabeth: So Wine is a translation layer. In fact, I would say Wine is a Windows emulator. That is what the name originally stood for. It re-implements the entire Windows, or you'd say Win32, API, so that programs that make calls into the API will have those calls handled by Wine, and that allows Windows programs to run on things that are not Windows. So Linux, macOS, other operating systems such as Solaris and BSD. It works not by emulating the CPU, but by re-implementing every API, basically from scratch, and translating them to their equivalent, or writing new code in case there is no, you know, equivalent.

System Calls

[00:01:06] Jeremy: I believe what you're doing is you're emulating system calls. Could you explain what those are and, and how that relates to the project?

[00:01:15] Elizabeth: Yeah. So system call in general can be used to refer to a call into the operating system, to execute some functionality that's built into the operating system. Often it's used in the context of talking to the kernel. Windows applications actually tend to talk at a much higher level, because there's so much, so much high level functionality built into Windows.
When you think about it, as opposed to other operating systems, we basically end up implementing much higher level behavior than you would on Linux.

[00:01:49] Jeremy: And can you give some examples of what some of those system calls would be and, I suppose, how they may be higher level than some of the Linux ones?

[00:01:57] Elizabeth: Sure. So of course you have like low level calls like interacting with a file system, you know, CreateFile and read and write and such. You also have, uh, high level APIs to interact with a sound driver.

[00:02:12] Elizabeth: There's, uh, one I was working on earlier today, called XAudio, where you, actually, you know, build this bank of sounds. It's meant to be played in a game, and then you can position them in various 3D space. And the, and the operating system in a sense will take care of all of the math that goes into making that work.

[00:02:36] Elizabeth: That's all running on your computer, and then it'll send that audio data to the sound card once it's transformed it, so it sounds like it's coming from a certain space. A lot of other things like, you know, parsing XML is another big one. There's a lot of things. The space is honestly huge.

[00:02:59] Jeremy: And yeah, I can sort of see how those might be things you might not expect to be done by the operating system. Like you gave the example of 3D audio and XML parsing, and I think XML parsing in, in particular, you would've thought that that would be something that would be handled by the, the standard library of whatever language the person was writing their application as.

[00:03:22] Jeremy: So that's interesting that it's built into the OS.

[00:03:25] Elizabeth: Yeah. Well, in languages like C it's not, it isn't even part of the standard library. It's higher level than that. You have specific libraries that are widespread but not codified in a standard, but in Windows they are part of the operating system. And in fact, there's several different XML parsers in the operating system. Microsoft likes to deprecate old APIs and make new ones that do the same thing very often.

[00:03:53] Jeremy: And something I've heard about Windows is that they're typically very reluctant to break backwards compatibility. So you say they're deprecated, but do they typically keep all of them still in there?

[00:04:04] Elizabeth: It all still works.

[00:04:07] Jeremy: And that's all things that Wine has to implement as well to make sure that the software works as well.

[00:04:14] Elizabeth: Yeah. And, and we also, you know, need to make it work. We also need to implement those things to make old programs work, because there is, uh, a lot of demand, at least from people using Wine, for getting some really old programs working, from the early nineties even.

What people run with Wine (Productivity, build systems, servers)

[00:04:36] Jeremy: And that's probably a good thing to talk about in terms of what, what are the types of software that, that people are trying to run with Wine, and what operating system are they typically using?

[00:04:46] Elizabeth: Oh, in terms of software, literally all kinds. Any software you can imagine that runs on Windows, people will try to run it on Wine. So we're talking games, office software, productivity software, accounting. People will run build systems on Wine, build their programs using Visual Studio running on Wine. People will run Wine on servers, for example, like software-as-a-service kind of things where you don't even know that it's running on Wine. Really super domain specific stuff. Like I've run astronomy software in Wine. Design, computer assisted design, even hardware drivers can sometimes work in Wine. There's a bit of a gray area.
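The “translation layer” idea Elizabeth describes can be sketched in miniature: keep a Windows-shaped interface on top, and implement it with the host OS's own primitives underneath. The toy below is my own illustration, not Wine code; the constants mirror a couple of Win32 CreateFile flag values, but the function itself is deliberately simplified:

```python
# Toy "translation layer": a Windows-style CreateFile call re-implemented on
# top of POSIX-style os.open(). The caller sees a Win32-shaped interface; the
# host OS only ever sees its native primitives. (Simplified illustration only.)
import os
import tempfile

GENERIC_READ = 0x80000000   # Win32 access-mode flags
GENERIC_WRITE = 0x40000000
OPEN_ALWAYS = 4             # Win32 creation disposition: open, create if missing

def CreateFileA(path, desired_access, creation_disposition):
    """Translate a (much simplified) Win32-style call into a POSIX open()."""
    flags = 0
    if desired_access & GENERIC_READ and desired_access & GENERIC_WRITE:
        flags |= os.O_RDWR
    elif desired_access & GENERIC_WRITE:
        flags |= os.O_WRONLY
    if creation_disposition == OPEN_ALWAYS:
        flags |= os.O_CREAT
    return os.open(path, flags)  # the Windows "handle" is just a POSIX fd here

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
handle = CreateFileA(path, GENERIC_READ | GENERIC_WRITE, OPEN_ALWAYS)
os.write(handle, b"hello from the translation layer")
os.close(handle)
print(open(path).read())
```

Real Wine has to get thousands of such calls right, including their error codes, edge cases, and decades of backwards-compatible quirks, which is where the actual work lies.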
How games are different [00:05:29] Jeremy: Yeah, I think, from maybe the general public, or at least from what I've seen, a lot of people's exposure to it is for playing games. Is there something different about games versus all those other types of productivity and office software that makes supporting them different? [00:05:53] Elizabeth: Um, there's some things about it that are different. Games of course have gotten a lot of publicity lately, because there's been a huge push, largely from Valve but also some other companies, to get a huge, wide range of games working well under Wine. And that's really panned out; I think we've largely succeeded. [00:06:13] Elizabeth: We've made huge strides in the past several years. Five, ten years, I think. So when you talk about what makes games different, I think one thing games tend to do is they have a very limited set of things they're working with, and they often want to make things run fast, so they're working very close to the metal. They're not gonna use an XML parser, for example. [00:06:44] Elizabeth: They're just gonna talk as directly to the graphics driver as they can. Right. And they're probably going to do all their own sound design. You know, I did talk about that XAudio library, but a lot of games will just talk as directly to the sound driver as Windows lets them. So this is often a blessing, honestly, because it means there's less we have to implement to make them work. The other thing that makes some productivity applications harder is, Microsoft makes 'em. And they like to make a library for use in this one program, like Microsoft Office, and then say, well, you know, other programs might use this as well. Let's put it in the operating system and expose it and write an API for it and everything. And maybe some other programs use it.
Mostly it's just Office, but it means that Office relies on a lot of things from the operating system that we all have to reimplement. [00:07:44] Jeremy: Yeah, that's somewhat counterintuitive, because when you think of games, you think of these really high performance things that seem really complicated. But it sounds like, from what you're saying, because they use the lower level primitives, they're actually easier in some ways to support. [00:08:01] Elizabeth: Yeah, certainly in some ways. They'll do things like reimplement the heap allocator because the built-in heap allocator isn't fast enough for them. That's another good example. What makes some applications hard to support (Some are hard, can't debug other people's apps) [00:08:16] Jeremy: You mentioned Microsoft's more modern Office suites. I've noticed there's certain applications that aren't supported, like, for example, I think the modern Adobe Creative Suite. What's the difference with software like that, and does that also apply to the modern Office suite, or is that actually supported? [00:08:39] Elizabeth: Well, in one case you have things like Microsoft using their own APIs that I mentioned. With Adobe that applies less, I suppose, but I think to some degree the answer is that some applications are just hard, and there's no way around it. And we can only spend so much time on a hard application. Debugging things can get very hard with Wine. Let me explain that for a minute. Normally when you think about debugging an application, you say, oh, I'm gonna open up my debugger, pop in a breakpoint at this point, see whether all the variables are what I expect. Or maybe wait for it to crash and then get a backtrace and see where it crashed.
And you can't do that with Wine, because you don't have the application's source. You don't have debugging symbols. You don't know anything about the code you're running unless you take the time to disassemble and decompile and read through it. And that's difficult. Every time I've looked at a program and been like, I'm just gonna try and figure out what the program is doing... [00:10:00] Elizabeth: It takes so much time, and it is never worth it. Sometimes you have no other choice, but usually you end up having to rely on seeing what calls it makes into the operating system and trying to guess which one of those is going wrong. Now, sometimes you'll get lucky and it'll crash in Wine code, or sometimes it'll make a call into a function that we don't implement yet, and we know, oh, we need to implement that function. But sometimes it does something more obscure, and we have to figure out: of all these millions of calls it made, which one are we implementing incorrectly, so that it's returning the wrong result or not doing something that it should? And then you add onto that all these sort of harder to debug things, like memory errors that we could make. It can be very difficult, and so some applications just suffer from those hard bugs. And sometimes it's also just a matter of not enough demand for something for us to spend a lot of time on it. [00:11:11] Elizabeth: Right. [00:11:14] Jeremy: Yeah, I can see how that would be really challenging, because like you were saying, you don't have the symbols, you don't have the source code, so you don't know how any of this software you're supporting was actually written. And you were saying that
a lot of times, you know, there may be some behavior that's wrong or a crash, but it's not because Wine crashed or there was an error in Wine. [00:11:42] Jeremy: So you just know the system calls it made, but you don't know which of the system calls didn't behave the way that the application expected. [00:11:50] Elizabeth: Exactly. Test suite (Half the code is tests) [00:11:52] Jeremy: I can see how that would be really challenging. And Wine runs so many different applications. I'm kind of curious how you even track what's working and what's not as you change Wine, because if you support thousands or tens of thousands of applications, you know, how do you know when you've got a regression or not? [00:12:15] Elizabeth: So, it's a great question. Probably over half of Wine by source code volume (I'd have to actually check what it is, but I think it's probably over half) is what we call tests. And these tests serve two purposes. The one purpose is as regression tests. And the other purpose is they're conformance tests that test how an API behaves on Windows and validate that we are behaving the same way. So we write all these tests, we run them on Windows and, you know, write the tests to check what Windows returns, and then we run 'em on Wine and make sure that that matches. And we have just such a huge body of tests to make sure that we're not breaking anything, and that all the code that gets into Wine that looks like, wow, is it really doing that? Nope, that's what Windows does. The test says so. So pretty much any new code that we get has to have tests to validate, to demonstrate that it's doing the right thing.
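The pattern Elizabeth describes looks roughly like this. Wine's real conformance tests are built around an ok() macro; what follows is a simplified stand-in harness of our own, not Wine's actual code, exercising a stand-in for a simple kernel32-style string function. The documented quirk being pinned down here, that lstrlenA returns 0 for a NULL pointer rather than crashing, is our example and should be verified against Windows, which is exactly what running the same test source on Windows accomplishes.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static int failures = 0;

/* Simplified version of the ok() macro used throughout Wine's
   conformance tests: check a condition, report on failure. */
#define ok(cond, msg) do { \
    if (!(cond)) { failures++; printf("Test failed: %s", msg); } \
} while (0)

/* Stand-in for the function under test. On Windows this would be
   the real function from a system DLL; Wine's job is to match it. */
static int my_lstrlenA(const char *s)
{
    return s ? (int)strlen(s) : 0;
}

static void test_lstrlenA(void)
{
    ok(my_lstrlenA("wine") == 4, "wrong length\n");
    ok(my_lstrlenA("") == 0, "empty string\n");
    /* The behavior worth pinning down: NULL returns 0, no crash. */
    ok(my_lstrlenA(NULL) == 0, "NULL should return 0\n");
}
```

Because the same test source compiles and runs on both Windows and Wine, any assertion that passes on Windows but fails on Wine points directly at a behavior Wine implements incorrectly.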
[00:13:31] Jeremy: And so rather than testing against a specific application, seeing if it works, you're making a call to a Windows system call, seeing how it responds, and then making the same call within Wine and just making sure they match. [00:13:48] Elizabeth: Yes, exactly. And that is obviously a lot more automatable, right? Because otherwise, you know, these are all graphical applications. [00:14:02] Elizabeth: You'd have to manually do the things and make sure they work. But if you write automatable tests, you can just run them all, and the machine will complain at you if it fails in continuous integration. How compatibility problems appear to users [00:14:13] Jeremy: And because there's all these potential compatibility issues, where maybe a certain call doesn't behave the way an application expects, what are the ways that shows up when someone's using software? I mean, I think you mentioned crashes, but I imagine there could be all sorts of other types of behavior. [00:14:37] Elizabeth: Yes, very much so. Basically anything you can imagine, again, is what will happen. Crashes are the easy ones, because you know when and where it crashed and you can work backwards from there. But it could also hang, or it could not render right, like maybe render a black screen. For games, you could very frequently have graphical glitches, where maybe some objects won't render right, or the entire screen will be red. Who knows? In a very bad case, you could even bring down your system, and we usually say that's not Wine's fault, that's the graphics library's fault, 'cause they're not supposed to do that no matter what we do. But, you know, sometimes we have to work around that anyway. But yeah, there's been some very strange and idiosyncratic bugs out there too. [00:15:33] Jeremy: Yeah.
And like you mentioned, there's so many different things that could have gone wrong that I imagine it's very difficult to find. And when software runs through Wine, Performance is comparable to native [00:15:49] Jeremy: A lot of our listeners will probably be familiar with running things in a virtual machine, and they know that there's a big performance impact from doing that. [00:15:57] Jeremy: How does the performance of applications compare to running natively on the original Windows OS versus virtual machines? [00:16:08] Elizabeth: So, in theory (and I haven't actually done this comparison recently, so I can't speak too much to it), the idea is it's a lot faster. There is a bit of a joke acronym in Wine: Wine Is Not an Emulator, even though I started out by saying Wine is an emulator, and it was originally called a Windows emulator. But what this basically means is Wine is not a CPU emulator. When you think about emulators in a general sense, they're often emulators for specific CPUs, often older ones, like, you know, a Commodore emulator or an Amiga emulator. But in this case, you have software that's written for an x86 CPU, and it's running on an x86 CPU, executing the same instructions it would on Windows. It's just that when it says, now call this Windows function, it calls us instead. So that all should perform exactly the same, as opposed to a virtual machine where you have to interpret the instructions and maybe translate them to a different instruction set. The only performance difference is going to be in the functions that we are implementing ourselves, and we try to implement them to perform as well, or almost as well, as Windows.
There's always going to be a bit of a theoretical gap, because we have to translate from, say, one API to another, but we try to make that as small as possible. And in some cases, the operating system we're running on is just better than Windows, and the libraries we're using are better than Windows'. [00:18:01] Elizabeth: And so our games will run faster, for example. Sometimes we can do a better job than Windows at implementing something that's under our purview. There are some games that do actually run a little bit faster in Wine than they do on Windows. [00:18:22] Jeremy: Yeah, that reminds me of how there's these gaming handhelds out now, and some of the same ones, they either let you install Linux or install Windows, or they just come with one pre-installed. And I believe what I've read is that oftentimes, running the same game on both operating systems, on Linux the battery life is better and sometimes even the performance is better with these handhelds. [00:18:53] Jeremy: So it's really interesting that that can even be the case. [00:18:57] Elizabeth: Yeah, it's really a testament to the huge amount of work that's gone into that, both on the Wine side and on the side of the graphics team and the kernel team. And, of course, the years of work that's gone into Linux, even before these gaming handhelds were even under consideration. Proton and Valve Software's role [00:19:21] Jeremy: And so for people who are familiar with the handhelds, like the Steam Deck, they may have heard of Proton. I wonder if you can explain what Proton is and how it relates to Wine. [00:19:37] Elizabeth: Yeah. So, Proton is basically, how do I describe this? Proton is a sort of fork, although we try to avoid the term fork; we say it's a downstream distribution, because we contribute back up to Wine.
So it is an alternate distribution of Wine. And it's also some code that basically glues Wine into an embedding application, originally intended for Steam and developed for Valve. It has also been used in other software. So, where Proton differs from Wine, besides the glue part, is it has some extra hacks in it for bugs that are hard to fix and easy to hack around: some quick hacks for making games work now that are in the process of going upstream to Wine, getting their code quality improved, and going through review. [00:20:54] Elizabeth: But we want the game to work now, when we distribute it. So that'll go into Proton immediately. And then once the patch makes it upstream, we replace it with the version of the patch from upstream. There's other things to make it interact nicely with Steam and so on. And yeah, I think that's it. [00:21:19] Jeremy: Yeah. And I think for people who aren't familiar, Steam is like this, um, I don't even know what you call it, like a gaming store and a... [00:21:29] Elizabeth: A game distribution service. It's got a huge variety of games on it, and you just publish. It's a great way for publishers to interact with a wider gaming community; after paying a cut of their profits to Valve, they can reach a lot of people that way. And because all these games are on Steam, and Valve wants them to work well on their handheld, they contracted us to basically take their entire catalog, which is huge, enormous, and just, step by step, fix every game and make them all work. [00:22:10] Jeremy: So, and I guess for people who aren't familiar, Valve Software is the company that runs Steam. And so it sounds like they've asked your company to help improve the compatibility of their catalog. [00:22:24] Elizabeth: Yes.
Valve contracted us, and, again, when you're talking about Wine using lower level libraries, they've also contracted a lot of other people outside of Wine. Basically the entire stack has had a tremendous investment by Valve Software to make gaming on Linux work well. The entire stack receives changes to improve Wine compatibility [00:22:48] Jeremy: And when you refer to the entire stack, what are some of those pieces, at least at a high level? [00:22:54] Elizabeth: Let's see, let me think. There is the Wine project; the Mesa graphics libraries, that's another open source software project that has existed for a long time, but Valve has put a lot of funding and effort into it; the Linux kernel, in various different ways. [00:23:17] Elizabeth: The desktop environment and window manager are also things they've invested in. [00:23:26] Jeremy: Yeah. Everything that the game needs, on any level, and that the operating system of the handheld device needs. Wine's history [00:23:37] Jeremy: And Wine's been going on for quite a while. I think it's over a decade, right? [00:23:44] Elizabeth: Oh, far more than a decade. I believe it started in the mid nineties, about 1995. I probably have that date wrong. [00:24:00] Jeremy: Mm. [00:24:00] Elizabeth: It's going on three decades at this rate. [00:24:03] Jeremy: Wow. Okay. [00:24:06] Jeremy: And so all this time, how has the project sort of sustained itself? Like, who's been involved, and how has it been able to keep going this long? [00:24:18] Elizabeth: Uh, I think, as is the case with a lot of free software, it just keeps trudging along. There's been times where there's a lot of interest in Wine.
There's been times where there's less, and we are fortunate to be in a time where there's a lot of interest in it. We've had the same maintainer for almost this entire existence, Alexandre Julliard. There was one person who maintained it before him and left maintainership to him after a year or two, Bob Amstadt. And there's been a few developers who have been around for a very long time, a lot of developers who have been around for a decent amount of time but not for the entire duration, and then a very, very large number of people who come and submit a one-off fix for the individual application that they want to make work. [00:25:19] Jeremy: How does CrossOver relate to the Wine project? Like, it sounds like you had mentioned Valve Software hired you for contract work, but CrossOver itself has been around for quite a while. So how has that been connected to the Wine project? [00:25:37] Elizabeth: So the company I work for is CodeWeavers, and CrossOver is our flagship software. CodeWeavers is a couple different things. We have a sort of porting service, where companies will come to us and say, can you port my application, usually to Mac? And then we also have a retail product, where we basically have our own thing similar to Proton, but older, the same idea, where we will add some hacks into it for very difficult to solve bugs, and we have a nice graphical interface. And then the other thing that we're selling with CrossOver is support. So if you, you know, try to run a certain application and you buy CrossOver, you can submit a ticket saying this doesn't work, and we now have a financial incentive to fix it. We'll spend company resources to fix your bug, right?
So CodeWeavers has been around since 1996, and CrossOver, I don't know the date, but it's probably been around for about two decades, if I'm not mistaken. [00:27:01] Jeremy: And when you mention helping companies port their software to, for example, macOS, is the approach that you would port it natively to macOS APIs, or is it that you would help them get it running using Wine on macOS? [00:27:21] Elizabeth: Right. So that's basically what makes us so unique among porting companies: instead of rewriting their software, we just basically stick it inside of CrossOver and make it run. [00:27:36] Elizabeth: And the idea has always been, you know, the more we implement, the more we get correct, the more applications will work. And sometimes it works out that way, sometimes not really so much. And there's always work we have to do to get any given application to work. But yeah, it's very unusual, because we don't ask companies for any of their code. We don't need it. We just fix the Windows API. [00:28:07] Jeremy: And so in that case, the ports would be, let's say someone sells a macOS version of their software, they would bundle CrossOver with their software. [00:28:18] Elizabeth: Right. And usually when you do this, it doesn't look like CrossOver is there. It just looks like the software is native, but there is CrossOver under the hood. Loading executables and linked libraries [00:28:32] Jeremy: And so, earlier we were talking about how you're basically intercepting the system calls that these binaries are making, whether that's the executable or the DLLs from Windows. But I think probably a lot of our listeners are not really sure how that's done. Like, they may have built software, but they don't know: how do I basically hijack the system calls that this application is making?
[00:29:01] Jeremy: So maybe you could talk a little bit about how that works. [00:29:04] Elizabeth: So there's a couple steps that go into it. When you think about a program, that's a big file that's got all the machine code in it, and then it's got stuff at the beginning saying, here's how the program works and here's where in the file the processor should start running. That's your EXE file. And then your DLL files are libraries that contain shared code, and you have a similar sort of file. It says, here's the entry point for this function, you know, this parse XML function or what have you. [00:29:42] Elizabeth: And here's the entry point that has the generate XML function, and so on and so forth. And then the operating system will basically take the EXE file and see all the bits in it that say, I want to call the parse XML function. It'll load that DLL and hook it up, so the processor ends up just seeing: jump directly to this parse XML function, run that, and then return, and so on. [00:30:14] Elizabeth: And so part of Wine is a library implementing that parse XML and read XML function, but part of it is the loader, which is the part of the operating system that hooks everything together. And when we load, we redirect to our libraries. We don't have Windows' libraries. [00:30:38] Elizabeth: We redirect to ours, and then we run our code, and then it jumps back to the program. [00:30:48] Jeremy: So it's the loader that's a part of Wine that's actually... I'm not sure if "running the executable" is the right term. [00:30:58] Elizabeth: No, I think that's a good term. It starts in the loader, and then we say, okay, now run the machine code in the executable, and then it runs and it jumps between our libraries and back and so on.
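The hookup described above can be modeled in miniature: an executable's import table names the functions it needs, and the loader patches each slot with an address before the code runs. Wine's loader fills those slots with Wine's own implementations instead of functions from Microsoft's DLLs. All names below are invented for illustration; the real PE import machinery is considerably more involved.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef int (*func_ptr)(const char *);

/* "Wine's" implementation of the imported API. */
static int wine_parse_xml(const char *doc)
{
    return doc != NULL && doc[0] == '<';  /* trivially "parse" */
}

/* One slot of the program's import table: a name the EXE asks for,
   and an address the loader fills in before the program runs. */
struct import_slot { const char *name; func_ptr addr; };

static struct import_slot imports[] = {
    { "ParseXML", NULL },
};

/* The loader's job: resolve each imported name to an address. Once
   this runs, the program's call jumps straight into our function,
   exactly as if a system DLL had provided it. */
static void loader_resolve(void)
{
    for (size_t i = 0; i < sizeof(imports) / sizeof(imports[0]); i++)
        if (strcmp(imports[i].name, "ParseXML") == 0)
            imports[i].addr = wine_parse_xml;
}
```

The program never knows the difference: it calls through the patched slot either way, which is why substituting the implementation underneath works without touching the program's code.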
[00:31:14] Jeremy: And like you were saying before, oftentimes when it's trying to make a system call, it ends up being handled by a function that you've written in Wine. And then that, in turn, will call the Linux system calls or the macOS system calls to try and accomplish the same result. [00:31:36] Elizabeth: Right, exactly. [00:31:40] Jeremy: And something that I think maybe not everyone is familiar with is this concept of user space versus kernel space. Can you explain what the difference is? [00:31:51] Elizabeth: So the way I would describe a kernel is: it's the part of the operating system that can do anything, right? Any code that runs on your computer is talking to the processor, and the processor has to be able to do anything the computer can do. [00:32:10] Elizabeth: It has to be able to talk to the hardware. It has to set up the memory space, which is actually a very complicated task. It has to be able to switch to another task, and basically talk to another program. You have to have something there that can do everything, but you don't want any program to be able to do everything. Not since the nineties; that's about when we realized that we can't do that. So the kernel is the part that can do everything. And when you need to do something that requires those permissions that you can't give everyone, you have to talk to the kernel and ask it: hey, can you do this for me, please? And in a very restricted way, where it's only the safe things you can do. And to a degree, it's also like a library, right? Kernels have always existed, and they've always been the core standard library of the computer that does things like read and write files, which are very, very complicated tasks under the hood, but look very simple because all you say is: write this file.
And it talks to the hardware and abstracts away all the differences between different drivers. So the kernel is doing all of these things. Because the kernel is the part that can do everything, and because the kernel is basically one program that is always running on your computer, but only one program, when a user program calls the kernel, you are switching from one program to another, and you're doing a lot of complicated things as part of this. You're switching to a higher privilege level where you can do anything, and you're switching the state from one program to another. So this is what we mean when we talk about user space, where you're running like a normal program, and kernel space, where you've suddenly switched into the kernel. [00:34:19] Elizabeth: Now you're executing with increased privileges, in a different view of the process space, with increased responsibility, and so on. [00:34:30] Jeremy: And so, when you were talking about the system calls for handling 3D audio or parsing XML, are those considered part of user space, and then those things call into kernel space on your behalf? Or how would you describe that? [00:34:50] Elizabeth: So when you look at Windows, the vast, vast majority of the Windows library is all user space. Most of these libraries that we implement never leave user space; they never need to call into the kernel. It's only the core low level stuff. Things like, we need to read a file: that's a kernel call. When you need to sleep and wait for some seconds, that's a kernel call. When you need to talk to a different process, things that interact with different processes in general. Not just allocating memory, but allocating a page of memory from the memory manager, which then gets suballocated by the heap allocator. So, things like that.
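On Linux, where the kernel interface is public and stable, the "thin wrapper" shape of these user space calls is easy to show: a function that validates a little and then traps into the kernel. This sketch assumes a Linux host, since it uses the raw syscall() interface; on Windows the equivalent system call numbers are private to ntdll and change between releases, which is why applications must go through the DLLs, and why Wine implements the DLLs rather than the raw kernel interface.

```c
#include <assert.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* A thin user space wrapper over a kernel call: do a little work
   in user space (a NULL check and a length computation), then
   switch into the kernel, which does the genuinely privileged part. */
static long my_write_str(int fd, const char *s)
{
    if (s == NULL)
        return -1;  /* handled entirely in user space, no kernel transition */
    return syscall(SYS_write, fd, s, strlen(s));  /* kernel transition here */
}
```

In practice you would call write() or fwrite() and let libc make this transition for you; spelling out syscall() just makes visible where user space ends and kernel space begins.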
[00:35:31] Jeremy: Yeah, so if I was writing an application and I needed to open a file, for example, does that mean that I would have to communicate with the kernel to read that file? [00:35:43] Elizabeth: Right, exactly. [00:35:46] Jeremy: And so for most applications, it sounds like it's gonna be a mixture. You're gonna have a lot of things that are user space calls, and then a few, you mentioned, more low level ones that are gonna require you to communicate with the kernel. [00:36:00] Elizabeth: Yeah, basically. And it's worth noting that in all operating systems, you're almost always gonna be calling a user space library that might just be a thin wrapper over the kernel call. It's gonna do just a little bit of work and then call the kernel. [00:36:19] Elizabeth: In fact, in Windows, that's the only way to do it. In many other operating systems, you can actually tell the processor to make the kernel call; there is a special instruction that does this, and it'll go directly to the kernel, and there's a defined interface for this. But in Windows, that interface is not defined. It's not stable or backwards compatible like the rest of Windows is. So even if you wanted to use it, you couldn't, and you basically have to call into the high level libraries, or low level libraries as it were, that create a file. And those don't do a lot. [00:37:00] Elizabeth: They just kind of tweak their parameters a little and then pass them right down to the kernel. [00:37:07] Jeremy: And so Wine, it sounds like, needs to implement both the user space calls of Windows but then also the kernel calls as well. But does Wine do that only in Linux user space or macOS user space? [00:37:27] Elizabeth: Yes. This is a very tricky thing. But all of Wine basically runs in user space, and we use
kernel calls that are already there to talk to the host kernel. You kind of get this second layer of thinking about the Windows user space and kernel. [00:37:50] Elizabeth: And then there's a host user space and kernel, and Wine is running all in the host user space, but it's emulating the Windows kernel. In fact, one of the weirdest, trickiest parts is, I mentioned that you can run some drivers in Wine. Those drivers actually think they're running in the Windows kernel, which in a sense works the same way: it has libraries that it can load, and those drivers are basically libraries, and they're making kernel calls, calls into the kernel library that does some very, very low level tasks that you're normally only supposed to be able to do in a kernel. And, you know, because the kernel requires some privileges, we kind of pretend we have them. And in many cases, even the drivers are using abstractions, and we can just implement those abstractions over the slightly higher level abstractions that exist in user space. [00:39:00] Jeremy: Yeah, I hadn't even considered being able to use hardware devices. But I suppose if, in the end, you're reproducing the kernel, then whether you're running software or talking to a hardware device, as long as you implement the calls correctly, then I suppose it works. [00:39:18] Jeremy: 'Cause you're talking about a device, like maybe it's some kind of USB device that has drivers for Windows but doesn't for Linux. [00:39:28] Elizabeth: That's exactly, that's kind of the example I would use. One of my best success stories was drivers for a graphing calculator. [00:39:41] Jeremy: Oh, wow.
[00:39:42] Elizabeth: It connected via USB, and I basically just plugged the Windows drivers into Wine and ran it. I had to implement a lot of things, but it worked. But for example, something like a graphics driver is not something you could implement in Wine, because you need the graphics driver on the host. We can't talk to the graphics driver while the host is already doing so. [00:40:05] Jeremy: I see. Yeah. And in that case, it probably doesn't make sense to do so. [00:40:12] Elizabeth: Right. It doesn't, because the transition from user into kernel is complicated. You need the graphics driver to be in the kernel, the real kernel. Having it in Wine would be a bad idea. Yeah. [00:40:25] Jeremy: I think there's enough APIs you have to try and reproduce that doing something like that would be... [00:40:32] Elizabeth: Very difficult. [00:40:33] Jeremy: Right. Poor system call documentation and private APIs [00:40:35] Jeremy: There's so many different calls, both in user space and in kernel space. I imagine Microsoft must document the user space ones to some extent. Is that the case? [00:40:51] Elizabeth: Well, sometimes. [00:40:54] Jeremy: Sometimes. Okay. [00:40:55] Elizabeth: I think it's actually better now than it used to be. But here's where things get fun, because sometimes there will be, you know, regular documented calls. Sometimes those calls are documented, but the documentation isn't very good. Sometimes programs will just sort of look inside Microsoft's DLLs and use calls that they aren't supposed to be using. Sometimes they use calls that they are supposed to be using, but the documentation has disappeared, just because it's that old of an API and Microsoft hasn't kept it around.
Sometimes Microsoft's own software uses APIs that were never documented, because they never wanted anyone else using them, but they still ship them with the operating system. There was actually a lawsuit about this, an antitrust lawsuit, because by shipping things that only they could use, they were kind of creating a trust. And that got some things documented. At least in theory; they kind of haven't stopped doing it, though. [00:42:08] Jeremy: Oh, so even today... I guess they would call those private APIs, I suppose. [00:42:14] Elizabeth: I suppose. Yeah, you could say private APIs. But if we want to get newer versions of Microsoft Office running, we still have to figure out what they're doing and implement them. [00:42:25] Jeremy: And given that, like you were saying, the documentation is kind of all over the place, if you don't know how it's supposed to behave, how do you even approach implementing them? [00:42:38] Elizabeth: That's what the conformance tests are for. I mentioned earlier we have this huge body of conformance tests that doubles as regression tests. If we see an API we don't know what to do with, or an API we think we know what to do with, because the documentation can just be wrong and often has been, then we write tests to figure out how it's supposed to behave. We pass some things in and see what comes out, see what the operating system does, until we figure out: oh, so this is what it's supposed to do, and these are the exact parameters. And then we implement it according to those tests. [00:43:24] Jeremy: Is there any distinction in approach for when you're trying to implement something that's at the user level versus the kernel level? [00:43:33] Elizabeth: No, not really.
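The probe-and-pin-down loop she describes can be sketched roughly like this. Everything here is an invented stand-in: `windows_path_join` is not a real Win32 API, and this is not actual Wine test code; it just shows the pattern of freezing observed behavior into assertions.

```python
# Hypothetical sketch of the conformance-test workflow: probe the real OS's
# behavior for a poorly documented call, then record the observations as
# assertions that both Windows and the reimplementation must pass.

def windows_path_join(base, rel):
    """Reimplementation written to match observed behavior, not the docs."""
    if rel.startswith("\\"):           # observed: an absolute second arg wins
        return rel
    if base.endswith("\\"):
        return base + rel
    return base + "\\" + rel

# Each case was (hypothetically) run against the real API first; the expected
# values record what the OS actually did, including surprising edge cases.
OBSERVED = [
    (("C:\\dir", "file.txt"),   "C:\\dir\\file.txt"),
    (("C:\\dir\\", "file.txt"), "C:\\dir\\file.txt"),
    (("C:\\dir", "\\abs.txt"),  "\\abs.txt"),   # docs might never mention this
]

for args, expected in OBSERVED:
    got = windows_path_join(*args)
    assert got == expected, f"{args}: {got!r} != {expected!r}"
print("all conformance cases pass")
```

Wine's real tests follow the same idea: the expected values come from running the test on actual Windows, not from documentation.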
And like I mentioned earlier, a kernel call is just like a library call. It's done in a slightly different way, but it's still got a set of parameters; they're just encoded differently. And the way kernel calls are done is on a level just above the kernel, where you have a library that just passes things through almost verbatim to the kernel, and we implement that library instead. [00:44:10] Jeremy: And you've been working on Wine for, I think, over six years now. [00:44:18] Elizabeth: That sounds about right. Debugging and having broad knowledge of Wine [00:44:20] Jeremy: What does your day-to-day look like? What parts of the project do you work on? [00:44:27] Elizabeth: It really varies from day to day. Some people will work on the same parts of Wine for years; some people will switch around and work on all sorts of different things. [00:44:42] Elizabeth: And I definitely belong to that second group. If you name an area of Wine, I have almost certainly contributed a patch or two to it. There are some areas I work on more than others, like 3D graphics, multimedia, a compiler we have, and sockets, so networking communication is another thing I work a lot on. Day to day, I kind of just get a bug for some program or another, and I take it and debug it and figure out why the program's broken, and then I fix it. And there's so much variety in that, because a bug can take so many different forms like I described, and the fix can be simple or complicated, and it can be really anywhere to a degree. [00:45:40] Elizabeth: Being able to work on any part of Wine is sometimes almost a necessity, because if a program is just broken, you don't know why. It could be anything. It could be any sort of API.
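A minimal toy model of that pass-through layer, with all names invented (this is not real Wine internals): the program calls the thin library function, and only the dispatch step decides whether a real syscall or Wine's user-space implementation answers.

```python
# On Windows, programs call a thin user-space library whose functions forward
# their arguments almost verbatim to the kernel. Wine reimplements that thin
# library, so the same call lands in Wine's own user-space code instead.

def nt_read_file(handle, length):
    """The thin 'pass-through' layer programs actually call."""
    return _dispatch("NtReadFile", handle, length)

# Real Windows: _dispatch would execute a syscall instruction.
# Wine: _dispatch is an ordinary function-table lookup in user space.
def _wine_nt_read_file(handle, length):
    return _open_files[handle][:length]

_open_files = {7: b"hello from a fake file"}
_dispatch_table = {"NtReadFile": _wine_nt_read_file}

def _dispatch(name, *args):
    return _dispatch_table[name](*args)

print(nt_read_file(7, 5))  # -> b'hello'; the caller can't tell who answered
```

The key property is that the program above the thin layer sees identical behavior either way, which is exactly what lets Wine swap in its own implementation.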
And sometimes you can hand the API to somebody who's got a lot of experience in that area, but sometimes you just fix whatever's broken, and you gain experience that way. [00:46:06] Jeremy: Yeah, I was going to ask about the specialized skills to work on Wine, but it sounds like maybe in your case it's all of them. [00:46:15] Elizabeth: There's a bit of that. The skills to work on Wine are a very unique set of skills, and it largely comes down to debugging, because you can't use the tools you'd normally use to debug. [00:46:30] Elizabeth: You have to be creative and think about it in different ways. Sometimes you have to be very creative. And programs will try their hardest to avoid being debugged, because they don't want anyone breaking their copy protection, for example, or hacking in cheats. They don't want anyone hacking them like that. [00:46:54] Elizabeth: And we have to do it anyway, for good and legitimate purposes, we would argue: to make them work better on more operating systems. And so we have to fight that every step of the way. [00:47:07] Jeremy: Yeah, it seems like it's a combination of, like you were saying, being able to debug, and you're debugging not necessarily your own code, but this behavior. [00:47:25] Jeremy: And then based on that behavior, you have to figure out: okay, where in all these different systems within Wine could this part be not working? [00:47:35] Jeremy: And I suppose you probably build up some kind of mental map in your head, so when you get a type of bug or a type of crash, you go: oh, maybe it's this, maybe it's here, or something. [00:47:47] Elizabeth: Yeah, there is a lot of that.
You notice some patterns after a while; experience helps. But because any bug could be new, sometimes experience doesn't help and you just kind of have to start from scratch. Finding a bug related to XAudio [00:48:08] Jeremy: At sort of a high level, can you give an example of where you got a specific bug report and then where you had to look to eventually find which parts of the system were the issue? [00:48:21] Elizabeth: One good example that I've done recently: I mentioned this XAudio library that does 3D audio. Say you come across a bug, I'm going to be a little bit generic here, where some audio isn't playing right; maybe there's silence where there should be audio. So you look in and see: well, where's that getting lost? You can look at the input calls and say, here's the buffer the program is submitting with all the audio data in it. And you look at where you think the output should be: that library will internally call a different library, which programs can also interact with directly. [00:49:03] Elizabeth: Our high-level library interacts with the one that says: give this sound to the audio driver. So you've got XAudio on top of mmdevapi, which is the other library that gives audio to the driver. And you see, well, the buffers that XAudio is passing into mmdevapi, they're empty; there's nothing in them. So you have to work through the XAudio library to see: where is that sound getting lost? Or maybe it's not getting lost; maybe it's coming through all garbled, and I've had to look at the buffer and see why it's garbled.
I'll open it up in Audacity and look at the shape of the wave and say: huh, that shape looks like we're putting in silence every 10 nanoseconds or something, or reversing something, or interpreting it wrong. Things like that. You'll do a lot of putting in printfs, basically all throughout Wine, to see where the state changes: where is it right, and where do things start going wrong? [00:50:14] Jeremy: Yeah. And in the audio example, because they're making a call to your XAudio implementation, you can see that the buffer, the audio that's coming in, that part is good. It's just that later on, when it sends it to what's actually going to have it played by the hardware, that's when it goes missing. [00:50:37] Elizabeth: We did something wrong in a library that destroyed the buffer. And I think, on a very high level, a lot of debugging Wine is about finding where things are good and finding where things are bad, and then narrowing that down until we find the one spot where things go wrong. There are a lot of processes that go like that. [00:50:57] Jeremy: And like you were saying, the more you see these problems, hopefully the easier it gets to narrow down where. [00:51:04] Elizabeth: Often, yeah. Especially if you keep debugging things in the same area. How much code is OS specific? [00:51:09] Jeremy: And Wine supports more than one operating system. I saw there was Linux, macOS, I think FreeBSD. How much of the code is operating system specific, versus how much can just be shared across all of them? [00:51:27] Elizabeth: Not that much is operating system specific, actually. When you think about the volume of Wine, the vast majority of it is high-level code that doesn't need to interact with the operating system on a low level. Right?
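The narrowing-down workflow she describes, find where things are good, find where things are bad, and close the gap, can be sketched as a checkpoint after each stage of a pipeline. The stages and the planted bug here are invented for illustration; they are not real Wine audio code.

```python
# Bisection-style debugging: instrument each stage of a toy audio pipeline
# and report the first stage whose output looks wrong (here, all silence).

def decode(buf):   return buf                 # fine
def resample(buf): return buf                 # fine
def mix(buf):      return [0] * len(buf)      # the planted bug: zeroes audio
def submit(buf):   return buf

STAGES = [decode, resample, mix, submit]

def first_bad_stage(buf):
    for stage in STAGES:
        buf = stage(buf)
        # the "printf" checkpoint: is the buffer still carrying signal?
        if buf and all(sample == 0 for sample in buf):
            return stage.__name__
    return None

print(first_bad_stage([3, -2, 5, 7]))  # -> mix
```

In practice the checkpoints are printfs or Wine's trace channels rather than a tidy loop, but the logic of locating the first good-to-bad transition is the same.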
Because Microsoft keeps putting lots and lots of different libraries in their operating system, and a lot of these are high-level libraries. And even when we do interact with the operating system, we're using cross-platform libraries, or we're using POSIX. All these operating systems that we run on basically conform to the POSIX standard, which is basically Unix; they're all Unix based, and POSIX is a Unix-based standard. Microsoft is, you know, the big exception that never did implement it. And so we have to translate its APIs to Unix APIs. Now, that said, there is a lot of very operating-system-specific code. Apple makes things difficult by diverging almost wherever they can, and so we have a lot of Apple-specific code in there. [00:52:46] Jeremy: Another example I can think of is, I believe macOS doesn't support Vulkan. [00:52:53] Elizabeth: Yes. Yeah, that's a great example of Mac not wanting to use generic libraries that work on every other operating system. And in some cases we look at it and are like: all right, we'll implement a wrapper for that too, on top of your operating system. We've done it for Windows; we can do it for Vulkan. And then you get the MoltenVK project. And to be clear, we didn't invent MoltenVK; it was around before us, but we have contributed a lot to it. Direct3D, Vulkan, and MoltenVK [00:53:28] Jeremy: Yeah, I think maybe at a high level it might be good to explain the relationship between Direct3D, or DirectX, and Vulkan. Maybe if you could go into that. [00:53:42] Elizabeth: So Direct3D is Microsoft's 3D API. 3D APIs are firstly a way to abstract out the differences between different graphics cards, which look very different on a hardware level. [00:54:03] Elizabeth: They used to look very different especially, and they still do. And secondly, they're a way to deal with the hardware at a high level, because actually talking to the graphics card at a low level is very, very complicated. Even talking to it at a high level is complicated, but it can get a lot worse, if you've ever done any graphics driver development. So you have a number of different APIs that achieve these two goals: building a common abstraction, and building a high-level abstraction. OpenGL was broadly the non-Microsoft world's choice back in the day. [00:54:53] Elizabeth: And then Direct3D was Microsoft's API, and both of these have evolved over time and come up with new versions and such. And when any API exists for too long, it gains a lot of cruft and needs to be replaced. Eventually the people who developed OpenGL decided: we need to start over, get rid of the cruft, make it cleaner, and make it lower level. [00:55:28] Elizabeth: Because to get maximum performance, games really want low-level access. And so they made Vulkan. Microsoft kind of did the same thing, but they still call it Direct3D; the newest version of Direct3D is lower level, and it's called Direct3D 12. And Mac looked at this and decided: we're going to do the same thing too, but we're not going to use Vulkan. [00:55:52] Elizabeth: We're going to define our own, and they call it Metal. So when we want to translate D3D12 into something that another operating system understands, that's probably Vulkan. And on Mac, we need to translate it to Metal somehow. And we decided that instead of having a separate layer from D3D12 to Metal, we're just going to translate it to Vulkan and then translate the Vulkan to Metal.
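The Windows-to-Unix translation idea can be sketched with a toy file-open shim. The Win32 constants below are the real documented values, but `create_file` itself is a simplified illustration, nothing like Wine's actual implementation, which handles far more flags, sharing modes, and error mapping.

```python
# Hedged sketch: map a (simplified) CreateFile-style call onto POSIX os.open().
import os
import tempfile

GENERIC_READ  = 0x80000000   # real Win32 access-mask values
GENERIC_WRITE = 0x40000000
CREATE_ALWAYS = 2            # real Win32 creation dispositions
OPEN_EXISTING = 3

def create_file(path, access, disposition):
    """Toy translation of Windows open semantics to POSIX flags."""
    if access & GENERIC_READ and access & GENERIC_WRITE:
        flags = os.O_RDWR
    elif access & GENERIC_WRITE:
        flags = os.O_WRONLY
    else:
        flags = os.O_RDONLY
    if disposition == CREATE_ALWAYS:
        flags |= os.O_CREAT | os.O_TRUNC   # create or truncate, like Windows
    return os.open(path, flags, 0o644)

# Demo: create, write, and read back a file through the translated call.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = create_file(path, GENERIC_READ | GENERIC_WRITE, CREATE_ALWAYS)
os.write(fd, b"translated")
os.close(fd)
print(open(path).read())  # -> translated
```

The hard part in real life is not the happy path shown here but the edge cases: share modes, error codes, and path semantics that POSIX expresses differently or not at all.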
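The layering she describes can be shown as simple function composition: D3D12 is translated to Vulkan once, and on macOS the Vulkan stream is translated again to Metal (MoltenVK's role). The call names are real API entry points, but the one-to-one mappings are a drastic simplification for illustration.

```python
# Toy sketch of the D3D12 -> Vulkan -> Metal translation chain. Real
# translation (vkd3d, MoltenVK) works on whole command streams and state,
# not a name-for-name lookup like this.

def d3d12_to_vulkan(cmd):
    return {"DrawInstanced": "vkCmdDraw",
            "Dispatch":      "vkCmdDispatch"}[cmd]

def vulkan_to_metal(cmd):   # roughly MoltenVK's job
    return {"vkCmdDraw":     "drawPrimitives",
            "vkCmdDispatch": "dispatchThreadgroups"}[cmd]

def run_on_macos(cmd):
    # No separate D3D12-to-Metal layer: reuse the D3D12-to-Vulkan
    # translation, then translate the Vulkan output to Metal.
    return vulkan_to_metal(d3d12_to_vulkan(cmd))

print(run_on_macos("DrawInstanced"))  # -> drawPrimitives
```

The design win is reuse: one D3D12-to-Vulkan layer serves Linux directly, and only the final Vulkan-to-Metal step is macOS specific.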
And it also lets things written for Vulkan on Windows, which is also a thing that exists, work on Metal. [00:56:30] Jeremy: And having to do that translation, does that have a performance impact, or is that not really felt? [00:56:38] Elizabeth: Yes, it's kind of like anything; when you talk about performance, like I mentioned earlier, there's always going to be overhead from translating from one API to another. But we put in heroic efforts to try to make sure that doesn't matter, to make sure that stuff that needs to be fast is really as fast as it can possibly be. [00:57:06] Elizabeth: And some very clever things have been done along those lines. And sometimes the graphics drivers underneath are so good that it actually does run better, even despite the translation overhead. And then sometimes, to make it run fast, we need to say: well, we're going to implement a new API that behaves more like Windows, so we can do less work translating. Sometimes that goes into the graphics library, and sometimes that goes into other places. Targeting Wine instead of porting applications [00:57:43] Jeremy: Yeah. Something I've found a little bit interesting about the last few years: [00:57:49] Jeremy: developers in the past would generally target Windows, and you might be lucky to get a Mac port or a Linux port. And I wonder, in your opinion, now that a lot of developers are just targeting Windows and relying on Wine or Proton to run their software, is there any downside to doing that? [00:58:17] Jeremy: Or is it all just upside, like everyone should target Windows as this common platform? [00:58:23] Elizabeth: Yeah, it's an interesting question. There's some people who seem to think it's a bad thing that we're not getting native ports in the same sense, and then there's some people who see it as a perfectly valid way to do ports: just write for this de facto common API. It was never intended as a cross-platform common API, but we've made it one. [00:58:47] Elizabeth: Right? And so why is that any worse than if it runs on a different API on Linux or Mac? I guess that argument tends to make sense to me. I don't personally see a lot of reason to say that one library is more pure than another. [00:59:12] Elizabeth: Now, I do think Windows APIs are generally pretty bad. This might just be an effect of having to work with them for a very long time and see all their flaws and deal with the nonsense that they do. But I think a lot of the native Linux APIs are better. And if you like your Windows API better, and if you want to target Windows and that's the only way to do it, then sure, why not? What's wrong with that? [00:59:51] Jeremy: Yeah. And doing it this way, targeting Windows: if you look at the past, even though you had some software that would be ported to other operating systems, without this compatibility layer, without people just targeting Windows, all this software that people can now run on these portable gaming handhelds or on Linux, most of that software was never going to be ported. So yeah, absolutely. And... [01:00:21] Elizabeth: That's... [01:00:22] Jeremy: having that as an option. Yeah. [01:00:24] Elizabeth: That's kind of why Wine existed: because people wanted to run their software that was never going to be ported. And then the community just spent a lot of effort in making all these individual programs run. Yeah.
[01:00:39] Jeremy: I think it's pretty amazing too that now that's become this official way, I suppose, of distributing your software, where you say: hey, I made a Windows version, but you're on your Linux machine, and it's officially supported, because we have this much belief in this compatibility layer. [01:01:02] Elizabeth: It's kind of incredible to see Wine having got this far. I mean, I started working on it six, seven years ago, and even then I could never have imagined it would be like this. [01:01:16] Jeremy: So as we wrap up, for the developers that are listening, or people who are just users of Wine, is there anything you think they should know about the project that we haven't talked about? [01:01:31] Elizabeth: I don't think there's anything I can think of. [01:01:34] Jeremy: And if people want to learn more about the Wine project, or see what you're up to, where should they head? Getting support and contributing [01:01:45] Elizabeth: We don't really have anything like news, unfortunately. Read the release notes, and there are some people from CodeWeavers who do blogs. So if you go to codeweavers.com/blog, there's some CodeWeavers stuff, some marketing stuff, but there are also some developers who will talk about bugs they're solving, and how it's easy, and the experience of working on Wine. [01:02:18] Jeremy: And I suppose if someone's interested, like let's say they have a piece of software that's not working through Wine, what's the best place for them to either get help or maybe even get involved with trying to fix it? [01:02:37] Elizabeth: Yeah. So you can file a bug on winehq.org, or, you know, there's a lot of developer resources there, and you can get involved with contributing to the software.
And there are links to our mailing list and IRC channels, and the GitLab; those are all places you can find developers. [01:03:02] Elizabeth: We love to help you debug things. We love to help you fix things. We try our very best to be a welcoming community, and we've got a lot of experience working with people who want to get their application working. So we'd love to have another. [01:03:24] Jeremy: Very cool. Yeah, I think Wine is a really interesting project, because for, I guess it would've been decades, it seemed very niche; not many people [01:03:37] Jeremy: were aware of it. And now, I think maybe in particular because of the Linux gaming handhelds, like the Steam Deck, Wine is now something that a bunch of people who would've never heard about it before are aware of. [01:03:53] Elizabeth: Absolutely. I've watched that transformation happen in real time, and it's been surreal. [01:04:00] Jeremy: Very cool. Well, Elizabeth, thank you so much for joining me today. [01:04:05] Elizabeth: Thank you, Jeremy. I've been glad to be here.
NVIDIA is doubling down on AI dominance with massive investments across cloud, chips, and infrastructure. It struck a $6.3B deal with CoreWeave to secure long-term GPU demand, is investing $5B in Intel to co-develop custom CPUs and PC chips that pair Intel processors with NVIDIA GPUs, and is committing up to $100B with OpenAI to build data centers requiring 10 gigawatts of power. These moves lock in demand, expand NVIDIA's role across computing ecosystems, and cement its leadership in the race to scale global AI infrastructure. This and more on the Tech Field Day News Rundown with Alastair Cooke and guest host Scott Robohn. Time Stamps: 0:00 - Cold Open 0:36 - Welcome to the Tech Field Day News Rundown 1:22 - Hugging Face Brings Open-Source Models to GitHub Copilot Chat 3:52 - Pulumi Introduces AI Agents to Automate Infrastructure Management 6:51 - Cisco DevNet is now Cisco Automation 9:12 - North Dakota to Test Portable Micro Data Centers for AI in Oil Fields 12:14 - Sumo Logic Launches AI Agents to Streamline Cybersecurity Operations 14:46 - Justice Department Moves to Break Up Google's Ad Business 17:43 - NVIDIA's Multi-Billion-Dollar Moves Expand AI and Computing Leadership 21:35 - The Weeks Ahead 22:58 - Thanks for Watching the Tech Field Day News Rundown. Guest Host: Scott Robohn, CEO of Solutional. Follow our hosts Tom Hollingsworth, Alastair Cooke, and Stephen Foskett. Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon.
The alliance between NVIDIA and Intel aims to develop artificial intelligence infrastructure and new PCs with natively integrated AI acceleration, combining RTX GPUs with x86 CPUs to transform the future of enterprise and personal computing.
The big bombshell of the week: Nvidia is investing five billion US dollars in Intel! Alongside it, a far-reaching partnership was announced, covering servers and data centers on one side and consumer products on the other. In the future, Intel will be able to use Nvidia chiplets (tiles) in its CPUs instead of its own based on the Xe architecture. What might that mean for Intel's GPU division? We can only wait and speculate; no concrete products from this partnership are expected before 2027. The long-awaited Xbox Full Screen Experience for Windows handhelds is here! Well, almost. Through the Windows Insider program you can already get access to the big 25H2 fall update of Windows 11 and, with selected handhelds, including the original Asus ROG Ally and the MSI Claw AI 8, get a preview of what's coming with the Xbox Ally. We also have a preview of "The Lift" from developer Fantastic Signals and publisher tinyBuild. The short version: House Flipper meets Soviet science fiction with influences from Stranger Things and the SCP Foundation. "The Lift" takes place in an institute that, after an unspecified incident, sits abandoned and run-down in a kind of in-between dimension. Our job as "Keeper" is to restore order here, not with an arsenal of weapons, but with a screwdriver. Is Ahti our colleague? Hmmmm. Enjoy episode 274!
Speakers: Michael Kister, Mohammed Ali Dad. Audio production: Michael Kister. Video production: Mohammed Ali Dad, Michael Kister. Cover image: Mohammed Ali Dad. Image sources: Wikipedia/Nvidia/Intel. Recording date: 19.09.2025. Visit us on Discord https://discord.gg/SneNarVCBM, on Bluesky https://bsky.app/profile/technikquatsch.de, on TikTok https://www.tiktok.com/@technikquatsch, on YouTube https://www.youtube.com/@technikquatsch, Technikquatsch Gaming https://www.youtube.com/@TechnikquatschGaming, on Instagram https://www.instagram.com/technikquatsch, on Twitch https://www.twitch.tv/technikquatsch. RSS feed https://technikquatsch.de/feed/podcast/ Spotify https://open.spotify.com/show/62ZVb7ZvmdtXqqNmnZLF5u Apple Podcasts https://podcasts.apple.com/de/podcast/technikquatsch/id1510030975 00:00:00 New YouTube channel Technikquatsch Gaming https://www.youtube.com/@TechnikquatschGaming 00:08:20 Nvidia takes a $5 billion stake in Intel https://www.theverge.com/report/781330/nvidia-intel-explain-5-billion-deal-jensen-huang-lip-bu-tan-amd https://www.reuters.com/world/asia-pacific/nvidia-bets-big-intel-with-5-billion-stake-chip-partnership-2025-09-18/ https://www.computerbase.de/news/wirtschaft/intel-mit-nvlink-und-rtx-gpus-nvidia-investiert-5-mrd-us-dollar-in-intel-fuer-pcs-ai-und-mehr.94379/ Digital Foundry: AMD's Most Powerful APU Yet - Strix Halo/Ryzen AI Max+ 395 - GMKTec Evo-X2 Review https://www.youtube.com/watch?v=vMGX35mzsWg 00:28:56 Exciting CPUs in the coming years from Intel with Panther Lake and Nova Lake and from AMD with Zen 6; memories of the computer shops of the old days 00:38:18 Dürkheimer Wurstmarkt: warning app Katwarn calls on users to sing the Pfalz anthem. https://www.heise.de/news/Inakzeptabel-Ueber-Warn-App-Katwarn-zum-Singen-des-Pfalzlieds-aufgerufen-10662190.html 00:42:32 Steam: support for 32-bit operating systems ends January 1, 2026; official support for Windows 10 ends October 14, 2025 https://videocardz.com/newz/steam-ends-32-bit-operating-system-support-in-2026 Linux Mint 22.2 https://linuxmint.com/; Nobara Linux https://nobaraproject.org/, CachyOS https://cachyos.org/ 00:49:13 Brief impressions of iOS 26 https://www.computerbase.de/news/betriebssysteme/fuer-iphone-ipad-mac-und-watch-ios-26-ipados-26-macos-26-und-watchos-26-erschienen.94322/ 00:51:37 Preview of the new Windows gaming fullscreen experience possible on handheld PCs like the Asus ROG Ally with Windows 11 25H2. The Phawx: Wish It Was Better - LE...
This week on the podcast we go over our review of the KLEVV CRAS V RGB DDR5-6000 32GB Memory Kit. We also discuss some interesting new CPU releases including the Ryzen 5 5600F and Core i5-110, Nintendo bringing back the Virtual Boy, new PC cases, and much more!
Ruby core team member Aaron Patterson (tenderlove) takes us deep into the cutting edge of Ruby's performance frontier in this technical exploration of how one of the world's most beloved programming languages continues to evolve. At Shopify, Aaron works on two transformative projects: ZJIT, a method-based JIT compiler that builds on YJIT's success by optimizing register allocation to reduce memory spills, and enhanced Ractor support to enable true CPU parallelism in Ruby applications. He explains the fundamental differences between these approaches: ZJIT makes single-CPU utilization more efficient, while Ractors allow Ruby code to run across multiple CPUs simultaneously. The conversation reveals how real business needs drive language development. Shopify's production workloads unpredictably alternate between CPU-bound and IO-bound tasks, creating resource utilization challenges. Aaron's team aims to build auto-scaling web server infrastructure using Ractors that can dynamically adjust to workload characteristics, potentially revolutionizing how Ruby applications handle variable traffic patterns. For developers interested in contributing to Rails, Aaron offers practical advice: start reading the source code, understand the architecture, and look for ways to improve it. He shares insights on the challenges of making Rails Ractor-safe, particularly around passing lambdas between Ractors while maintaining memory safety. The episode concludes with a delightful tangent into Aaron's latest hardware project: building a color temperature sensor for camera calibration that combines his photography hobby with his programming expertise. True to form, even his leisure activities inevitably transform into coding projects. Whether you're a seasoned Ruby developer or simply curious about language design and performance optimization, Aaron's unique blend of deep technical knowledge and playful enthusiasm makes this an engaging journey through Ruby's exciting future. Send us some love.
Honeybadger: Honeybadger is an application health monitoring tool built by developers, for developers. Judoscale: Autoscaling that actually works. Take control of your cloud hosting. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. Support the show
Today we have Dr. Ewelina Kurtys on the show. Ewelina has a background in neuroscience and is currently working at FinalSpark. FinalSpark is using live neurons for computation instead of traditional electronic CPUs. The advantage is that live neurons are significantly more energy efficient than traditional computing, and given all the energy concerns right now with regard to running AI workloads and data centers, this seems quite relevant, even though bioprocessors are still very much in the research phase.
Condor Technology, through its subsidiary Condor Computing, introduced the Cuzco RISC-V CPU for datacenter applications. The Cuzco design supports up to eight cores with private L2 and shared L3 cache, features a 12-stage pipeline, and uses a time-based instruction scheduling system to reduce power consumption. Andes Technology, a founding member of RISC-V International, reported $42 million in 2024 sales and has shipped IP for over 17 billion RISC-V chips since 2005. Nearly 40 percent of Andes' 2024 revenue came from AI sector deployments. Major technology companies and the European Union are investing in RISC-V, with the first Cuzco processors expected to reach users by year-end. Learn more on this news by visiting us at: https://greyjournal.net/news/ Hosted on Acast. See acast.com/privacy for more information.
Because… it's episode 0x629! Shameless plug 12 to 17 October 2025 - Objective by the sea v8 14 and 15 October 2025 - ATT&CKcon 6.0 14 and 15 October 2025 - Forum inCyber Canada 30% discount code - CA25KDUX92 4 and 5 November 2025 - FAIRCON 2025 10 to 12 November 2025 - IAQ - Le Rendez-vous IA Québec 17 to 20 November 2025 - European Cyber Week 25 and 26 February 2026 - SéQCure 2026 Description Notes Apple Memory Integrity Enforcement: A complete vision for memory safety in Apple devices iCloud Calendar abused to send phishing emails from Apple's servers Dormant macOS Backdoor ChillyHell Resurfaces Microsoft Microsoft Patch Tuesday September 2025 Fixes Risky Kernel Flaws Senator blasts Microsoft for making default Windows vulnerable to "Kerberoasting" Senator blasts Microsoft for 'dangerous, insecure software' that helped pwn US hospitals Microsoft adds malicious link warnings to Teams private chats Microsoft cloud services disrupted by Red Sea cable cuts Microsoft is officially sending employees back to the office.
Read the memo Supply chain Hackers Booked Very Little Profit with Widespread npm Supply Chain Attack Hackers Hijacked 18 Very Popular npm Packages With 2 Billion Weekly Downloads Defensive The Quiet Revolution in Kubernetes Security TailGuard - the Docker solution that marries WireGuard and Tailscale for supercharged VPN Geedge & MESA Leak: Analyzing the Great Firewall's Largest Document Leak Forget disappearing messages – now Signal will store 100MB of them for you for free Introducing Signal Secure Backups We have early access to Android Security Bulletin patches MISP 2.5.21 Released with a new recorrelate feature, various fixes and updates Threat Actor Installed EDR on Their Systems, Revealing Workflows and Tools Used Offensive Jaguar Land Rover discloses a data breach after recent cyberattack Jaguar Land Rover extends shutdown after cyber attack Salty2FA Takes Phishing Kits to Enterprise Level Police Body Camera Apps Sending Data to Cloud Servers Hosted in China Via TLS Port 9091 Weaponizing Ads: How Governments Use Google Ads and Facebook Ads to Wage Propaganda Wars Spectre haunts CPUs again: VMSCAPE vulnerability leaks cloud secrets VirusTotal finds hidden malware phishing campaign in SVG files AI CVE-2025-58444 - MCP Inspector is Vulnerable to Potential Command Execution via XSS When Connecting to an Untrusted MCP Server Cursor AI Code Editor RCE Vulnerability Enables "autorun" of Malicious on your Machine The Software Engineers Paid to Fix Vibe Coded Messes TheAuditor - the security tool that makes your AI assistants less lax about your code's security Unusual / Miscellaneous Brussels faces privacy crossroads over encryption backdoors My Latest Book: Rewiring Democracy A love letter to Internet Relay Chat Contributors Nicolas-Loïc Fortin Credits Editing by Intrasecure inc Studio space by Intrasecure inc
Episode 81: We're back! Lots to discuss in this video, including YouTube weirdness, the future of AMD and Intel's CPU platforms, the good old CPU core debate, upcoming GPU rumors and more.
CHAPTERS
00:00 - Intro
03:13 - Our YouTube views are down, this is what the stats say
31:14 - Zen 7 on AM5 and Intel's competing platform
54:13 - How important is platform longevity?
1:07:58 - Six core CPUs are still powerful for gaming
1:17:27 - Will Intel make an Arc B770?
1:26:22 - No RTX Super any time soon
1:29:14 - Updates from our boring lives
SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw
SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed
LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social
Hosted on Acast. See acast.com/privacy for more information.
Cloud computing is transforming biotech by offering purpose-built infrastructure that supports AI-driven drug discovery and development while meeting strict regulatory requirements. Dr. Ilya Burkov explains how Nebius provides full-stack solutions that democratize access to powerful technology, enabling researchers to achieve breakthroughs that previously required generations.
• The cloud computing market was built for general-purpose workloads, but biotech needs specialized infrastructure for sensitive data and AI models
• GPUs enable parallel processing that accelerates AI applications—like "a whole classroom solving math problems at once" versus CPUs solving one at a time
• Applications include drug discovery, genomics, protein structure modeling, quantum chemistry, and single-cell modeling for cancer treatment
• Nebius provides full-stack solutions with hardware and software layers, working with NVIDIA to offer specialized packages
• Democratizing access to AI infrastructure is leveling the playing field between small biotechs and large pharmaceutical companies
• Scientists can now accomplish in their lifetime what previously would have taken multiple generations of researchers
• Breaking down silos between data teams and institutions is crucial for accelerating healthcare innovation
Support the show
Reach out to Ivanna Rosendal
Join the conversation on our LinkedIn page
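The "classroom" analogy above can be made concrete in a few lines of NumPy, used here as an illustrative stand-in for GPU-style data parallelism. The toy workload and all names below are ours, not anything from the episode:

```python
import numpy as np

# The "classroom" analogy from the show notes, sketched with NumPy.
# Vectorized array operations stand in for GPU-style data parallelism;
# the toy workload (x*x + 1) is purely illustrative.

def solve_one_at_a_time(values):
    """CPU-style: one 'student' works through every problem in turn."""
    return [v * v + 1.0 for v in values]

def solve_all_at_once(values):
    """GPU-style: the whole 'classroom' applies the same operation to
    every element of the array at once (vectorized execution)."""
    arr = np.asarray(values, dtype=np.float64)
    return arr * arr + 1.0

problems = list(range(1000))
serial = solve_one_at_a_time(problems)
parallel = solve_all_at_once(problems)

# Both execution models compute the same answers; only the scheduling differs.
assert np.allclose(serial, parallel)
```

The point is not that NumPy is a GPU, but that the same arithmetic can be expressed as one operation over many data elements, which is exactly the shape of work GPUs accelerate.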
This week we talk about General Motors, the Great Recession, and semiconductors. We also discuss Goldman Sachs, US Steel, and nationalization.
Recommended Book: Abundance by Ezra Klein and Derek Thompson
Transcript
Nationalization refers to the process through which a government takes control of a business or business asset. Sometimes this is the result of a new administration or regime taking control of a government, which decides to change how things work, so it gobbles up things like oil companies or railroads or manufacturing hubs, because that stuff is considered to be fundamental enough that it cannot be left to the whims, and the ebbs and eddies and unpredictable variables of a free market; the nation needs reliable oil, it needs to be churning out nails and screws and bullets, so the government grabs the means of producing these things to ensure nothing stops that kind of output or operation. That more holistic reworking of a nation's economy so that it reflects some kind of socialist setup is typically referred to as socialization, though commentary on the matter will still often refer to the individual instances of the government taking ownership over something that was previously private as nationalization. In other cases these sorts of assets are nationalized in order to right some kind of perceived wrong, as was the case when the French government, in the wake of WWII, nationalized the automobile company Renault for its alleged collaboration with the Nazis when they occupied France. The circumstances of that nationalization were questioned, as there was a lot of political scuffling between capitalist and communist interests in the country at that time, and some saw this as a means of getting back at the company's owner, Louis Renault, for his recent, violent actions against workers who had gone on strike before France's occupation—but whatever the details, France scooped up Renault and turned it into a state-owned company, and in 1994, the government
decided that its ownership of the company was keeping its products from competing on the market, and in 1996 it was privatized and they started selling public shares, though the French government still owns about 15% of the company. Nationalization is more common in some non-socialist nations than others, as there are generally considered to be significant pros and cons associated with such ownership. The major benefit of such ownership is that a government-owned, or partially government-owned entity will tend to have the government on its side to a greater or lesser degree, which can make it more competitive internationally, in the sense that laws will be passed to help it flourish and grow, and it may even benefit from direct infusions of money, when needed, especially when international competition heats up, and because it generally allows that company to operate as a piece of government infrastructure, rather than just a normal business. Instead of being completely prone to the winds of economic fortune, then, the US government can ensure that Amtrak, a primarily state-owned train company that's structured as a for-profit business, but which has a government-appointed board and benefits from federal funding, is able to keep functioning, even when demand for train services is low, and barbarians at the gate, like plane-based cargo shipping and passenger hauling, become a lot more competitive, maybe even to the point that a non-government-owned entity would have long since gone under, or dramatically reduced its service area, by economic necessity. A major downside often cited by free-market people, though, is that these sorts of companies tend to do poorly, in terms of providing the best possible service, and in terms of making enough money to pay for themselves—services like Amtrak are structured to cover as much of their own expenses as possible, for instance, but are seldom able to do so, requiring injections of resources from the government to
stay afloat, and as a result, they have trouble updating and even maintaining their infrastructure. Private companies tend to be a lot more agile and competitive because they have to be, and because they often have leadership that is less political in nature, and more oriented around doing better than their also-private competition, rather than merely surviving. What I'd like to talk about today is another industry that seems to have become so vital, like trains, that the US government is keen to ensure it doesn't go under, and a stake that the US government took in one of its most historically significant, but recently struggling companies.
—
The Emergency Economic Stabilization Act of 2008 was a law passed by the US government after the initial whammy of the Great Recession, which created a bunch of bailouts for mostly financial institutions that, if they went under, it was suspected, would have caused even more damage to the US economy. These banks had been playing fast and loose with toxic assets for a while, filling their pockets with money, but doing so in a precarious and unsustainable manner. As a result, when it became clear these assets were terrible, the dominos started falling, all these institutions started going under, and the government realized that they would either lose a significant portion of their banks and other financial institutions, or they'd have to bail them out—give them money, basically. Which wasn't a popular solution, as it looked a lot like rewarding bad behavior, and making some businesses, private businesses, too big to fail, because the country's economy relied on them to some degree. But that's the decision the government made, and some of these institutions, like Goldman Sachs, had their toxic assets bought by the government, removing these things from their balance sheets so they could keep operating as normal.
Others declared bankruptcy and were placed under government control, including Fannie Mae and Freddie Mac, which were previously government supported, but not government run. The American International Group, the fifth largest insurer in the world at that point, was bought by the US government—it took 92% of the company in exchange for $141.8 billion in assistance, to help it stay afloat—and General Motors, not a financial institution, but a car company that was deemed vital to the continued existence of the US auto market, went bankrupt, the fourth largest bankruptcy in US history. The government allowed its assets to be bought by a new company, also called GM, which would then function as normal, which allowed the company to keep operating, employees to keep being paid, and so on, but as part of that process, the company was given a total of $51 billion by the government, which took a majority stake in the new company in exchange. In late 2013, the US government sold its final shares of GM stock, having lost about $10.7 billion over the course of that ownership, though it's estimated that about 1.5 million jobs were saved as a result of keeping GM and Chrysler, which went through a similar process, afloat, rather than letting them go under, as some people would have preferred. In mid-August of this year, the US government took another stake in a big, historically significant company, though this time the company in question wasn't going through a recession-sparked bankruptcy—it was just falling way behind its competition, and was looking less and less likely to ever catch up. Intel was founded in 1968, and it designs, produces, and sells all sorts of semiconductor products, like the microprocessors—the computer chips—that power all sorts of things, these days. Intel created the world's first commercial microprocessor back in 1971, and in the 1990s, its products were in basically every computer that hit the market, its range and dominance expanding with the range and
dominance of Microsoft's Windows operating system, achieving a market share of about 90% in the mid- to late-1990s. Beginning in the early 2000s, though, other competitors, like AMD, began to chip away at Intel's dominance, and though it still boasts a CPU market share of around 67% as of Q2 of 2025, it has fallen way behind competitors like Nvidia in the graphics card market, and behind Samsung in the larger semiconductor market. And that's a problem for Intel, as while CPUs are still important, the overall computing-things, high-tech gadget space has been shifting toward stuff that Intel doesn't make, or doesn't do well. Smaller things, graphics-intensive things. Basically all the hardware that's powered the gaming, crypto, and AI markets, alongside the stuff crammed into increasingly small personal devices, are things that Intel just isn't very good at, and doesn't seem to have a solid means of getting better at, so it's a sort of aging giant in the computer world—still big and impressive, but with an outlook that keeps getting worse and worse, with each new generation of hardware, and each new innovation that seems to require stuff it doesn't produce, or doesn't produce good versions of. This is why, despite being a very unusual move, the US government's decision to buy a 10% stake in Intel for $8.9 billion didn't come as a total surprise. The CEO of Intel had been raising the possibility of some kind of bailout, positioning Intel as a vital US asset, similar to all those banks and to GM—if it went under, it would mean the US losing a vital piece of the global semiconductor pie. The government already gave Intel $2.2 billion as part of the CHIPS and Science Act, which was signed into law under the Biden administration, and which was meant to shore up US competitiveness in that space, but that was a freebie—this new injection of resources wasn't free. Response to this move has been mixed.
Some analysts think President Trump's penchant for netting the government shares in companies it does stuff for—as was the case with US Steel, which gave the US government a so-called ‘golden share' in exchange for being allowed to merge with Japan-based Nippon Steel, that share granting a small degree of governance authority within the company—is a smart sort of quid pro quo, as in some cases it may result in profits for a government that's increasingly underwater in terms of debt, and in others it gives some authority over future decisions, giving the government more levers to use, beyond legal ones, in steering these vital companies the way it wants to steer them. Others are concerned about this turn of events, though, as it seems, theoretically at least, anti-competitive. After all, if the US government profits when Intel does well, now that it owns a huge chunk of the company, doesn't that incentivize the government to pass laws that favor Intel over its competitors?
And even if the government doesn't do anything like that overtly, doesn't that create a sort of chilling effect on the market, making it less likely serious competitors will even emerge, because investors might be too spooked to invest in something that would be going up against a partially government-owned entity? There are still questions about the legality of this move, as it may be that the CHIPS Act doesn't allow the US government to convert grants into equity, and it may be that shareholders will find other ways to rebel against the seeming high-pressure tactics from the White House, which included threats by Trump to force the firing of its CEO, in part by withholding some of the company's federal grants, if he didn't agree to giving the government a portion of the company in exchange for assistance. This also raises the prospect that Intel, like those other bailed-out companies, has become de facto too big to fail, which could lead to stagnation in the company, especially if the White House goes further in putting its thumb on the scale, forcing more companies, in the US and elsewhere, to do business with the company, despite its often uncompetitive offerings. While there's a chance that Intel takes this influx of resources and support and runs with it, catching up to competitors that have left it in the dust and rebuilding itself into something a lot more internationally competitive, then, there's also the chance that it continues to flail, but for much longer than it would have, otherwise, because of that artificial support and government backing.
Show Notes
https://www.reuters.com/legal/legalindustry/did-trump-save-intel-not-really-2025-08-23/
https://www.nytimes.com/2025/08/23/business/trump-intel-us-steel-nvidia.html
https://arstechnica.com/tech-policy/2025/08/intel-agrees-to-sell-the-us-a-10-stake-trump-says-hyping-great-deal/
https://en.wikipedia.org/wiki/General_Motors_Chapter_11_reorganization
https://www.investopedia.com/articles/economics/08/government-financial-bailout.asp
https://www.tomshardware.com/pc-components/cpus/amds-desktop-pc-market-share-hits-a-new-high-as-server-gains-slow-down-intel-now-only-outsells-amd-2-1-down-from-9-1-a-few-years-ago
https://www.spglobal.com/commodity-insights/en/news-research/latest-news/metals/062625-in-rare-deal-for-us-government-owns-a-piece-of-us-steel
https://en.wikipedia.org/wiki/Renault
https://en.wikipedia.org/wiki/State-owned_enterprises_of_the_United_States
https://247wallst.com/special-report/2021/04/07/businesses-run-by-the-us-government/
https://en.wikipedia.org/wiki/Nationalization
https://www.amtrak.com/stakeholder-faqs
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
In this cutting-edge episode, we explore how Edge AI is transforming drug discovery and revolutionising laboratory workflows, real-time molecular analysis, and protein folding predictions—all at the source of data collection. Joining us is Nuri Cankaya, Vice President of Commercial Marketing at Intel Corporation, and a renowned thought leader in AI and healthcare innovation. You'll discover how AI at the edge—enabled by on-device NPUs, GPUs, and CPUs—is unlocking privacy-preserving, high-performance computing in the most sensitive environments, such as clinical labs and pharmaceutical R&D centers. Nuri shares his deep experience in AI, discusses hardware configurations for edge deployments, and provides real-world examples of AI accelerating high-throughput screening, compound discovery, and target validation.
Key Topics:
- What is Edge AI and how it differs from cloud-based AI
- How real-time AI in the lab enables faster, cheaper drug discovery
- Hardware requirements: NPU, GPU, CPU integration for edge computing
- The role of AlphaFold and protein folding prediction in therapeutic development
- Use cases in molecular screening, genomics, and clinical trial simulations
- How Edge AI preserves data privacy and complies with GDPR and HIPAA
- Predictions for AGI (Artificial General Intelligence) and Quantum Computing in healthcare
- Strategic advice for pharma leaders and biotech innovators looking to pilot AI
- The energy efficiency and sustainability gains from Edge AI vs. cloud AI
About the Podcast
AI for Pharma Growth is a podcast focused on exploring how artificial intelligence can revolutionise healthcare by addressing disparities and creating equitable systems. Join us as we unpack groundbreaking technologies, real-world applications, and expert insights to inspire a healthier, more equitable future. This show brings together leading experts and changemakers to demystify AI and show how it's being used to transform healthcare.
Whether you're in the medical field, technology sector, or just curious about AI's role in social good, this podcast offers valuable insights. AI For Pharma Growth is the podcast from pioneering pharma artificial intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how AI-based technologies can save them time and grow their brands and business. The show blends deep experience in the sector with demystifying AI for all pharma people, from start-up biotechs right through to Big Pharma. In this podcast, Dr Andree teaches tried-and-true secrets to building a pharma company using AI, applicable to anyone, at any budget. As the author of many peer-reviewed journal articles, and having addressed over 500 industry conferences across the globe, Dr Andree Bates uses her obsession with all things AI and futuretech to help you navigate the sometimes confusing but magical world of AI-powered tools for growing pharma businesses. The podcast features many experts who have developed powerful AI-powered tools that are the secret behind time-saving and revenue-generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights and so much more.
Dr. Andree Bates LinkedIn | Facebook | Twitter
The Great AI Paradox: AI-Monitored Farming
1. The Water Crisis: An Unintended Consequence, Not a Design (or is it?)
The water consumption of AI data centers is a legitimate and pressing concern, but it's a byproduct of a technology developed to process information and solve complex problems. The massive water demand is a result of:
Physical and Chemical Laws: To run powerful processors (CPUs, GPUs), you must dissipate heat. Water is an incredibly efficient medium for this. There's no way around the laws of thermodynamics (or is there?).
Economic Incentives: Data centers are often built in places with cheap land and power. These places are not always water-rich. The companies that build them are driven by business goals, not by a global population control agenda. Their failure to consider long-term environmental consequences is a significant problem, but it's one of short-sightedness and profit motive, not a sinister plan (or is it?).
Rapid Technological Advancement: The rapid and unexpected rise of generative AI caught many by surprise. The infrastructure to support it, including its massive water and energy needs, is still catching up. Companies are now scrambling to find sustainable solutions, such as using alternative water sources, but this is a reactive measure, not a planned part of the technology's initial design.
2. The Conflict with Traditional Agriculture: A Question of Transition and Economics
The potential for AI to displace hands-on farmers is a real concern, but it is a classic example of technological unemployment—a recurring theme throughout history, from the Industrial Revolution to the digital age. It is not an AI-specific plot to reduce the population. The conflict arises from:
Economic Efficiency: AI-assisted farming promises higher yields with less labor and water. From a purely economic standpoint, this is a desirable outcome. However, it fails to account for the social fabric of rural communities, where farming is not just a job but a way of life.
Inequality of Access: The high cost of AI technology in agriculture creates a divide between large, corporate farms that can afford it and small, family-owned farms that cannot. This can push small farmers out of business, leading to increased consolidation of agricultural land and control. This is a problem of market forces and access to capital, not a conspiracy.
#ScienceFiction, #AI, #Dystopian, #Future, #Mnemonic, #FictionalNarrative, #ReasoningModels, #Humanity, #War, #Genocide, #Technology, #ShortStory
Creative Solutions for Holistic Healthcare
https://www.buzzsprout.com/2222759/episodes/17708819
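The thermodynamic point above is easy to make concrete with a back-of-the-envelope calculation. The sketch below assumes ideal evaporative cooling and an arbitrary 1 MWh heat load; the latent-heat figure is a standard textbook constant, and none of the numbers come from the episode:

```python
# Back-of-the-envelope sketch of why compute load turns into water demand:
# every joule a processor draws ends up as heat, and evaporative cooling
# rejects heat by boiling water off. Assumed constants below are textbook
# values; the 1 MWh load is an arbitrary illustrative figure.

LATENT_HEAT_OF_VAPORIZATION = 2.26e6  # joules per kg of water evaporated (approx.)

def cooling_water_kg(heat_joules):
    """Kilograms of water evaporated to reject `heat_joules` of heat,
    assuming ideal evaporative cooling with no other heat paths."""
    return heat_joules / LATENT_HEAT_OF_VAPORIZATION

ONE_MWH_IN_JOULES = 3.6e9

# Roughly 1,600 kg (about 1,600 litres) of water per MWh of heat rejected.
print(f"{cooling_water_kg(ONE_MWH_IN_JOULES):.0f} kg of water per MWh")
```

Real facilities also lose water upstream (power generation) and don't evaporate all of their cooling water, so published water-use figures vary widely; the sketch only shows why the demand is physically unavoidable at scale.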
In this Electropages podcast, host Robin Mitchell is joined by Ali Ors, Global Director of AI and ML Strategy and Technologies for Edge Processing at NXP Semiconductors, to discuss the technical and design challenges of running advanced processing at the network edge. They examine how to implement demanding workloads on microcontrollers and embedded processors with tight constraints on power, memory, and compute, and how techniques such as model optimisation and quantisation are enabling practical deployment in automotive, industrial, and IoT systems. The discussion also covers the growing importance of natural, conversational interfaces, the balance between versatile CPUs and dedicated accelerators, and how hardware flexibility supports long-term product viability. Ali shares insights into securing devices in the field, protecting against evolving threats, and ensuring reliable operation over extended lifecycles. This episode offers an in-depth look at the engineering strategies and hardware considerations shaping the future of intelligent edge devices.
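As one concrete illustration of the quantisation technique mentioned above, here is a generic post-training affine int8 quantisation sketch in Python/NumPy. It shows the general idea of shrinking float32 weights to fit a microcontroller's memory budget; it is not NXP's toolchain or any specific product flow, and all names are our own:

```python
import numpy as np

# Generic affine int8 quantisation sketch: map float32 weights onto int8
# (4x smaller) plus a scale and zero-point, as done in post-training
# quantisation for memory-constrained edge devices.

def quantize_int8(weights):
    """Affine-quantise a float array to int8; returns (q, scale, zero_point)."""
    w = np.asarray(weights, dtype=np.float32)
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant arrays
    zero_point = round(-lo / scale) - 128     # shift range into [-128, 127]
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.array([-0.9, -0.1, 0.0, 0.4, 1.2], dtype=np.float32)
q, s, z = quantize_int8(w)
restored = dequantize(q, s, z)

# Storage drops 4x; the rounding error per weight is bounded by ~one scale step.
assert np.max(np.abs(restored - w)) <= s
```

Production flows add per-channel scales, calibration data for activations, and quantisation-aware training, but the core scale/zero-point mapping is the same.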
Host: Sebastian Hassinger
Guest: Andrew Dzurak (CEO, Diraq)
In this enlightening episode, Sebastian Hassinger interviews Professor Andrew Dzurak. Andrew is the CEO and co-founder of Diraq and concurrently a Scientia Professor in Quantum Engineering at UNSW Sydney, an ARC Laureate Fellow and a Member of the Executive Board of the Sydney Quantum Academy. Diraq is a quantum computing startup pioneering silicon spin qubits, based in Australia. The discussion delves into the technical foundations, manufacturing breakthroughs, scalability, and future roadmap of silicon-based quantum computers—all with an industrial and commercial focus.
Key Topics and Insights
1. What Sets Diraq Apart
Diraq's quantum computers use silicon spin qubits, differing from the industry's more familiar modalities like superconducting, trapped ion, or neutral atom qubits. Their technology leverages quantum dots—tiny regions where electrons are trapped within modified silicon transistors. The quantum information is encoded in the spin direction of these trapped electrons—a method with roots stretching back over two decades.
2. Manufacturing & Scalability
Diraq modifies standard CMOS transistors, making qubits that are tens of nanometers in size, compared to the much larger superconducting devices. This means millions of qubits can fit on a single chip. The company recently demonstrated high-fidelity qubit manufacturing on standard 300mm wafers at commercial foundries (GlobalFoundries, IMEC), matching or surpassing previous experimental results—all fidelity metrics above 99%.
3. Architectural Innovations
Diraq's chips integrate both quantum and conventional classical electronics side by side, using standard silicon design toolchains like Cadence. This enables leveraging existing chip design and manufacturing expertise, speeding progress towards scalable quantum chips. Movement of electrons (and thus qubits) across the chip uses CMOS bucket-brigade techniques, similar to charge-coupled devices. This means fast (
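The "millions of qubits on a single chip" claim follows from simple geometry. The sketch below compares a hypothetical 100 nm spin-qubit pitch with a roughly 100 micrometre superconducting-qubit pitch on a 1 cm square die; both pitch values are illustrative assumptions, not Diraq's published device dimensions:

```python
# Rough geometry behind the "millions of qubits per chip" claim: count
# device sites on a square die at two different pitches. Both pitches are
# illustrative order-of-magnitude assumptions, not published dimensions.

DIE_EDGE_M = 0.01  # 1 cm die edge, in metres

def sites_per_die(pitch_m):
    """Number of device sites on a square die at the given pitch."""
    per_edge = DIE_EDGE_M / pitch_m
    return per_edge ** 2

spin = sites_per_die(100e-9)             # spin qubits at ~100 nm pitch
superconducting = sites_per_die(100e-6)  # superconducting qubits at ~100 um pitch

# The thousand-fold smaller pitch buys a million-fold density advantage.
assert round(spin / superconducting) == 1_000_000
```

Real devices need wiring, control electronics, and isolation between sites, so usable counts are far lower than the raw geometric limit, but the quadratic scaling with pitch is why nanometre-scale qubits change the picture.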
This week on the podcast we go over our reviews of the Valkyrie V360 Lite Liquid CPU Cooler and the Acer FA200 Gen4 Solid State Drive. We also discuss AMD possibly coming out with dual 3D V-Cache CPUs, AMD's AM6 socket, a $100 price tag for GTA 6, and all of the Battlefield 6 news!
“If we build it, will they come?” Jensen said: “If we don't build it, they can't come.” “You have to believe in what you believe, and you have to pursue that belief.” “This is at the core of our company.” The “big bet” Jensen Huang made 30 years ago: by inventing the technology AND the market at the same time, Jensen aimed to expand, augment, and accelerate general-purpose CPU computing with specialized accelerators for the video game niche. Jensen Huang later had the foresight to create CUDA, a compatible accelerated computing architecture that became a pillar of today's AI advancement. The visual hardware platform Nvidia began building in the mid-1990s demanded that Nvidia grow the other parts of the “flywheel”: the developer ecosystem, the install base, and the subsequent demand for the GPUs Nvidia invented. Read it as a 5-min blog. Watch it as a 12-min video. ©Joanne Z. Tan all rights reserved. Please don't forget to like it, comment, or better, SHARE IT WITH OTHERS! - To stay in the loop, subscribe to our Newsletter. (About 10 Plus Brand: we deliver the “whole 10 yards” of brand building, digital marketing, and content creation for business and personal brands. To contact us: 1-888-288-4533.) - Visit our websites: https://10plusbrand.com/ https://10plusprofile.com/ Phone: 888-288-4533 - Find us online by clicking or following these hashtags: #10PlusBrand #10PlusPodcast #JoanneZTan #10PlusInterviews #BrandDNA #BeYourOwnBrand #StandForSomething #SuperBowlTVCommercials #PoemsbyJoanneTan #GenuineVideo #AIXD (AI Experience Design) #theSecondRenaissance #2ndRenaissance
Not only do we never underestimate the power of sunglasses, we bring you another show after a "sick" week off. We've got some external storage to review, Threadripper high-wattage benchmarks, and some Zen time on top of all the other high quality news items and spontaneous commentary you know you want. And need. Topics below.
Timestamps:
00:00 Intro
01:04 Patreon
1:35 Food with Josh
03:24 Next-gen Radeon may have 96 CUs, 384-bit memory
14:18 Threadripper PRO 9995WX's insane Cinebench score (and power draw)
17:57 AM5 motherboards revised for Zen 6 CPUs?
22:55 We mention an exhaustive study of AMD memory speeds
28:30 NVIDIA adding native RISC-V support to CUDA
30:19 Each of us blocks Wi-Fi in our own special way
33:49 MAINGEAR goes retro
39:34 Self-destructing SSDs
42:03 Belkin notifies users that Wemo products will be bricked
45:22 (In)Security Corner
1:01:26 Gaming Quick Hits
1:12:00 Crucial X10 Portable SSD review
1:16:52 Picks of the Week
1:26:42 Outro
★ Support this podcast on Patreon ★
Any donation is greatly appreciated! 47e6GvjL4in5Zy5vVHMb9PQtGXQAcFvWSCQn2fuwDYZoZRk3oFjefr51WBNDGG9EjF1YDavg7pwGDFSAVWC5K42CBcLLv5U OR DONATE HERE: https://www.monerotalk.live/donate TODAY'S SHOW: In this episode of Monero Talk, legal expert Zach Shapiro joins Douglas Tuman to discuss U.S. cryptocurrency legislation, the legal challenges facing privacy tech, and the philosophical divide between building unstoppable systems versus working within regulatory frameworks. Shapiro, who runs a crypto-focused law firm and is involved with the Bitcoin Policy Institute and Peer-to-Peer Rights Foundation, outlines recent bills in Congress—including the Clarity Act and Genius Act—and their implications for developers and privacy advocates. He and Doug debate Bitcoin vs. Monero, focusing on fungibility and censorship resistance, with Shapiro defending Bitcoin's legal positioning and Doug championing Monero's privacy features. The episode also covers ongoing cases like Tornado Cash, the status of Samurai Wallet, and efforts to repeal New York's restrictive BitLicense. TIMESTAMPS: (00:02:12) – Introduction to Zach's background and involvement with the Bitcoin Policy Institute, Peer-to-Peer Rights Foundation. (00:08:13) – Zach's perspective on various technologies: Bitcoin, stablecoins, DAOs. (00:12:21) – Debate on fungibility: Bitcoin vs Monero. (00:17:09) – Is Bitcoin functionally fungible? Legal and policy perspectives. (00:20:00) – Cash vs Bitcoin legal treatment in cases of stolen funds. (00:28:57) – Mining decentralization: ASICs, CPUs, regulatory capture. (00:33:18) – Zach's overall take on Monero vs Bitcoin. (00:36:15) – Explanation of 3 key crypto-related bills (Genius Act, Clarity Act, Anti-CBDC Bill) (00:43:23) – Implications of Section 110 for privacy developers. (00:46:25) – Concerns over Genius Act enabling “backdoor CBDC.” (00:53:00) – What would Satoshi think about current crypto laws and stablecoins? 
(00:58:02) – Genius Act's effect on algorithmic stablecoins (likely banned). (01:02:12) – Genius Act vs Clarity Act: Pros and cons for Monero. (01:06:01) – Eliminating capital gains for crypto use — is it possible? (01:07:50) – Comments on the Bank Secrecy Act, impact of the Clarity Act for Monero, NY's BitLicense, and Monero exchange bans. (01:11:18) - Closing Remarks GUEST LINKS: https://x.com/zackbshapiro Purchase Cafe & tip the farmers w/ XMR! https://gratuitas.org/ Purchase a plug & play Monero node at https://moneronodo.com SPONSORS: Cakewallet.com, the first open-source Monero wallet for iOS. You can even exchange between XMR, BTC, LTC & more in the app! Monero.com by Cake Wallet - ONLY Monero wallet (https://monero.com/) StealthEX, an instant exchange. Go to (https://stealthex.io) to instantly exchange between Monero and 450 plus assets, w/o having to create an account or register & with no limits. WEBSITE: https://www.monerotopia.com CONTACT: monerotalk@protonmail.com ODYSEE: https://odysee.com/@MoneroTalk:8 TWITTER: https://twitter.com/monerotalk FACEBOOK: https://www.facebook.com/MoneroTalk HOST: https://twitter.com/douglastuman INSTAGRAM: https://www.instagram.com/monerotalk TELEGRAM: https://t.me/monerotopia MATRIX: https://matrix.to/#/%23monerotopia%3Amonero.social MASTODON: @Monerotalk@mastodon.social MONERO.TOWN: https://monero.town/u/monerotalk
- GPU-ASIC War
- Hyperscalers' CPUs, “GPUs", DPUs, QPUs
- Google TPU-7 and Open AI?
- Meta's AI chip tape out
- Microsoft's AI chip delays
- Why do engineering projects get delayed?
- Chip co-designers break into chip supply chain
Audio: https://orionx.net/wp-content/uploads/2025/06/HPCNB_20250630.mp3
The post HPC News Bytes – 20250630 appeared first on OrionX.net.
After what seems like ages, but was actually only a week off, we are BACK. Enjoy what some have called "PCPer's greatest podcast episode of all time, even if it was kind of a slow news cycle". It's the energy, really. Have some news on AMD Threadrippers, Intel ARC, depressing Microsoft news and even Google Earth!

00:00 Intro (with show and tell)
04:29 Patreon
05:57 Food with Josh
07:49 AMD launches Ryzen Threadripper PRO 9000 and Radeon AI PRO 9000
13:38 Next Intel desktop CPUs to offer more cores, more lanes, 100W less power?
21:37 Intel Arc A750 LE goes EOL
22:51 TPU does some interesting IPC testing with recent GPU architectures
26:35 PSA: Newest Steam overlay lets you track generated frames
27:11 A new Sound Blaster
31:12 A trio of Microsoft stories - mostly depressing
40:28 Google Earth turns 20
42:50 Podcast sponsor
44:23 (In)Security Corner
56:22 Gaming Quick Hits
1:03:48 Picks of the Week
1:16:48 Outro (it just sort of ends)

★ Support this podcast on Patreon ★
Anna Bicker, heise online editor-in-chief Dr. Volker Zota, and Malte Kirchner discuss, among other topics in this edition of the #heiseshow: - Speechless: Is AI changing our vocabulary? A recent study shows how artificial intelligence influences our language. What effects does increased AI use have on our everyday communication? Are we losing linguistic diversity to AI-generated texts? And how can we ensure that human creativity in language use is preserved? - Uncovered: How to protect yourself from counterfeit processors – Counterfeit processors are a serious problem in the IT industry. How do you spot counterfeit CPUs, and what risks do they pose for businesses and home users? What measures can retailers and buyers take to protect themselves? And how should the industry respond to this threat? Guest: Nico Ernst. - Lifted off: China and its maglev plans – China is planning a revolutionary maglev train. A recent test reached a speed of 650 km/h. Is that speed safe for passengers? What challenges must be overcome before commercial deployment? And could this technology find applications in other countries? Also back: a nerd birthday, the WTF of the week, and tricky quiz questions.
The future of AI isn't coming; it's already here. With NVIDIA's recent announcement of forthcoming 600kW+ racks, alongside the skyrocketing power costs of inference-based AI workloads, now's the time to assess whether your data center is equipped to meet these demands. Fortunately, two-phase direct-to-chip liquid cooling is prepared to power today's AI boom—and accommodate the next few generations of high-powered CPUs and GPUs. Join Accelsius CEO Josh Claman and CTO Dr. Richard Bonner as they walk through the ways in which their NeuCool™ 2P D2C technology can safely and sustainably cool your data center. During the webinar, Accelsius leadership will illustrate how NeuCool can reduce energy use by up to 50% vs. traditional air cooling, drastically slash operational overhead vs. single-phase direct-to-chip, and protect your critical infrastructure from any leak-related risks. While other popular liquid cooling methods require constant oversight or designer fluids to maintain peak performance, two-phase direct-to-chip technologies require less maintenance and lower flow rates to achieve better results. Beyond a thorough overview of NeuCool, viewers will take away these critical insights:

The deployment of Accelsius' Co-Innovation Labs—global hubs enabling data center leaders to witness NeuCool's thermal performance capabilities in real-world settings

Our recent testing at 4500W of heat capture—the industry record for direct-to-chip liquid cooling

How Accelsius has prioritized resilience and stability in the midst of global supply chain uncertainty

Our upcoming launch of a multi-rack solution able to cool 250kW across up to four racks

Be sure to join us to discover how two-phase direct-to-chip cooling is enabling the next era of AI.
John is joined by Spencer Collins, Executive Vice President and Chief Legal Officer of Arm Holdings, the UK-based semiconductor design firm known for powering over 99% of smartphones globally with its energy-efficient CPU designs. They discuss the legal challenges that arise from Arm's unique position in the semiconductor industry. Arm has a distinctive business model, centered on licensing intellectual property rather than manufacturing processors. This model is evolving as Arm considers moving "up the stack," potentially entering into processor production to compete more directly in the AI hardware space. Since its $31 billion acquisition by SoftBank in 2016, Arm has seen tremendous growth, culminating in an IPO in 2023 at a $54 billion valuation, and its market value has nearly doubled since. AI is a major strategic focus for Arm, as its CPUs are increasingly central to AI processing in cloud and edge environments. Arm's high-profile AI projects include Nvidia's Grace Hopper superchip and Microsoft's new AI server chips, both of which rely heavily on Arm CPU cores. Arm is positioned to be a key infrastructure player in AI's future based on its broad customer base, the low power consumption of its semiconductors, and their extensive security features. Nvidia's proposed $40 billion acquisition of Arm collapsed due to regulatory pushback in the U.S., Europe, and China. This led SoftBank to pivot to taking 10% of Arm public. Arm is now aggressively strengthening its intellectual property strategy, expanding patent filings, and upgrading legal operations to better protect its innovations in the AI space. Spencer describes his own career path—from law firm M&A work to a leadership role at SoftBank's Vision Fund, where he worked on deals like the $7.7 billion Uber investment—culminating in his current post.
He suggests that general counsel for major tech firms must be intellectually agile, invest in best-in-class advisors, and maintain geopolitical awareness to navigate today's rapidly changing legal and regulatory landscape.

Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi
A handheld Xbox that's really an ROG Ally with a new Ryzen processor?? An LCD that actually NEEDS bright sunlight like a Game Boy Color?? (Oh, and Josh's legendary food segment.) There's some sad EVGA news mixed in there with a cool new GOG feature and too many security stories.

Timestamps:
00:00 Intro
00:39 Patreon
01:20 Food with Josh
03:30 ASUS ROG Xbox Ally handhelds have new AMD Ryzen Z2 processors
06:51 Nintendo sold a record number of Switch 2 consoles
08:37 NVIDIA N1X competitive with high-end mobile CPUs?
12:38 Samsung now selling 3GB GDDR7 modules
16:27 Apple uses car model years now, and Tahoe is their last OS supporting Intel
22:01 EVGA motherboards have issues with RTX 50 GPUs?
27:48 Josh talks about a new PNY flash drive
30:01 (In)Security Corner
54:07 Gaming Quick Hits
1:00:46 Eazeye Monitor 2.0 - an RLCD monitor review
1:11:53 Picks of the Week
1:33:21 Outro

★ Support this podcast on Patreon ★
In this episode of the Data Center Frontier Show, we sit down with Kevin Cochrane, Chief Marketing Officer of Vultr, to explore how the company is positioning itself at the forefront of AI-native cloud infrastructure, and why they're all-in on AMD's GPUs, open-source software, and a globally distributed strategy for the future of inference. Cochrane begins by outlining the evolution of the GPU market, moving from a scarcity-driven, centralized training era to a new chapter focused on global inference workloads. With enterprises now seeking to embed AI across every application and workflow, Vultr is preparing for what Cochrane calls a “10-year rebuild cycle” of enterprise infrastructure—one that will layer GPUs alongside CPUs across every corner of the cloud. Vultr's recent partnership with AMD plays a critical role in that strategy. The company is deploying both the MI300X and MI325X GPUs across its 32 data center regions, offering customers optimized options for inference workloads. Cochrane explains the advantages of AMD's chips, such as higher VRAM and power efficiency, which allow large models to run with fewer GPUs—boosting both performance and cost-effectiveness. These deployments are backed by Vultr's close integration with Supermicro, which delivers the rack-scale servers needed to bring new GPU capacity online quickly and reliably. Another key focus of the episode is ROCm (Radeon Open Compute), AMD's open-source software ecosystem for AI and HPC workloads. Cochrane emphasizes that Vultr is not just deploying AMD hardware; it's fully aligned with the open-source movement underpinning it. He highlights Vultr's ongoing global ROCm hackathons and points to zero-day ROCm support on platforms like Hugging Face as proof of how open standards can catalyze rapid innovation and developer adoption. “Open source and open standards always win in the long run,” Cochrane says. 
“The future of AI infrastructure depends on a global, community-driven ecosystem, just like the early days of cloud.” The conversation wraps with a look at Vultr's growth strategy following its $3.5 billion valuation and recent funding round. Cochrane envisions a world where inference workloads become ubiquitous and deeply embedded into everyday life—from transportation to customer service to enterprise operations. That, he says, will require a global fabric of low-latency, GPU-powered infrastructure. “The world is going to become one giant inference engine,” Cochrane concludes. “And we're building the foundation for that today.” Tune in to hear how Vultr's bold moves in open-source AI infrastructure and its partnership with AMD may shape the next decade of cloud computing, one GPU cluster at a time.
Episode 61: What will the next generation of AI-powered PCs mean for your everyday computing—and how will features like on-device AI, privacy controls, and new processors transform our digital lives? Matt Wolfe (https://x.com/mreflow) is joined by Pavan Davuluri (https://x.com/pavandavuluri), Corporate Vice President of Windows and Devices at Microsoft, who's leading the charge on bringing AI to mainstream computers. In this episode of The Next Wave, Matt dives deep with Pavan into the world of AI PCs, exploring how specialized hardware like NPUs (Neural Processing Units) make AI more accessible and affordable. They break down the difference between CPUs, GPUs, and NPUs, and discuss game-changing Windows features like Recall—digging into the privacy safeguards and how AI can now run locally on your device. Plus, you'll hear Satya Nadella (https://x.com/satyanadella), Microsoft's CEO, share his vision for how agentic AI could revolutionize healthcare and what the future holds for AI-powered Windows experiences. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) NPUs: The Third Processor Revolution (05:41) NPU Efficiency in AI Devices (09:31) Windows Empowering Users Faster (13:00) Evolving Windows Ecosystem Opportunities (13:49) AI Enhancing M365 Copilot Research (15:43) Satya Nadella On AI and Healthcare — Mentions: Want the ultimate guide to Advanced Prompt Engineering? 
Get it here: https://clickhubspot.com/wbv Pavan Davuluri: https://www.linkedin.com/in/pavand/ Satya Nadella: https://www.linkedin.com/in/satyanadella/ Microsoft: https://www.microsoft.com/ Microsoft 365: https://www.microsoft365.com/ Microsoft Recall https://learn.microsoft.com/en-us/windows/ai/recall/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Everyone Counts by Dr. Jürgen Weimann - the podcast about transformation with enthusiasm
In this episode I speak with Henrik Klages, Managing Partner at TNG Technology Consulting, about the fascinating and rapid development of large language models (LLMs) – and what it means for all of us. Henrik explains in an accessible way how LLMs work, why GPUs matter more than CPUs, and why the myth of merely predicting the "next word" underestimates the true power of these systems. He also dispels common misconceptions about AI and uses concrete examples from practice and research to show how companies must act now to avoid falling behind.
Robert Hallock, VP and GM at Intel, joins us for a deep dive into the rise of AI PCs and why they're more than just a buzzword. We unpack how new hardware accelerators are making smarter, faster, and more private computing possible, and why local, offline AI is about to become as essential as graphics in tomorrow's laptops. Robert explains Intel's ecosystem strategy, the real differences between CPUs, GPUs, and NPUs, and what it will take for AI features to reach everyone, not just creative pros but everyday users. Support the show on Patreon! http://patreon.com/aiinsideshow Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 0:00:00 - Podcast begins 0:01:41 - Defining the AI PC: What Makes It Different and Why Now? 0:03:47 - Architectural Shifts: How AI PCs Differ from Traditional PCs 0:05:29 - Intel's Role in the AI Ecosystem: Hardware, Software, and Industry Enablement 0:08:20 - Lessons from the Past: The Intel Web Tablet and Driving Industry Change 0:09:32 - Hardware Evolution: What Needs to Change for AI PCs? 0:11:02 - Real-World AI PC Use Cases: Enterprise, Creative, and Consumer Adoption Waves 0:13:51 - Local vs. Cloud AI: Privacy, Personalization, and the Value of On-Device AI 0:16:50 - Trust and Branding: The Meaning of “AI Inside” for Intel 0:19:26 - Accessibility and User Personas: Who Benefits from AI PCs Today? 0:22:30 - The Graphics-AI Connection: Why GPUs Became Essential for AI Workloads 0:25:10 - The Evolution of GPUs: From Graphics to AI Powerhouses 0:26:56 - Gaming's Role in Driving AI Adoption 0:28:00 - Historical Tech Drivers: Media, Typography, and Early AI Tools 0:29:37 - The Local AI Movement: Are We at an Inflection Point? 
0:30:44 - AI Hardware Breakdown: CPUs, GPUs, and NPUs Explained 0:33:49 - Internal Challenges: Education and Customer Awareness at Intel 0:36:06 - Robert Hallock's Role at Intel and Closing Thoughts 0:37:17 - Thank you to Robert Hallock and episode wrap-up Learn more about your ad choices. Visit megaphone.fm/adchoices
At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs.

Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud:
Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure
A2A, MCP, Kafka and Flink: The New Stack for AI Agents

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this episode of Book Overflow, Carter and Nathan discuss the first half of Grokking Concurrency by Kirill Bobrov! Join them as they discuss the basic building blocks of concurrency, how concurrency has evolved over time, and how building concurrent applications can increase performance!

Go Proverbs: https://go-proverbs.github.io/

-- Books Mentioned in this Episode --
Note: As an Amazon Associate, we earn from qualifying purchases.
Grokking Concurrency by Kirill Bobrov: https://amzn.to/3GRbnby (paid link)
Web Scalability for Startup Engineers by Artur Ejsmont: https://amzn.to/3F1VWwF (paid link)

00:00 Intro
02:07 About the Book and Author
03:35 Initial Thoughts on the Book
09:12 What is Concurrency vs Parallelism
12:35 CPUs and Moore's Law
22:19 IO Performance, Embarrassingly Parallel and Conway's Law
28:25 Building Blocks of Concurrency: Processes and Threads
33:05 Memory Sharing vs Communicating
39:13 Multitasking and Context Switching
45:24 Task Decomposition and Data Pipelines
52:35 Final Thoughts

Spotify: https://open.spotify.com/show/5kj6DLCEWR5nHShlSYJI5L
Apple Podcasts: https://podcasts.apple.com/us/podcast/book-overflow/id1745257325
X: https://x.com/bookoverflowpod
Carter on X: https://x.com/cartermorgan
Nathan's Functionally Imperative: www.functionallyimperative.com

Book Overflow is a podcast for software engineers, by software engineers, dedicated to improving our craft by reading the best technical books in the world. Join Carter Morgan and Nathan Toups as they read and discuss a new technical book each week!

The full book schedule and links to every major podcast player can be found at https://www.bookoverflow.io
When adopting artificial intelligence systems for one or more areas of the business, companies should start each project in a structured, well-planned way – thinking smaller, if need be – and then, guided by the data, scale their use of AI more assertively, avoiding the hype of investing resources and time in such an innovative technology without much planning, just to ride the wave of the moment. To discuss this topic and the initiatives to democratize AI for large, medium, and small businesses – through processing on CPUs, which are cheaper and more energy-efficient than the big GPU-based systems, a technology that has been gaining ground in the market – and the marriage of artificial intelligence with open-source development, which encourages collaboration and the integration of ecosystems with different partners, Start Eldorado welcomes Sandra Vaz, country manager of Red Hat for Brazil, who talked about these and other topics with host Daniel Gonzales. The program airs every Wednesday at 9 p.m. on FM 107.3 across Greater São Paulo, as well as on the website, app, digital channels, and voice assistants. See omnystudio.com/listener for privacy information.
At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm's decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm's architecture through vital tools and system software.

Wafaa also challenged the hype around GPUs in AI, asserting that CPUs—especially those enhanced with Arm's Scalable Matrix Extension (SME2) and Scalable Vector Extension (SVE2)—are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm's innovations aim to reduce dependency on expensive GPU fleets.

On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing.

Learn more from The New Stack about the latest insights about Arm:
Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm
Arm: See a Demo About Migrating a x86-Based App to ARM64

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
What distinguishes CPUs from GPUs in architecture, and how does this impact their performance in computing tasks? Why are GPUs considered better at handling tasks like graphics rendering compared to CPUs? How do different rendering techniques in games versus offline programs affect the processing demands on CPUs and GPUs? ... we explain like I'm five Thank you to the r/explainlikeimfive community and in particular the following users whose questions and comments formed the basis of this discussion: insane_eraser, popejustice, warlocktx, pourliver, dmartis, and arentol. To the community that has supported us so far, thanks for all your feedback and comments. Join us on Twitter: https://www.twitter.com/eli5ThePodcast/ or send us an e-mail: ELI5ThePodcast@gmail.com
Heimir Thor Sverrisson joins Robby to discuss the importance of software architecture in long-term maintainability. With over four decades in the industry, Heimir has witnessed firsthand how poor architectural decisions can set teams up for failure. He shares his experiences mentoring engineers, tackling technical debt, and solving large-scale performance problems—including one bank's misguided attempt to fix system slowness by simply adding more CPUs. Heimir also discusses his work at MojoTech, the value of code reviews in consulting, and his volunteer efforts designing radiation-tolerant software for satellites.

Episode Highlights
[00:01:12] Why architecture is the foundation of maintainability – Heimir explains why starting with the wrong architecture dooms software projects.
[00:02:20] Upfront design vs. agile methodologies – The tension between planning and iterative development.
[00:03:33] When architecture becomes the problem – How business pivots can render initial designs obsolete.
[00:05:06] The rising demand for rapid software delivery – Why modern projects have less time for deep architectural planning.
[00:06:15] Defining technical debt in practical terms – How to clean up code without waiting for permission.
[00:09:56] The rewrite that never launched – What happens when a company cancels a multi-million-dollar software project.
[00:12:43] How a major bank tackled system slowness the wrong way – Adding CPUs didn't solve their performance problems.
[00:15:00] Performance tuning as an ongoing process – Why fixing one bottleneck only reveals the next.
[00:22:34] How MojoTech mentors instead of manages – Heimir explains how their consultancy approaches team development.
[00:27:54] Building software for space – How AMSAT develops radiation-resistant software for satellites.
[00:32:52] Staying relevant after four decades in tech – The power of curiosity in a constantly changing industry.
[00:34:26] How AI might (or might not) help maintainable software – Heimir shares his cautious optimism.
[00:37:14] Non-technical book recommendation – The Man Who Broke Capitalism and its relevance to the tech industry.

Resources & Links
Heimir Thor Sverrisson on LinkedIn
Heimir's GitHub
MojoTech
AMSAT – Amateur Radio Satellite Organization
The Man Who Broke Capitalism
How to Make Things Faster
The PC hardware market has finally settled down following the release of AMD's new Radeon RX 9000 series, with no more major CPU or GPU product launches expected later this year. So we assess the state of the PC union a bit this week, with a focus on the new AMD cards and their dramatically improved upscaling, ray-tracing, video encoding, and perhaps most of all, price. Plus, some updates on Intel's low-end Battlemage, Nvidia's mounting 50-series woes, the possible delay of Intel's next-gen Panther Lake CPU to 2026, new rumored low-power CPUs for Brad to get excited about running a Linux router on, and more. Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
TikTok is back on the App Store and the Play Store in the U.S. Elon Musk's DOGE Website Is Already Getting Hacked IRS Acquiring Nvidia Supercomputer Elon's bid for OpenAI is about making the for-profit transition as painful as possible for Altman, Intel has spoken with the Trump administration and TSMC over the past few months about a deal for TSMC to take control of Intel's foundry business Broadcom Joins TSMC In Considering Deals For Parts of Intel Arm to start making server CPUs in-house Thomson Reuters wins the first major US AI copyright ruling against fair use, in a case filed in May 2020 against legal research AI startup Ross Intelligence Perplexity just made AI research crazy cheap—what that means for the industry YouTube Surprise: CEO Says TV Overtakes Mobile as "Primary Device" for Viewing Google Maps now shows the 'Gulf of America' Scarlett Johansson Urges Government to Limit A.I. After Faked Video of Her Opposing Kanye West Goes Viral Google CEO Sees 'Useful' Quantum Computers 5 to 10 Years Away Trump says he has directed US Treasury to stop minting new pennies, citing rising cost Nearly 10 years after Data and Goliath, Bruce Schneier says: Privacy's still screwed Amazon's revamped Alexa might launch over a month after its announcement event Meta's Brain-to-Text AI Host: Leo Laporte Guests: Wesley Faulkner, Iain Thomson, and Brian McCullough Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: shopify.com/twit oracle.com/twit zscaler.com/security ziprecruiter.com/twit joindeleteme.com/twit promo code TWIT
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Reminder: 7-Zip & MoW
The Mark of the Web (MoW) must be added to any files extracted from ZIP or other compound file formats. 7-Zip does not do so by default unless you alter the default configuration. https://isc.sans.edu/diary/Reminder%3A%207-Zip%20%26%20MoW/31668

Apple Fixes 0-Day
Apple released updates to iOS and iPadOS fixing a bypass for USB Restricted Mode. The vulnerability is already being exploited. https://support.apple.com/en-us/122174

AMD Zen CPU Microcode Update
An attacker is able to replace microcode on some AMD CPUs. This may alter how the CPUs function, and Google released a PoC showing how it can be used to manipulate the random number generator. https://github.com/google/security-research/security/advisories/GHSA-4xq7-4mgh-gp6w

Trimble Cityworks Exploited
CISA added a recent Trimble Cityworks vulnerability to its list of exploited vulnerabilities. https://learn.assetlifecycle.trimble.com/i/1532182-cityworks-customer-communication-2025-02-06-docx/0?

Google Tag Manager Skimmer Steals Credit Card Info
Sucuri released a blog post with updates on the Magecart campaign. The latest version injects malicious code as part of the Google Tag Manager / Analytics code. https://blog.sucuri.net/2025/02/google-tag-manager-skimmer-steals-credit-card-info-from-magento-site.html
Dylan Patel is the founder of SemiAnalysis, a research & analysis company specializing in semiconductors, GPUs, CPUs, and AI hardware. Nathan Lambert is a research scientist at the Allen Institute for AI (Ai2) and the author of a blog on AI called Interconnects.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep459-sc
See below for timestamps, and to give feedback, submit questions, contact Lex, etc.
CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact
EPISODE LINKS:
Dylan's X: https://x.com/dylan522p
SemiAnalysis: https://semianalysis.com/
Nathan's X: https://x.com/natolambert
Nathan's Blog: https://www.interconnects.ai/
Nathan's Podcast: https://www.interconnects.ai/podcast
Nathan's Website: https://www.natolambert.com/
Nathan's YouTube: https://youtube.com/@natolambert
Nathan's Book: https://rlhfbook.com/
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Invideo AI: AI video generator. Go to https://invideo.io/i/lexpod
GitHub: Developer platform and AI code editor. Go to https://gh.io/copilot
Shopify: Sell stuff online. Go to https://shopify.com/lex
NetSuite: Business management software. Go to http://netsuite.com/lex
AG1: All-in-one daily nutrition drinks.
Go to https://drinkag1.com/lex
OUTLINE:
(00:00) - Introduction
(13:28) - DeepSeek-R1 and DeepSeek-V3
(35:02) - Low cost of training
(1:01:19) - DeepSeek compute cluster
(1:08:52) - Export controls on GPUs to China
(1:19:10) - AGI timeline
(1:28:35) - China's manufacturing capacity
(1:36:30) - Cold war with China
(1:41:00) - TSMC and Taiwan
(2:04:38) - Best GPUs for AI
(2:19:30) - Why DeepSeek is so cheap
(2:32:49) - Espionage
(2:41:52) - Censorship
(2:54:46) - Andrej Karpathy and magic of RL
(3:05:17) - OpenAI o3-mini vs DeepSeek r1
(3:24:25) - NVIDIA
(3:28:53) - GPU smuggling
(3:35:30) - DeepSeek training on OpenAI data
(3:45:59) - AI megaclusters
(4:21:21) - Who wins the race to AGI?
(4:31:34) - AI agents
(4:40:16) - Programming and AI
(4:47:43) - Open source
(4:56:55) - Stargate
(5:04:24) - Future of AI
PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
Questions! The time to answer them is here again, and this month we do our best with such topics as the relative scarcity of nuclear energy, nested comment systems, USB thumb drives versus portable SSDs, browser RAM usage, why CPUs get faster from one model to the next, the difficulty of naming operating systems, phones without camera bumps, learning to read an analog clock (and a lot of other things), and when we'll finally get around to reviewing that high-tech toilet.
Submit ideas about secret information encoding in the world around us for an upcoming episode: https://forms.gle/VYgL9gLeSBKkNtfy9
Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod