This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.
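To make the disaggregation idea concrete, here is a minimal sketch of cost-aware routing across hardware tiers. This is not Gimlet's system; the tier names, throughput and price figures, and the latency-budget rule are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class HardwareTier:
    name: str
    tokens_per_sec: float   # sustained decode throughput (illustrative)
    cost_per_hour: float    # on-demand price, USD (illustrative)

TIERS = [
    HardwareTier("H100", tokens_per_sec=4000, cost_per_hour=6.00),
    HardwareTier("A10G", tokens_per_sec=900,  cost_per_hour=1.00),
    HardwareTier("CPU",  tokens_per_sec=80,   cost_per_hour=0.05),
]

def cost_per_million_tokens(tier: HardwareTier) -> float:
    tokens_per_hour = tier.tokens_per_sec * 3600
    return tier.cost_per_hour / tokens_per_hour * 1_000_000

def route(latency_budget_tps: float) -> HardwareTier:
    """Pick the cheapest tier that still meets the latency budget."""
    feasible = [t for t in TIERS if t.tokens_per_sec >= latency_budget_tps]
    return min(feasible, key=cost_per_million_tokens)

# An interactive chat turn needs fast decode; a background agent step may not.
print(route(latency_budget_tps=2000).name)  # -> H100 (only tier fast enough)
print(route(latency_budget_tps=50).name)    # -> CPU (cheapest per token here)
```

With these made-up numbers, latency-tolerant agent steps land on the cheapest-per-token tier while interactive traffic stays on the H100s, which is the unit-economics argument in a nutshell.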
The race for semiconductors has become the centerpiece of today's technological rivalry. Among all the nations trying to secure a place in this arena, none is advancing as quickly as China. Technological self-sufficiency has become the top priority in Beijing, and the chip sector, once a point of vulnerability, has turned into the center of a long-term national strategy.

Thiago de Aragão, political analyst

Today it is becoming clear that China is dangerously close to achieving independence in areas that were once a near-monopoly of the United States and its allies. That shift profoundly affects the global geopolitical balance, the sector's major companies, and the future of innovation itself. Until a few years ago, China imported practically everything sophisticated in semiconductors. It depended on foreign suppliers for artificial intelligence, supercomputing, and much of modern industry. But it decided to invert that logic. Through aggressive industrial policies, billions in state investment, and tax incentives capable of reshaping entire cities, the country set out to build a complete semiconductor chain, able to operate from design through fabrication and packaging. This was no timid move. China attracted engineers from other countries, trained hundreds of thousands of qualified professionals, built industrial parks dedicated exclusively to the sector, and began developing its own equipment and software. There are still sensitive areas where the country has not achieved leadership, such as extreme ultraviolet lithography, but the advance has been so fast that the gap is no longer decisive. Huawei, for example, managed to produce a smartphone with a domestic 7-nanometer chip, something few analysts considered possible in so little time. Beijing pursues the goal of self-sufficiency not as rhetoric but as a long-term state project. This Chinese advance comes just as the geopolitical dispute between China and the United States reaches unprecedented temperatures. For the Americans, advanced chips have ceased to be mere industrial components and are now treated as fundamental national security assets. Washington's response was to tighten export rules, deeply restricting Chinese access to the most sophisticated semiconductors and to the machines used to produce them. China, in turn, reads these restrictions as an attempt at containment and responds by reinforcing its own industrial muscle. By limiting exports of strategic minerals and scaling up domestic investment, Beijing signals that it is prepared to fight its own asymmetric battle. What was once an essentially economic dispute has become a systemic contest between two models of power.

Impact on the US

In this environment, the sector's giants have begun to feel very concrete impacts. Nvidia was hit hardest. For years it dominated the Chinese market for artificial intelligence chips. When the United States restricted exports of the most advanced models, the company saw its share practically zeroed out in the planet's largest AI market. It tried to adapt by creating less powerful versions of its chips, but even those came under threat of being blocked. At the same time, Chinese companies positioned themselves to fill the vacuum. Huawei advanced aggressively with its own AI chips and began supplying a large share of domestic projects.

Chinese startups gained immediate momentum, backed by a government committed to accelerated technological substitution. For AMD and Intel, the picture follows the same lines. The requirement that state-linked data centers use only domestic chips has reduced those companies' growth prospects and made clear that the push for Chinese self-sufficiency will not be reversed. Even in ordinary PCs and servers, China is increasingly betting on designing and manufacturing its own CPUs and GPUs, a slow but steady erosion of the American manufacturers' space. Qualcomm faces a different kind of vulnerability. Almost half of its global revenue depends on the Chinese smartphone ecosystem. If China consolidates domestic production of mobile chips at industrial scale, and if companies like Huawei regain a dominant position in 5G networks and premium handsets, Qualcomm runs a real risk of losing one of its revenue pillars.

Strategies under review

While all this unfolds, the rest of the world is trying to react. The United States launched the CHIPS Act to bring fabs onto national territory and strengthen its industry. Europe adopted its own measures, trying to recover relevance in a sector it abandoned decades ago. Japan, South Korea, Taiwan, and India entered the contest with tax incentives, technology diplomacy, and promises to reduce external dependence. For the first time in decades, countries began reorganizing supply chains not by the classic economic criterion but by political alignment and perceived risk. The logic is simple: friends produce with friends. The price is lost efficiency and higher costs. The gain is a sense, however relative, of strategic security. Even so, fragmenting a system as globally integrated as semiconductors means disturbing the entire structure of the digital economy. The chains that once connected Japan, Taiwan, the Netherlands, China, and the United States are now reconfiguring into parallel blocs, fragmenting what was once the most globalized industry on the planet. It is a slow, costly, turbulent process, but an inevitable one as tensions rise.

China's advance

All of this shows that China's advance in semiconductor manufacturing is not an isolated fact. It is redefining markets, geopolitics, and development models. Companies like Nvidia, AMD, Intel, and Qualcomm realize that, even as historic leaders, they have lost a market where the rules of the game have changed. Countries realize that the flow of technology is no longer neutral and has become a strategic weapon. Consumers will realize, in the coming years, that some technologies will only be available within certain blocs, while others follow different paths. The story is still being written, and it is too early to say who will hold the definitive advantage. But one thing is clear: the fight over chips is today the fight for control of the digital future, of artificial intelligence, of advanced computing, of defense, and of everything that depends on processing. China is accelerating, and the rest of the world must decide whether to run alongside, put up obstacles, or try to reinvent the game. The twenty-first century will be written, in large part, by whoever masters this industry. And that mastery is no longer as concentrated as it was in the recent past.
Cloudflare asks Amazon to hold its virtual "beer", HEVC (a.k.a. H.265) support being removed from CPUs ... sorta, Steam Machine priced like lobster, and will Intel bLLC compete with AMD 3D V-Cache? But we mostly complain about DDR5 in this exciting episode!

00:00 Intro
00:30 Patreon
01:44 Food with Josh
04:10 We talk about the DDR5 problem for the third week in a row
18:44 TSMC confirmed September power outage at Arizona fab
23:01 Dell and HP removing HEVC from some laptops
26:33 Unpowered SSDs slowly lose data
32:32 Is Intel bLLC really an X3D competitor?
34:13 (in)Security Corner
42:01 Gaming Quick Hits
46:18 Picks of the Week
1:06:12 Outro

★ Support this podcast on Patreon ★
Much attention has been focused in the news on the useful life of GPUs. While the pervasive narrative suggests GPUs have a short lifespan and operators are "cooking the books," our research suggests that GPUs, like CPUs before them, have a significantly longer useful life than many claim.
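For a sense of why the assumed lifespan matters to the "cooking the books" debate, here is a back-of-the-envelope straight-line depreciation comparison; the server cost and lifespans below are hypothetical, not figures from the research:

```python
def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: the expense recognized each year."""
    return capex / useful_life_years

GPU_SERVER_COST = 250_000  # hypothetical 8-GPU server, USD

for life in (3, 5, 6):
    cost = annual_depreciation(GPU_SERVER_COST, life)
    print(f"{life}-year life: ${cost:,.0f}/year")
# 3-year life: $83,333/year
# 5-year life: $50,000/year
# 6-year life: $41,667/year
```

Stretching the assumed life from three years to six halves the annual expense, which is exactly why the accounting argument hinges on how long the hardware actually stays useful.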
In this episode, we welcome Lead Principal Technologist Hari Kannan to cut through the noise and tackle some of the biggest myths surrounding AI data management and the revolutionary FlashBlade//EXA platform. With GPU shipments now outstripping CPUs, the foundation of modern AI is shifting, and legacy storage architectures are struggling to keep up. Hari dives into the implications of this massive GPU consumption, setting the stage for why a new approach is desperately needed for companies driving serious AI initiatives. Hari dismantles three critical myths that hold IT leaders back. First, he discusses how traditional storage is ill-equipped for modern AI's millions of small, concurrent files, where metadata performance is the true bottleneck—a problem FlashBlade//EXA solves with its metadata-data separation and single namespace. Second, he addresses the outdated notion that high-performance AI is file-only, highlighting FlashBlade//EXA's unified, uncompromising delivery of both file and object storage at exabyte scale and peak efficiency. Finally, Hari explains that GPUs are only as good as the data they consume, countering the belief that only raw horsepower matters. FlashBlade//EXA addresses this by delivering reliable, scalable throughput, efficient DirectFlash Modules up to 300 TB, and the metadata performance required to keep expensive GPUs fully utilized and models training faster. Join us as we explore the blind spots in current AI data strategies during our "Hot Takes" segment and recount a favorite FlashBlade success story. Hari closes with a compelling summary of how Pure Storage's complete portfolio is perfectly suited to provide the complementary data management essential for scaling AI. Tune in to discover why FlashBlade//EXA is the non-compromise, exabyte-scale solution built to keep your AI infrastructure running at its full potential. For more information, visit: https://www.pure.ai/flashblade-exa.html Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/

00:00 Intro and Welcome
04:30 Primer on FlashBlade
11:32 Stat of the Episode on GPU Shipments
13:25 What is FlashBlade//EXA
18:58 Myth #1: Traditional Storage Challenges for AI Data
22:01 Myth #2: AI Workloads are not just File-based
26:42 Myth #3: AI Needs more than just GPUs
31:35 Hot Takes Segment
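To illustrate the metadata point in the first myth, here is a toy sketch of metadata-data separation; it is purely illustrative and not FlashBlade//EXA's implementation. The idea is that stat/list traffic, which dominates with millions of small files, never has to touch the data path:

```python
# Toy model: metadata lives in a dedicated index; payloads live on data nodes.
metadata = {}                          # path -> (size, data_node index)
data_nodes = [dict() for _ in range(4)]

def put(path: str, payload: bytes) -> None:
    node = hash(path) % len(data_nodes)
    data_nodes[node][path] = payload
    metadata[path] = (len(payload), node)

def stat(path: str):
    """Metadata-only operation: never touches a data node."""
    return metadata[path]

def get(path: str) -> bytes:
    """One metadata lookup, then a direct read from the owning data node."""
    _, node = metadata[path]
    return data_nodes[node][path]

put("/train/shard-00001.jpg", b"\xff\xd8...")
print(stat("/train/shard-00001.jpg"))  # (size, node) without a data read
```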
Shawn Tierney meets up with Mark Berger of Siemens to learn how Siemens integrates SIRIUS ACT devices (push buttons, selector switches, pilot lights) with PROFINET in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 253 Show Notes: Special thanks to Mark Berger of Siemens for coming on the show and sending us a sample! Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Thank you for tuning back in to The Automation Podcast. My name is Shawn Tierney from Insights. And today on the show, we have a special treat. We have Mark Berger back on from Siemens to bring us up to speed on SIRIUS ACT. He's gonna tell us all about the product, and then we're even gonna do a small demo and take a look at it working live. So with that said, let's go ahead and jump into this episode with Mark Berger from Siemens and learn all about their push buttons on PROFINET. Mark, it's been a while since you've been on the show. Thank you for coming back on and agreeing to talk about this. Mark Berger (Siemens): Oh, thank you so much. I truly appreciate you letting me be on. I appreciate your channel, and I enjoy watching it. And I'm excited to show you some of this great technology. I've got the PowerPoint up here. We'll do a simple PowerPoint to give you an overview, and then we'll dive into the hardware. Shawn Tierney (Host): Appreciate it. Thank you. Mark Berger (Siemens): No problem. So as we stated, this is SIRIUS ACT over PROFINET, and let me emphasize that the actuators (the push buttons, the e-stops, the selector switches) are all standard. If you have those on the shelf, the only thing PROFINET changes is that it removes the normal contact blocks and adds the PROFINET terminal blocks on the back. All the actuators we're showing are just standard actuators from the 22 millimeter push button line. So easy to use, modern design, performance in action, and extremely rugged and flexible. The 22 millimeter line is IP69K out of the box, which those who are in the food and beverage verticals will understand. That's for direct hose down and wash down, able to handle high-pressure washing without leaking past the actuator into the panel. So IP69K is great for dust, wash down, hosing, and rain, keeping any water from passing into the panel. Introduction-wise, these are the PROFINET push buttons. Again, it's the same actuators and the same connections; what we're going to exchange is the terminal blocks. As I stated, IP69K is standard; you don't need any extra covers or anything to fulfill that requirement, and it's insensitive to dust, oil, and caustic solutions, like citric acid where you're hosing down stainless steel parts. Now, what we're changing out here is the terminal blocks that take wiring. Usually on a push button you have two wires coming in, and for an illuminated one you have another two wires coming in and going out.
And after you have 20 or 30 push buttons, or even 10 or 15, you've got a substantial amount of wiring or cabling passing from the door over into the main cabinet of your control cabinet. With PROFINET push buttons, we're going to eliminate all that wiring, and in addition eliminate the input and output cards you would need for your PLC, and take it down to an Ethernet cable, an RJ45 cable, plus 24 volts. That's all that will pass from the cabinet onto the door where you're mounting your push buttons. So, a huge savings in the cost of wires; we're reducing all the wire outlay. And, you know, back in the day when I built panels, it was an art how you got all the wires nice and pretty, laid them out, and wire-tied them down, a piece of art on the backside. And then once it was all wired, of course, somebody said, "Hey, we forgot to add another selector switch," so you had to go back, cut all that stuff, and redo the whole layout. With PROFINET, it's extremely flexible and easy to adapt if you need something more, because you're not taking all that wiring back to the panel, passing it across the hinge of the door and so forth. Also, with a safety PLC you have PROFIsafe, so we can do e-stops on the door as you can see here in the picture, but we can do non-safe applications as well. Today we'll be doing some non-safe applications. The communication, again, is PROFINET, but just to touch on it real quick, we also offer our push buttons on IO-Link and on AS-Interface. So what is SIRIUS ACT with PROFINET? The first block, the interface module, goes on the back of your push button; that's where the Ethernet is plugged in and your 24 volts is plugged in. After that, each subsequent push button gets what we call a terminal module. Between the interface module and a terminal module, or from terminal module to terminal module, you can go up to one meter of cabling, and it's a ribbon cable; we'll show that here shortly. You can have up to 20 terminal modules, for a total of 21 push buttons, and from the first interface module all the way to the last push button you can go up to 10 meters. The interface module also provides the 24 volt power supply. And we have, as I stated, the non-safe version talking just PROFINET, and the safety version talking PROFIsafe on PROFINET. On the safety side, SIRIUS ACT can go up to SIL 3 and Performance Level e, as in echo. We also have the standard interface module without safety: you have the PLC, the interface module, and then the subsequent terminal modules. The cabling that goes from the interface module out to the terminal modules is a simple ribbon cable that comes into the back of the terminal modules. The only tool you need is a screwdriver: you push the cable into the terminal module and push down. It uses insulation-displacement "vampire" connections, so there's no stripping of wires and no mix-up. The indicator, which you'll see on the wires here in a minute, is a little red line that shows you which way the cable enters the terminal, and that's it. It's very straightforward.
It's very simple, tool-wise. As I stated, it's just like a normal push button that you'd put on, but we're going to remove the contact block and add the terminal module or the interface module in its place. Just to emphasize again, we can do PROFIsafe with a safety PLC and a safety controller, and we can give you all the safety requirements for either the ISO or the IEC specifications out there in the field. Here are some of the part numbers. The first, of course, is the interface module with the ability to do PROFIsafe. It additionally has four digital inputs, one digital output, and one analog input, and we'll talk about that a little more in a few minutes. Then there's the non-safe 24 volt version, and you have two versions of that one: one with just the standard 24 volts input, and an additional one with the four digital inputs, one digital output, and one analog input. So there are two different part numbers, one without the additional digital I/O and analog, and one with. The safety one comes in just the one version. Then you have what we call the terminal modules, and there are three versions. One terminal module is the command module only, mounted with two mechanical signaling blocks, so you have two contact blocks built in. Then you have one that's a terminal module with the command contact blocks plus an integrated LED, and you pick what color you want the LED to be; you can see the part number changes for red, blue, amber, and so on. And then there's an LED-only module, no contact blocks, just the LED. I think with the demo we're showing today, we're just going to show the contact-block-plus-LED module and the LED-only module. There are some other accessories on the safety side. There's a memory module where all the configuration is stored: the IP address, the configuration, everything. If the interface module gets broken and you have to replace it, you pull the memory module out, put the new interface module in, plug in the memory module, cycle the power, and it's up and running; all the configuration and the IP address are already there. The interface module does not come with an LED, so you're required to buy this LED right here if you need it, and that's what you use for the interface module. And then, of course, the ribbon cable that goes between the interface module and the terminal modules comes in five meter and 10 meter lengths. Okay, so what does it provide for you? Well, the benefits; I'll be very blunt. If it's just one or two buttons on a panel, it won't be that cost effective. Yes, we're reducing the I/O, the inputs and outputs, but the savings aren't the best. When you get up to about three or four push buttons, the cost savings are very much realized. And when you go up to 20 push buttons, yes, you're saving a lot of money, especially on the I/O cards you're no longer required to have.
And then, of course, all the wiring and the labor: getting it all wired up and doing all the loop checks to make sure that when you push this button, it's wired into the right terminal block on the I/O card, and so on. So the break-even is about two to three push buttons, where it becomes very cost effective to use this. Like I said earlier, without PROFINET push buttons it was all the wiring you brought across and landed on your I/O cards; with PROFINET push buttons all that goes away, and all you're bringing across that hinge onto the door is an Ethernet cable and 24 volts positive and negative. That's it. And emphasizing again, we can do PROFIsafe on those push buttons and e-stops. The e-stop can be part of your safety circuit and give you the safety levels you require, whether SIL or Performance Level, depending on the specification, IEC or ISO, that you follow within your plant. Okay, then hardware configuration. This is where we step into the reduction of engineering, helping you get going quicker and making sure engineering is done properly. You know, back in the day, we'd wire up all the wires coming from the push buttons (a selector switch, a start button, stop button, indicator lights, and so forth), and all those wires look the same. You put labels on them; you may have labeled one wrong and wired it into the wrong input or output card. So there's time spent doing loop checks, trying to say, "Yes, that's coming in at that input byte dot bit, and that should be the selector switch." With the PROFINET push buttons, we don't have to worry about that, and we're going to demonstrate that in a minute. You also get a full lineup of the push buttons in the Portal, so you can see the lineup and verify it is the parts you want. In TIA Portal you can see that the first item is the interface module, and then sequentially the terminal modules that have either just contact blocks, LED plus contact blocks, or just LEDs. We'll show that momentarily. It's all integrated into TIA Portal, it has a visual representation of all the push buttons, and it's simple and fast to configure. And there's no addressing for it. Some of the hardware out there requires addressing, making sure the address is right, and so on. This is standardized data management, and it's an extreme time and engineering saver for the user. Shawn Tierney (Host): Well, let me ask you a question about that. If there's no addressing, do the items show up in the order that they're wired? In other words, you're daisy-chaining, going cable to cable from device to device. Is that the order that they show up in? Mark Berger (Siemens): That's exactly right. Shawn Tierney (Host): Okay. Mark Berger (Siemens): So if you don't know which ones are what, you literally run your hand from the interface module, follow that cable, and the next one you see visually in the Portal will be the one it lands on first. Then there's a cable that leaves that one and goes into the next one, daisy-chained, and that's what's represented in that lineup.
And here in just a minute, we'll show that. Alright, thank you for that question. Okay, now once I've got it wired up, how do I know that I wired it properly? We're going to show that here in just a minute, but graphically you have the ability to see whether it's all wired up. You do not need to plug it into the PLC; all it needs is 24 volts. The PLC can come later and be plugged in later. There's no programming; this all comes out of the box. Once you plug it in, if you look at the backside, at the terminal blocks and the daisy-chained ribbon cable, and it's all green, you wired it up properly and it's working properly. But if you see a red light flashing at the interface module: if there's a problem with one of the terminal modules, a push button like number two or three or four, it bubbles up to the interface module to let it know, "Hey, we've got a problem, can you look to see where it's at?" As you see here, we may have a device that's defective, and it bubbles up into the interface module, and a red light lets you know that we may have a defective module. You know, something hammered it pretty hard, or it may have been miswired. In the second case down below, we've got a wiring error: there are no green lights on the back downstream of the fault, and that means you have a wiring error. Or, if everything works great, it's green lights across the board. Then the next level of this is, is my push button working? You push or actuate the push button or the selector switch, and the green light will flash to let you know that that terminal module or interface module is working properly. We've done our loop checks right there, before we've even plugged it into the PLC or your programmer has come out and sat down to work with it. We can prove that panel is ready to roll and ready to go, and you can set it aside. And if you've got four or five of the same panel, you can build them all up, power them up, verify it's all green lights across the board, set each one down, build up another one, and go on from there. So it shows you fast fault detection without any additional equipment or additional people to come in and help you. When we used to do loop checks, usually somebody pushed the button and then yelled at the programmer, "Hey, is this coming in at I0.0?" "Yeah, I see it." Then he pushed another one: "Hey, is this coming in on I0.1?" "No, it's coming in on I0.3." So that was two people, and more time, to do that loop check, or "ring out" as some people have called it. In this case you don't need to do that, and you'll see why in just a minute. And again, if we do have an interface module that got short-circuited or something hit it, you just pull the memory module out, plug it into the new one, bring in the ribbon cable, cycle the power, and you're up and running. Alright. And then these are just some of the handling options for how it handles the data, with projects and so forth, basic setups, options you can handle with this, like filling bottles. What we want to make sure you understand is that you can assign push buttons to work with whatever project you want them to.
So if you have six push buttons out there, and two of them are working on bottle filling while the rest are working on the labeling, you can separate those push buttons. Even though they're all tied together via PROFINET, you can use them in different applications across your machine. Shawn Tierney (Host): You're saying if I have multiple CPUs, I could have some buttons and lights work with CPU one, PLC one, and some work with PLC two? Mark Berger (Siemens): Yep. There's handling there; there's programming on the backside that needs to be done, but yes, that can happen. Shawn Tierney (Host): Alright. Mark Berger (Siemens): So, conclusion: integrated into TIA Portal, and we're going to show that here in a minute. A universal system, high flexibility with your digital inputs, digital outputs, and analogs, quick and easy installation (one man, one hand, no special tooling), and a substantial reduction in the wiring and labor to get it going. And then, again, integrated safety if required for your application. So with that, let's switch over to TIA Portal. I've already got a project started; I just called it Project Three. I've already got a PLC in, our new S7-1200 G2. And I've already built up the panel. Shawn, if you want to show your panel right here. Shawn Tierney (Host): Yeah, let me go ahead and switch the camera over to mine. So now everybody's seeing my overhead. Now, do you want me to turn it on at this point? It's off. Mark Berger (Siemens): Yeah, let's do it. Shawn Tierney (Host): Gonna turn it on, and all the lights came on. So we have some push buttons and pilot lights here, but the push buttons are illuminated, and now they've all gone off. Do you want me to show the back now? Mark Berger (Siemens): Yep. So what we did there is show that the LEDs are all working, and that happens at the initial powering up of the 24 volts. Now we're going to, you know, open up the cabinet and look inside, and now we're looking at the backside. If you remember from the PowerPoint, I said we'd have all green lights if everything's wired properly. And as you look, all the terminal modules have green lights, so that means it's all been wired properly. If you notice, you see a little red stripe on the ribbon cable; that's an indication. And then if you look on the interface module, Shawn, it says "out" right there at the bottom. There's a little dot, and that dot means that's where the red stripe goes, coming out. And then if you look just to the left a little bit, there's another "in", and there'd be a red dot underneath that ribbon cable showing you how the red stripe goes into it. Notice that everything's clear, so you can see that the wire gets engaged properly all the way in. Then all you do is take a screwdriver and push down, and the insulation-displacement connections come in and make the connections for you. There are no cable stripping tools or anything special for doing that. Another item while we're looking: in the bottom left-hand corner of that terminal module, you see kind of a T, then a circle, then another T. That's an indicator to let you know that's two contact blocks and an LED that you have on the backside. Shawn Tierney (Host): We're talking about right here?
Mark Berger (Siemens): Yep, right there. Shawn Tierney (Host): Okay. Mark Berger (Siemens): So that's an indicator to tell you what type of terminal block it is: that one is two contact blocks and an LED. And if you look at one with just a circle in the bottom left-hand corner, that means you just have an LED. So you have some indicators to show you what you're looking at. Today we're just using the LED-only modules and the contact-block-plus-LED combination; I don't have one on your demo that's just the contact blocks. Shawn Tierney (Host): Now, you were telling me about these earlier. Mark Berger (Siemens): Yeah. So if you look there on that second row of the terminal blocks, you have a UV and an AI, and I'll show that in the schematic in a little bit. That UV is a 10 volt output. If you put a 250 ohm or 250 k ohm potentiometer on it and bring that signal back into AI, you have an analog setpoint coming in that is automatically scaled zero to 1,000 counts for zero to 10 volts. You can then use that as a speed reference for a VFD. And it's already there; you don't have to scale it or anything. You can map it however you want: zero to 1,000 counts means zero to 500 PSI, or zero to 100 feet per second on a conveyor belt, and I'm just pulling numbers out. But that's the only real scaling you have to do; a zero to 1,000 count is what you'll see. Then you've got four digital inputs that you can use, and one digital output. Now, with the four, I kind of inquired, why just four? Well, let's say that you have a four-position joystick. You could wire all four positions into that interface module, and then the output could be something else, a local horn that you want or something like that. So in addition to the push buttons, you also have a small distributed I/O block right there in your panel. Shawn Tierney (Host): Which is cool. Yeah, maybe you have something else on the panel that doesn't fit in with this line of push buttons and pilot lights, like a joystick, right? And that makes a lot of sense. You were saying too, if I push the button, I can test to see if it's working. Mark Berger (Siemens): Correct. Go right ahead. Shawn Tierney (Host): I'm pushing that middle one right there. You can see it blinking now. Mark Berger (Siemens): And that tells you that the contacts have been made, and that the contacts work properly. Shawn Tierney (Host): And now I'm pushing the one below it. So that shows me that everything's working. The contacts are working, and we're good to go. Mark Berger (Siemens): Yep, everything's done. We've done the loop checks. We know this is ready to be plugged into the PLC and handed off to whoever is going to be programming the PLC, which means we'll go to the next step in TIA Portal. Shawn Tierney (Host): Yeah, let me switch back to you, and we're seeing your TIA Portal now. Mark Berger (Siemens): Awesome. Okay, so I've got the PLC, and I've plugged the panel in, either through an Ethernet switch or directly into the PLC. Now, I've just built up that panel; I haven't done anything with it yet for an IP address, and it is a TCP/IP protocol, so we need to assign an IP address, but it's on PROFINET.
And then I'm gonna come here to Online Access, because I want to see that I'm talking to it out there. So I do Update Accessible Devices, and it reaches out via the Ethernet port on my laptop. There's our G2 PLC and its IP address; that's this guy right here. And then I have something out there called an accessible device, and this is its MAC address. I just have those two items on the network, but you could have multiples; as you know, with TIA Portal we can put an entire machine in one project. So I come here, drop that down, and go to Online & Diagnostics. I go online with it, but I don't really have a lot here yet to tell me what's going on. So I come here and say Assign IP Address. I key in 192.168.0.10, and then our usual 255.255.255.0, and I say Assign IP Address. Give it a second; it goes out and tells it, "Okay, you're it." Now I want to see if it took, and as you look right there, it took. I'm kind of particular, so I do it again just to verify. Yep, everything's done; it's got an IP address. Now I'm going to come up to my project and switch this to the Network view. Here's my PLC. I'll highlight my project. Now, there are two ways I can go about this, and Shawn, I'm sure you've learned that Siemens lets you do things multiple ways. I could come into my field devices, into my commanding and interface modules, and start building my push button station by hand. But we're going to be a little ooh-and-ah today. I'm going to highlight the project, go to Online, and come down here to Hardware Detection and do "PROFINET devices from network". That brings up a screen that says: go out and search PROFINET/Industrial Ethernet, out via the NIC card on my laptop, and start the search. Shawn Tierney (Host): For those of you who watched my previous episodes doing the ET 200 I/O, this is exactly the same process we used for that. Mark Berger (Siemens): Yep. And it found something out there. I know I gave it the IP address, but it doesn't have a PROFINET name yet. That's okay; I've got the IP address, and we'll worry about the PROFINET name. So we'll check-mark this, and this could be multiple items. Shawn Tierney (Host): Mhmm. Mark Berger (Siemens): Okay, so now: Add Device. Shawn Tierney (Host): And this is the sweet part. Mark Berger (Siemens): And right here, it's done. It went out, interrogated the interface module, and said, "Okay, are you there?" "Yep, I'm here. Here's my IP address." And it also shared everything with it. Let me come in here and double-click on it now. Shawn Tierney (Host): The real time saver. Yep. Mark Berger (Siemens): Yep. And now here are all the push buttons in your lineup. Let me zoom out; it's at 200%, let's go to 100. It already interrogated the interface module and all the terminal modules to tell me what's in my demo. And again, as you asked in your question, how do I know which one's the next one? You just follow the ribbon cable, and it brings you along, so forth and so on. So that's done; we're good. I'm going to go back to my Network view, and I'm going to say: I want you to communicate via PROFINET to there. And that's done.
And then it also shows you the PLC you're going to use, because if we have a big project, we may have four or five of these stations, and you want to know which PLC is the primary PLC for each one. And we've done that. I'm going to quickly do a compile. Next, I'm going to click here. Now, I could just do a download and let the PROFINET name, which is here, go into it. But I'm going to right-click and say Assign Device Name, then Update List. It interrogates the network; takes a second. No device name assigned, no PROFINET name. So this is how we do that name determinism with PROFINET. I highlight it, say Assign Name, and it's done. Close. So now it has a PROFINET name and an IP address. Now I'm able to go in here, hit Download, and load. And we're going to stop, because we are adding hardware, so we put the CPU in stop and hit Finish. Now, I always make sure I'm starting the CPU back up, and then hit Finish. Then I go online, go over here to show the Network view, and I've got green balls and green check marks all over the board, so I'm excited. This works out; everything's done. But now, what about the I/O? Your programmer is already talking to it, but now I need to know what the inputs and outputs are. So go back offline, double-click here, and I'll quickly look at a couple of things. The interface module's I/O tags are in a different spot than the terminal modules', so just a little note. It's right here: if you double-click on the integrated LED, you click here, then go to Properties and IO Tags, and there it lists all of the inputs and outputs. But if I do a terminal module, I click on it, and under General it's right here in the I/O addressing; that's where the bytes start. Then I come here to Tags, and here's the listing. The program has automatically allocated the byte and the bit for each of these guys. So if I click there, there, and there, onward and upward. Now notice the byte: if I click on position four, the byte is three, one less, because it's base zero, versus here, where it's five. So if you look in here, all of that starts at I4.0. Okay, so that's there. Now I'm going to come here to the selector switch; I've called it SS1, and that's input I2.0. Then I click here, and I'm going to call this one Green Push Button. Notice there are two inputs, because I have one contact here and one contact there, at 3.0 and 3.1. Then I go over to the PLC, and it's updated my PLC tag table; there you go, it's in there. So I'm going to grab that guy (Portal pushes you to use two monitors), come over to the main OB, grab a normally open contact, drag it on, drop it, and assign the selector switch to it, then grab the green LED and drop that right there, close that out, and compile. And everybody's happy. I download, say yes, okay, and then go online. Alright, so it's waiting for me to flip that switch, and there you go. And if you want to see my screen there, Shawn, the green light has turned on. Shawn Tierney (Host): Yeah, let me switch over. Okay.
Let me bring yours up. Alright. And could you switch it back off now? Mark Berger (Siemens): Yeah, no problem. Yep, so there we go: we switch it off, we switch it on. Now I want to show you something kind of cool. I turn that off, I come back here, and I go offline. Say I have an indicator light that needs to flash to let the operator know there's something here they need to attend to. We used to put in some type of timer, right? Shawn Tierney (Host): Mhmm. Mark Berger (Siemens): So instead of that, I'm going to come back down here to my tab and go to the hardware config. I double-click here, go to Module Parameters, drop this down, and set it at two hertz. Also, just to point out, I can take a normally open contact and a normally closed contact and switch them; you see right here. And I can control the brightness of the LED, if it has an LED, and it's all configured right into the hardware. So once I've done that, I do a quick compile (I always compile and then download). So we download that, hit Load and Finish. Okay, here we go: turn that on, and now it's flashing. Shawn Tierney (Host): That's great. So you have a timer built in; if you need to flash, you don't have to go get a clock bit or create your own timer. Plus, if it's a button, you can change the contacts from normally open to normally closed. That is very cool. Mark Berger (Siemens): Yep. And that is PROFINET push buttons. As I stated, let me quickly pull that up. Remember, you pointed out just a few minutes ago, here is the wiring diagram for that. Here's the back of it with the terminal blocks, and it shows you down here that you just wire in that variable resistor, the potentiometer. You see M, there's the 10 volts, and then the signal comes into A. And that guy is right here. So if you come here, you go to Properties and IO Tags, it comes in on input 64, and I can call that tag Pot. And now you have a potentiometer that you can use as a speed reference for your VFD. Engineering efficiency: we reduced the wiring, we don't need all the I/O cards that were required, and we have the diagnostics. Let me emphasize that each of these here, the names, you can change those if you'd like, because this is your diagnostic string. So if something goes wrong here, it would come up and say "Commanding". You double-click here, go to General, and it says "Commanding_LED module 2", or you can call it "Start Conveyor PB", and that would change this; see, this changed it. That will be your diagnostic string to let you know if that button got damaged or is not working properly. Shawn Tierney (Host): You know, I wanted to ask you too. Let's say I needed two potentiometers on the front of the enclosure. Could I put another interface module in the system, even if it didn't have any push buttons or pilot lights on it, just to grab some more I/O? Mark Berger (Siemens): Yep. Yes, sir. I have a customer who uses these as small little I/O blocks. Shawn Tierney (Host): Yeah, I mean, if you just needed a second pot, it might make sense to buy another interface module and bring it in rather than buying an analog card, right? Assuming the resolution and everything was correct for your application. But that's very cool.
You know, it really goes in line with all the videos we've done recently looking at ET 200 I/O, all the different flavors and types. And when you walk through here, I'm especially thankful that it reads in all the push buttons and their positions and pilot lights. Because if you have this on your desk and you're doing your first project, you can save a lot of dragging and dropping and searching through the hardware catalog just by reading it in, just like we can read in a rack of ET 200SP I/O. Mark Berger (Siemens): Yep. Engineering efficiency: reducing wiring, reducing time in front of the PC to get things up and running. You saw how quickly it went with just a simple push button and a simple start; we turned that on, and off to the races we went. Shawn Tierney (Host): Well, Mark, I really want to thank you. Was there anything else we wanted to cover before we close out the show? Mark Berger (Siemens): Nope, that's just about it. I think we've given your viewers a little bit to think about. I appreciate the time, and I really appreciate you allowing me to show this. I think this is a really efficient engineering way of going about using our push buttons, getting everybody's projects done in a timely manner, and having cost savings with it. Shawn Tierney (Host): Well, I want to thank you for taking the time out of your busy day, not only to put together a little demo like you have for me to use here in the school, but also to come on and show our audience how to use this. And I want to thank our audience. This was actually prompted by one of you out there calling in or writing in, I think it was on YouTube somewhere, saying, "Hey, could you cover the PROFINET push buttons from Siemens? I didn't even know they had them." So thanks to the viewers out there for your feedback that helps guide me on what you want to see. And Mark, this would not be possible if it wasn't for your expertise. Thank you for coming back on the show. I really appreciate it. Mark Berger (Siemens): Thank you, Shawn. All the best. Thank you. Shawn Tierney (Host): I hope you enjoyed that episode. I want to thank Mark for taking time out of his busy schedule to put together that demo and presentation for us and really bring us up to speed on SIRIUS ACT. And I want to thank the user out there who put a comment on one of my previous videos that said, "Hey, did you know Siemens has this?" Because I wouldn't have known that unless you said it. So thank you to all of you. I try to read the comments every day, or at least every two days, so I appreciate you all wherever you are, whether you're on YouTube, The Automation Blog, Spotify, iTunes, Google Podcasts, or wherever you're listening to this; I just want to thank you for tuning in. Now, with next week being Thanksgiving, we'll have a pause in the automation show, then we have some more shows in December, and we're already filming episodes for next year, so I'm looking forward to releasing all those for you. And if you didn't know, I also do another podcast called The History of Automation. Right now, it's only available on video platforms, so YouTube, LinkedIn, and The Automation Blog. Hopefully, someday we'll also do it in audio. We're meeting with some of the real legends in automation, people who worked on the original PLCs and original HMIs, up through more modern-day systems.
So it's just been a blast having these folks on to talk about the history of automation. And if you need something to listen to during Thanksgiving week, or maybe during the holidays, check out The History of Automation. Again, right now it's only available on YouTube, The Automation Blog, and LinkedIn, but I think you'll enjoy it. And since I won't be back next week, I want to wish you a very happy Thanksgiving. Thank you, as always, for tuning in and listening, and I also want to wish you all good health and happiness. Until next time, my friends, peace. ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.
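A side note on the analog setpoint Mark demonstrated: the interface module hands the potentiometer position to the PLC as a raw 0 to 1,000 count, so mapping it to engineering units is a single linear scaling. Here is a minimal sketch in Python (in TIA Portal you would do the same arithmetic in ladder or SCL; the 500 PSI and 100 feet-per-second ranges are just the illustrative numbers from the conversation):

```python
def scale(raw: int, raw_max: int = 1000,
          eng_min: float = 0.0, eng_max: float = 100.0) -> float:
    """Map a 0..raw_max analog count onto an engineering range."""
    return eng_min + (raw / raw_max) * (eng_max - eng_min)

# 0-1000 counts read as 0-500 PSI, per Mark's example
print(scale(250, eng_max=500.0))   # -> 125.0 PSI

# 0-1000 counts read as 0-100 ft/s conveyor speed reference
print(scale(730))                  # -> 73.0 ft/s
```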
In today's Cloud Wars Minute, I delve into OpenAI's $38 billion partnership with AWS, giving Amazon a major role in powering and scaling OpenAI's AI workloads.

Highlights
0:03 — OpenAI and AWS have announced a multi-year strategic partnership valued at $38 billion for AWS. This deal will enable AWS to provide the infrastructure necessary to support the operation and scaling of OpenAI's AI workloads. OpenAI is currently utilising computing resources through AWS, which include hundreds of thousands of NVIDIA GPUs and the capability to scale up to tens of millions of CPUs.
01:02 — The infrastructure rollout for OpenAI includes architecture optimised for maximum AI processing efficiency and performance, with clusters designed to support a variety of workloads such as inference for ChatGPT and model training. This latest deal is yet another staggering example of the demand for AI services — a demand that companies like OpenAI must invest billions in to keep up with the pace.
01:55 — OpenAI recently signed several significant deals with technology partners, including a remarkable $300 billion agreement with Oracle. While that figure might seem outrageous, it puts the $38 billion into a more relatable context. One thing is clear: wherever you stand in the AI revolution, whatever your role is — just make sure that you have one, because this unprecedented growth is touching every corner of the business world.

Visit Cloud Wars for more.
** AWS re:Invent 2025 Dec 1-5, Las Vegas - Register Here! **

Learn how Anyscale's Ray platform enables companies like Instacart to supercharge their model training while Amazon saves heavily by shifting to Ray's multimodal capabilities.

Topics Include:
- Ray originated at UC Berkeley when PhD students spent more time building clusters than ML models
- Anyscale now launches 1 million clusters monthly with contributions from OpenAI, Uber, Google, Coinbase
- Instacart achieved 10-100x increase in model training data using Ray's scaling capabilities
- ML evolved from single-node Pandas/NumPy to distributed Spark, now Ray for multimodal data
- Ray Core transforms simple Python functions into distributed tasks across massive compute clusters (see the sketch after this list)
- Higher-level Ray libraries simplify data processing, model training, hyperparameter tuning, and model serving
- Anyscale platform adds production features: auto-restart, logging, observability, and zone-aware scheduling
- Unlike Spark's CPU-only approach, Ray handles both CPUs and GPUs for multimodal workloads
- Ray enables LLM post-training and fine-tuning using reinforcement learning on enterprise data
- Multi-agent systems can scale automatically with Ray Serve handling thousands of requests per second
- Anyscale leverages AWS infrastructure while keeping customer data within their own VPCs
- Ray supports EC2, EKS, and HyperPod with features like fractional GPU usage and auto-scaling

Participants:
Sharath Cholleti – Member of Technical Staff, Anyscale

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
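For readers who have not used Ray, here is a minimal sketch of the Ray Core model referenced in the list above. The API calls are standard Ray; the functions themselves are toy examples:

```python
import ray

ray.init()  # start a local Ray runtime, or connect to an existing cluster

# @ray.remote turns an ordinary Python function into a task that Ray can
# schedule on any worker in the cluster.
@ray.remote
def square(x: int) -> int:
    return x * x

# .remote() returns object refs (futures) immediately; the tasks run in parallel.
refs = [square.remote(i) for i in range(8)]
print(ray.get(refs))  # [0, 1, 4, 9, 16, 25, 36, 49]

# GPU work is declared the same way: ask for the resource and Ray places the
# task on a node that has it (fractional GPUs are supported, as noted above).
@ray.remote(num_gpus=0.5)
def embed_batch(batch):
    ...  # run a model forward pass here
```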
While all the focus is on the massive AI data center deals with OpenAI and Oracle, which could generate over $100 billion in revenue over the course of 3 years, the near-term profit story is all about CPUs, PC/laptop stabilization, and EPYC. The shift to maximizing revenue growth—even with a potential 10% share dilution from the OpenAI equity award—is a big change for AMD investors. Chip Stock Investor breaks down the numbers, the massive GPU/MI400 series deployment risk, the surprising profit centers in Q3, and an updated Reverse DCF valuation.

Join us on Discord with Semiconductor Insider, sign up on our website: www.chipstockinvestor.com/membership
Supercharge your analysis with AI! Get 15% off your membership with our special link here: https://fiscal.ai/csi/
Sign Up For Our Newsletter: https://mailchi.mp/b1228c12f284/sign-up-landing-page-short-form

Chapters:
0:00:00 Intro: The AMD AI Pivot and Q3 Earnings
0:00:58 Data Center Segment: CPU is the Biggest Profit Driver
0:01:54 CEO Lisa Su: EPYC CPU Revenue Triples YoY in Q3
0:03:36 The Big Shift: Share Dilution for OpenAI & MI400 GPU
0:05:32 Client & Gaming Segments Fixing Profit Margins
0:06:21 The MI400 Inflection Point and $100 Billion Revenue Potential
0:08:26 Updated Reverse DCF Valuation for AMD Stock
0:10:48 Our Final Take

If you found this video useful, please make sure to like and subscribe!
*********************************************************
Affiliate links are sprinkled in throughout this video. If something catches your eye and you decide to buy it, we might earn a little coffee money. Thanks for helping us (Kasey) fuel our caffeine addiction!
Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal.
#amd #amdstock #semiconductors #chips #investing #stocks #finance #financeeducation #silicon #artificialintelligence #ai #chipstocks #stockmarket #chipstockinvestor #fablesschipdesign #chipmanufacturing #semiconductormanufacturing #semiconductorstocks
Nick and Kasey own shares of AMD
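For context on the valuation segment: a reverse DCF starts from the market price and solves for the growth rate the market is implying, rather than forecasting growth to derive a price. A minimal sketch, with all inputs hypothetical rather than the video's actual figures:

```python
def implied_fcf_growth(market_cap: float, fcf: float, years: int = 10,
                       discount: float = 0.10, terminal_mult: float = 20) -> float:
    """Solve (by bisection) for the constant FCF growth rate that makes a
    simple DCF equal today's market cap."""
    def dcf(g: float) -> float:
        value, f = 0.0, fcf
        for t in range(1, years + 1):
            f *= 1 + g
            value += f / (1 + discount) ** t          # discounted yearly FCF
        value += f * terminal_mult / (1 + discount) ** years  # terminal value
        return value

    lo, hi = 0.0, 1.0
    for _ in range(60):  # dcf(g) is increasing in g, so bisection converges
        mid = (lo + hi) / 2
        if dcf(mid) < market_cap:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical inputs, for illustration only.
print(f"{implied_fcf_growth(market_cap=400e9, fcf=5e9):.1%}")  # ~23% implied growth
```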
In this episode of Cybersecurity Today, host Jim Love dives into several shocking security lapses and emerging threats. Highlights include ransomware negotiators at Digital Mint accused of being behind attacks, a new vulnerability that exploits Windows' built-in AI stack, and a misuse of OpenAI's API for command and control in malware operations. Additionally, AMD confirms a flaw in its Zen 5 CPUs that could lead to predictable encryption keys, and the Louvre faces scrutiny after a major theft reveals poor password practices and maintenance failures. The episode underscores the importance of basic security measures like strong passwords and regular audits, despite the advanced technological systems in place.

00:00 Introduction and Sponsor Message
00:48 Ransomware Negotiators Turned Hackers
02:08 AI Stack Vulnerabilities in Windows
04:04 Backdoor Exploits OpenAI's API
05:24 AMD's Encryption Key Flaw
06:59 Louvre Heist and Security Lapses
08:24 Conclusion and Call to Action
The $5 trillion milestone signals the dawn of a new technological epoch. Nvidia's GPUs have replaced CPUs as the foundation of the future. Everything about computing is being rewritten.

Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
Imagine if your computer could explore a landscape of possibilities all at once, using the same rules that make electrons behave in surprising ways. That's the mental pivot Farai, a quantum physicist and teacher, helps us make as we break down what quantum computing really is and where it actually wins. We trade hype for clarity, showing how superposition, entanglement, and interference become practical tools when classical methods hit walls.

We walk through the real stakes: modeling complex materials to build safer batteries and corrosion-resistant coatings, accelerating drug discovery by simulating chemistry where properties emerge, and tackling massive optimization problems that govern airport gates, delivery routes, and supply chains. Farai explains why quantum machines are not replacements for CPUs or GPUs but new teammates in a hybrid stack, each part doing what it does best. The goal is targeted advantage, not universal speedups, and the payoff arrives when the search space explodes beyond classical reach.

Along the way, we zoom out to nature as our design mentor. Bacteria that fix nitrogen more efficiently than factories, plants that capture sunlight better than our best solar cells, human brains that run powerful cognition on twenty watts—these examples aren't trivia; they are roadmaps for engineering. By learning from natural intelligence and combining it with quantum algorithms, we can cut energy waste, shorten R&D cycles, and unlock better outcomes across industry and public services. Farai also shares his work leading the Africa Quantum Consortium, proving that the next wave of innovation is global, collaborative, and grounded in education.

If you care about the future of computing, climate tech, logistics, and medicine, this conversation will sharpen your lens. Listen, subscribe, and share with someone who still thinks quantum is just sci‑fi. Then tell us: which real-world problem would you optimize first?

Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com. And don't miss Grant McGaugh's new book, First Light — a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world. - https://5starbdm.com/brave-masterclass/ See you next time on Follow The Brand!
Episode 87: We revisit some discussion from last week, as we've found more dodgy stuff Microsoft has done, before chatting about the current situation Intel is in with CPUs. Intel today isn't anywhere near as competitive against AMD as AMD was with Ryzen back when Intel was dominant. (Note: This podcast was recorded before the recent AMD RDNA 2 driver decision; we'll discuss that in the future)

CHAPTERS
00:00 - Intro
00:29 - Microsoft Does Dodgy Stuff Again
11:34 - Intel CPUs when AMD is Dominant vs AMD CPUs when Intel is Dominant
21:35 - The Discounts Aren't Enough
34:53 - Platform Longevity is Crucial
41:33 - Platform Support is Always Better
1:04:34 - Updates From Our Boring Lives

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Hosted on Acast. See acast.com/privacy for more information.
Sapphire is back! With mini PCs! Even if it may seem otherwise to many, Sapphire has made mini PCs before. All the better, then, that the new Edge AI has turned out really well, as Fabian and Jan have to concede when looking at Volker's review. Other topics in the podcast are the ongoing "chaos" around the future of the Xbox and the uncertainty Microsoft is currently leaving customers in, new X3D CPUs in the Ryzen 7000 and Ryzen 9000 series, and the current trend in graphics card prices. Finally, we look at your RAM configurations in the face of the "historic memory shortage" - the latest Sunday poll has delivered answers. Enjoy listening!
Timestamps:
0:00 it's almost halloween i guess
0:19 AMD renames, rebadges older laptop CPUs
1:17 RedTiger-based Discord account hack
2:20 Australia sues Microsoft
3:31 War Thunder!
4:24 QUICK BITS INTRO
4:34 Fujitsu laptops with Blu-ray drives
5:21 Cooler Master walks back repair instructions
6:12 NHTSA investigates Tesla's Mad Max mode
7:03 Microsoft files $4.7 billion in OpenAI losses under 'other'
8:03 Mushrooms as memristors!

NEWS SOURCES: https://lmg.gg/mXOat

Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, Ben Bajarin and Jay Goldberg discuss Intel's recent earnings report, highlighting a sense of stability in the market compared to previous downturns. They explore the demand for CPUs, particularly in the enterprise sector, and the implications of upcoming product launches. The conversation shifts to Intel's foundry developments, where they express optimism about new manufacturing processes and customer engagement. They also analyze the competitive landscape of AI compute infrastructure, particularly focusing on Amazon's challenges with its Trainium chips and the implications of Anthropic's partnership with Google. Finally, they delve into the future of AI agents, discussing the current limitations and potential advancements needed for these technologies to become viable.
ARM Ascends, Oil Drifts, Queens Endures

I open on macro static and shutdown fog, a strange steadiness where the market beat goes on. The Beige Book whispers fractures: three Fed districts up, five flat, four softening, a recalibration more than a roar. The feature turns to ARM, where the data center bottleneck is power, not code. ARM sells the blueprint, cutting CPU energy use perhaps by half, and the live question is simple: can it win 50 percent of data center CPUs? Then energy's riddle: oil sits near $56, where it was in 2005, with about 1.5 trillion barrels of proven reserves setting the rough scale. I close with Queens County, a Tony Soprano thrift that became my acid test and first great trade; the name fades, but the fuse remains.

If this hit the mark, tap 5
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, we're joined by Kunle Olukotun, professor of electrical engineering and computer science at Stanford University and co-founder and chief technologist at SambaNova Systems, to discuss reconfigurable dataflow architectures for AI inference. Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs. We explore how this architecture is well-suited for LLM inference, reducing memory bandwidth bottlenecks and improving performance. Kunle reviews how this system also enables efficient multi-model serving and agentic workflows through its large, tiered memory and fast model-switching capabilities. Finally, we discuss his research into future dynamic reconfigurable architectures, and the use of AI agents to build compilers for new hardware. The complete show notes for this episode can be found at https://twimlai.com/go/751.
- NVIDIA revealed its DGX Spark AI computer earlier this year, and today it is officially on sale for $3,999. Though relatively tiny, it packs the company's entire AI platform, including GPUs and CPUs, along with NVIDIA's AI software stack, "into a system small enough for a lab or an office."
- Ofcom has slapped 4chan with a £20,000 fine, the equivalent of $26,700 here in the States, for failing to comply with the internet and telecommunications regulator's request for information under the UK's Online Safety Act of 2023.
- Slack's new Slackbot is basically an AI chatbot like all the rest, but this one has been purpose-built to help with common work tasks. Folks can use natural language to converse with the bot, and it can do stuff like whip up project plans, flag daily priorities, and analyze reports. It can also help people find information when they only remember a few scant details. The company says it will "give every employee AI superpowers" so they can "drive productivity at AI speed."

Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode we explore how GPUs, created to paint pixels in games like Quake or Doom, became the heart of modern artificial intelligence. We review Nvidia's role in this revolution, the difference between CPUs and GPUs (explained in the plainest of terms), and how massive parallelism changed computing forever. We also analyze how AI will transform dynamic pricing, letting every product or service carry a different cost depending on the person or the moment. And we close with a surprising breakthrough: Chinese scientists have managed to reverse aging in primates, opening the door to possible therapies for humans. Learn more about your ad choices. Visit megaphone.fm/adchoices
Bad Crypto, Blood Thirsty Zombie CPUs, Y2K38, Park Mobile, Palo Alto, Redis, Red Hat, Deloitte, Aaran Leyland, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-518
Curious about what really goes on inside a cloud data center? In this episode, Lois Houston and Nikita Abraham chat with Principal OCI Instructor Orlando Gentil about how cloud data centers are transforming the way organizations manage technology. They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever.

Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

-------------------------------------

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.
Nikita: Hi everyone! Today, we're covering the fundamentals you need to be successful in a cloud environment. If you're new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won't want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us.

01:01 Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is?
Orlando: A traditional data center is a physical facility that houses an organization's mission-critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site.

01:32 Nikita: So why would anyone want to use a cloud data center?
Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining, along with the underlying infrastructure like physical security, HVAC, backup power, and communication links. In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In a traditional data center, scaling resources up and down can be a slow and complex process. In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth. The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This fundamental shift in how IT infrastructure is managed and consumed is precisely what we mean by moving to the cloud.

02:39 Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today?
Orlando: Moving to the cloud represents the strategic transition from managing your own on-premise hardware and software to leveraging internet-based computing services provided by a third party.
This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions.

03:17 Nikita: Orlando, what's the "brain" behind all this technology?
Orlando: A CPU, or Central Processing Unit, is the primary component that performs most of the processing inside a computer or server. It performs calculations, handling the complex mathematics and logic that drive all applications and software. It processes instructions, running the tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center. It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads, from basic server operations to powering cutting-edge AI applications.

04:10 Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it's important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando?
Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed. A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power. Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency.

05:27 Lois: Ok. And how do CPUs process commands?
Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures, which dictate how they process commands. CPU architectures are primarily categorized into two designs: Complex Instruction Set Computer, or CISC, and Reduced Instruction Set Computer, or RISC. CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task but often leads to higher power consumption. These are commonly found in traditional Intel and AMD CPUs. In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs.

06:34 Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification—now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details.

07:09 Nikita: Welcome back! We were discussing CISC and RISC processors.
So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel?
Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare-metal hypervisors and large databases, where complex instructions can be efficiently processed; high-performance computing, which includes demanding simulations, intricate analysis, and many traditional machine learning systems; and enterprise software suites and business applications like ERP, CRM, and other complex enterprise systems that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads, such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing; mobile device management and edge computing, including cell phones and IoT devices, where power efficiency and compact design are critical; and cost-optimized cloud hosting supporting distributed workloads, where the cumulative energy savings and simpler design lead to more economical operations. The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads.

08:54 Lois: We've spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it's important. What can you tell us about GPUs, Orlando?
Orlando: A GPU, or Graphics Processing Unit, is distinct from a CPU. While the CPU is a generalist, excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist. It is designed specifically for parallel, compute-heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics and scientific simulations, and especially for areas like machine learning and artificial intelligence, where massive parallel computation is required. In the modern data center, GPUs are increasingly vital for accelerating these specialized, data-intensive workloads.

09:58 Nikita: Besides the CPU and GPU, there's another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this?
Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. That process can be relatively slow. RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. If the data is there, it can be accessed almost instantaneously, significantly speeding up operations. This rapid access to frequently used data and program instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance. While RAM provides quick access to active data, it's volatile, meaning data is lost when power is off; that's unlike persistent data storage, which holds the information that needs to remain available even after a system shuts down.

11:14 Nikita: Let's now talk about operating systems in cloud data centers and how they help everything run smoothly. Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices?
Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function. It performs several critical tasks: managing memory, deciding which programs get access to memory and when; managing processes, allocating CPU time to different tasks and applications; managing files, organizing data on storage devices; and handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps most importantly, it provides the user interface that allows us to interact with the computer.

12:19 Lois: Can you give us a few examples of common operating systems?
Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and macOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments.

12:41 Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers?
Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use. On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system, known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing models can differ significantly, which is a crucial factor to consider when deploying them in a data center environment.

13:43 Nikita: In what way do the licensing models differ?
Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, enterprise distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, community distributions are typically free of charge, with some providers offering basic community-driven support. Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers, or purchased directly for on-premise deployments. It's also worth noting that some cloud providers offer a bring-your-own-license, or BYOL, program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies.

14:46 Nikita: Beyond choosing an operating system, are there any other important aspects of data center management?
Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers. Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or Secure Shell, and RDP, or Remote Desktop Protocol. Secure Shell is widely used for secure command-line access to Linux servers. It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location.
The remote desktop protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface. 15:54 Lois: Thank you so much, Orlando, for shedding light on this topic. Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we'll take a close look at how data is stored and managed. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 16:16 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
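As a companion to the socket/core/thread discussion in the episode above, here is a minimal sketch that reads the topology of the machine it runs on. It assumes a Linux host, since it parses /proc/cpuinfo; the "physical id" and "core id" fields are typical of x86 servers but are not guaranteed on every platform.

```python
# Count sockets, physical cores, and logical processors (threads) on Linux
# by parsing /proc/cpuinfo. Illustrative sketch; assumes x86-style fields.
import os

def cpu_topology(path="/proc/cpuinfo"):
    physical_ids, core_ids = set(), set()
    block = {}

    def commit(b):
        if "physical id" in b:
            physical_ids.add(b["physical id"])
            core_ids.add((b["physical id"], b.get("core id")))

    with open(path) as f:
        for line in f:
            if not line.strip():
                commit(block)  # blank line ends one logical-processor block
                block = {}
            elif ":" in line:
                key, _, value = line.partition(":")
                block[key.strip()] = value.strip()
    commit(block)  # handle a trailing block with no final blank line
    return len(physical_ids), len(core_ids), os.cpu_count()

sockets, cores, threads = cpu_topology()
print(f"sockets={sockets} physical_cores={cores} logical_processors={threads}")
```

With hyperthreading enabled, logical_processors will typically be twice physical_cores, which is exactly the "one core appears as two logical processors" behavior Orlando describes.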
Matching up the audio this week for a change of pace! That Snapdragon X2 Elite Extreme sometimes compares favorably, there's a new Kindle Scribe, and you will never guess who's coming to the Intel investment party. Also, Microsoft extends security updates for Windows 10 if you live in the right places, and they're also looking into micro-channel cooling? All this and so much more!

00:00 Intro
00:44 Patreon
02:33 Food with Josh
05:58 Snapdragon X2 Elite Extreme benchmarks
11:32 Qualcomm wins final battle with Arm over Oryon
14:23 Amazon Kindle Scribe lineup now bigger, offers first color model
18:10 LG has world's first 6K TB5 display
21:54 Apple might invest in Intel?
26:48 Intel 13th and 14th Gen price hike
32:03 Microsoft gives in on Windows 10 at the 11th hour - sort of
36:42 Microsoft also exploring tiny channels on CPUs for microfluidic cooling
42:16 Podcast sponsor Zapier
43:36 (In)Security Corner
53:52 Gaming Quick Hits
1:06:25 Picks of the Week
1:23:57 Outro

★ Support this podcast on Patreon ★
News and Updates:

Apple iOS 26 delivers one of the biggest iPhone upgrades in years. The new Liquid Glass interface adds a translucent, holographic look, while Spatial Scenes uses AI to turn photos into dynamic 3D wallpapers. Major app redesigns include a cleaner Camera for one-handed use, a simplified Photos layout, customizable Messages with polls and chat backgrounds, and an upgraded Lock Screen. New Battery Settings now estimate charging times and debut Adaptive Power Mode (on iPhone 15 Pro+). But the flashy Liquid Glass design has drawn complaints of eye strain, dizziness, and legibility issues, with Apple offering accessibility tweaks as workarounds.

Intel + Nvidia struck a $5B partnership that could reshape PCs. Nvidia bought a 4–5% stake in Intel, and the two are co-developing hybrid CPUs with Nvidia GPU chiplets connected via NVLink. These SoCs could boost AI PCs, power slimmer gaming laptops, and bring workstation-level performance to mini desktops — potentially blurring the line between integrated and discrete graphics.

Nvidia + OpenAI announced a massive $100B investment deal. Nvidia will fund the buildout of 10 gigawatts of AI data centers using its upcoming Vera Rubin chips, more than doubling today's top AI hardware. The arrangement lets Nvidia recycle investment into chip sales while giving OpenAI infrastructure to push toward "superintelligence." The deal lifted Nvidia's market cap to nearly $4.5T, the largest in the world.

SpaceX Starlink filed to launch up to 15,000 new satellites to supercharge its direct-to-cell service. The move follows a $17B spectrum deal with EchoStar and will boost capacity 20-fold, enabling LTE-like performance for calls and messaging in dead zones. T-Mobile remains the US launch partner, but CEO Elon Musk hinted SpaceX could eventually sell mobile service directly, competing with carriers.

Microsoft is injecting Copilot into all Microsoft 365 accounts, unless you manually use the Customization feature to stop the auto-install.
When we think about what separates winning traders from those who struggle, we usually picture strategies, indicators, or a bit of insider know-how. But what if the biggest edge has been sitting on your desk all along? In this episode, I sit down with Eddie Z, also known as Russ Hazelcorn, the founder of EZ Trading Computers and EZBreakouts. With more than 37 years of experience as a trader, stockbroker, technologist, and educator, Eddie has built his career around one mission: helping traders cut through noise, avoid expensive mistakes, and get the tools they need to stay competitive in a fast-moving market.

Eddie breaks down the specs that actually matter when building a trading setup, from RAM to CPUs to data feeds, and exposes which so-called "upgrades" are nothing more than overpriced fluff. We also dig into the rise of AI-powered trading platforms and bots, and what traders can do today to prepare their machines for the next wave. As Eddie points out, a lagging system or a missed feed isn't just an inconvenience—it can be the difference between a profitable trade and a costly loss.

Beyond the hardware, we explore the broader picture. Rising tariffs and global supply chain disruptions are already reshaping the way traders access technology, and Eddie shares practical steps to avoid being caught short. He also explains why many experienced traders overlook their machines as a "secret weapon" and how quick, targeted fixes can transform reliability and performance in under an hour.

This conversation goes deeper than specs and gadgets. Eddie opens up about the philosophy behind the EZ-Factor, his unique approach that blends decades of Wall Street expertise with cutting-edge technology to simplify trading and help people succeed. We talk about his ventures, including EZ Trading Computers, trusted by over 12,000 traders, and EZBreakouts, which delivers actionable daily and weekly picks backed by years of experience.

For traders looking to level up—whether you're just starting out or managing multiple screens in a professional setting—this episode is packed with insights that can help you sharpen your edge. Eddie's perspective is clear: the right machine, the right mindset, and the right knowledge can make trading not only more profitable, but, as he likes to put it, as "EZ" as possible.

*********

Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineer with Careerist's Bootcamp https://crst.co/OGCLA
Rate cut - rates up? Diet Stocks - losing weight. Good news/bad news - all good for markets. Bessent for Fed Chair and Treasury Secretary?
PLUS we are now on Spotify and Amazon Music/Podcasts!
Click HERE for Show Notes and Links
DHUnplugged is now streaming live - with listener chat. Click on link on the right sidebar.
Love the Show? Then how about a Donation?
Follow John C. Dvorak on Twitter
Follow Andrew Horowitz on Twitter

Warm-Up
- BRAND New server - all provisioned - Much faster DH Site
- Need a new CTP stock!
- New Clear Stocks! - To the Sky
- Money Tree Market
- Tik Tok news

Markets
- Rate cut - rates up
- Diet Stocks - losing weight
- Good news/bad news - all good for markets
- StubHub IPO Update

SELL Rosh Hashanah - Buy Yom Kippur?

Vanguard Issues? Got a call this morning.. Gent in NY...

NEW CLEAR - On Fire!
- Have you seen the returns on some of these stocks? YTD:
- URA (Uranium ETF) up 75%
- SMR (NuScale) up 164%
- OKLO (Oklo) up 518%
- CCJ (Cameco) up 65%

TikTok Nonsense
- President Donald Trump said in an interview that aired Sunday that conservative media baron Rupert Murdoch and his son Lachlan are likely to be involved in the proposal to save TikTok in the United States.
- Trump also said that Oracle executive chairman Larry Ellison and Dell Technologies CEO Michael Dell are also likely to be involved in the TikTok deal.

More TikTok
- White House Press Secretary Karoline Leavitt says TikTok's algorithm will be secured, retrained, and operated in the U.S. outside of Bytedance's control; Oracle (ORCL) will serve as TikTok's security provider; President Trump will sign TikTok deal later this week
- What does that mean, and will it be the same TikTok?
- Who is doing the retraining??????? SO MANY QUESTIONS

MEME ALERT!
- Eric Jackson, a hedge fund manager who partly contributed to the trading explosion in Opendoor, unveiled his new pick Monday — Better Home & Finance Holding Co.
- Jackson said his firm holds a position in Better Home but didn't disclose its size.
- Shares of Better Home soared 46.6% on Monday after Jackson touted the stock on X. At one point during the session, the stock more than doubled in price.
- The New York-based mortgage lender jumped more than 36% last week.

Intel
- INTC getting even more money. Now, NVDA pouring in $5B
- Nvidia and Intel announced a partnership to jointly develop multiple generations of custom data center and PC products. Intel will manufacture new x86 CPUs customized for Nvidia's AI infrastructure, and also build system-on-chips (SoCs) for PCs that integrate Nvidia's RTX GPU chiplets.
- Both the US Government and NVDA got BELOW market pricing on their shares.

NVDA $$
- Nvidia is investing in OpenAI. On September 22, 2025, Nvidia announced a strategic partnership with OpenAI, which includes an investment of up to $100 billion.
- The agreement will help deploy at least 10 gigawatts of Nvidia systems, which will include millions of its GPUs. The first phase is scheduled to launch in the second half of 2026, using Nvidia's Vera Rubin platform.

Autism Link
- Shares of Kenvue (KVUE) are trading lower largely due to reports from the White House and HHS suggesting a forthcoming warning linking prenatal use of acetaminophen (Tylenol's active ingredient) to autism risk.
- Investors are concerned that such a warning could lead to regulatory action, changes in labeling requirements, litigation risk, or reduced demand for one of KVUE's key products. It's estimated that Tylenol accounts for approximately 7-9% of KVUE's total revenue.
- The company has strongly denied any scientific basis for the link, but the uncertainty itself is hurting sentiment.
- Finally, this also comes on top of recent weak financial performance: KVUE posted a Q2 revenue decline of 4% and cut its full-year guidance on August 7.
- Lawsuits to follow...

Pfizer
Elizabeth Figura is a Wine developer at CodeWeavers. We discuss how Wine and Proton make it possible to run Windows applications on other operating systems.

Related links:
- WineHQ
- Proton
- Crossover
- Direct3D
- MoltenVK
- XAudio2
- Mesa 3D Graphics Library

Transcript
You can help correct transcripts on GitHub.

Intro

[00:00:00] Jeremy: Today I am talking to Elizabeth Figura. She's a Wine developer at CodeWeavers. And today we're gonna talk about what that is and, uh, all the work that goes into it.

[00:00:09] Elizabeth: Thank you, Jeremy. I'm glad to be here.

What's Wine

[00:00:13] Jeremy: I think the first thing we should talk about is maybe saying what Wine is, because I think a lot of people aren't familiar with the project.

[00:00:20] Elizabeth: So Wine is a translation layer. In fact, I would say Wine is a Windows emulator; that is what the name originally stood for. It re-implements the entire Windows, or you'd say Win32, API, so that programs that make calls into the API end up calling into Wine instead, and that allows Windows programs to run on things that are not Windows: Linux, macOS, other operating systems such as Solaris and BSD. It works not by emulating the CPU, but by re-implementing every API, basically from scratch, and translating them to their equivalent, or writing new code in case there is no, you know, equivalent.

System Calls

[00:01:06] Jeremy: I believe what you're doing is you're emulating system calls. Could you explain what those are and how that relates to the project?

[00:01:15] Elizabeth: Yeah. So "system call" in general can be used to refer to a call into the operating system to execute some functionality that's built into the operating system. Often it's used in the context of talking to the kernel. Windows applications actually tend to talk at a much higher level, because there's so much high-level functionality built into Windows. Compared to other operating systems, we basically end up implementing much higher-level behavior than you would on Linux.

[00:01:49] Jeremy: And can you give some examples of what some of those system calls would be and, I suppose, how they may be higher level than some of the Linux ones?

[00:01:57] Elizabeth: Sure. So of course you have low-level calls like interacting with a file system, you know, create a file and read and write and such. You also have, uh, high-level APIs that interact with a sound driver. There's one I was working on earlier today, called XAudio, where you build this bank of sounds, meant to be played in a game, and then you can position them in various 3D space. And the operating system, in a sense, will take care of all of the math that goes into making that work. That's all running on your computer, and then it'll send that audio data to the sound card once it's transformed it, so it sounds like it's coming from a certain space. A lot of other things: parsing XML is another big one. The space is honestly huge.

[00:02:59] Jeremy: And yeah, I can sort of see how those might be things you might not expect to be done by the operating system. Like you gave the example of 3D audio and XML parsing, and I think XML parsing in particular, you would've thought that that would be something handled by the standard library of whatever language the person was writing their application in.
[00:03:22] Jeremy: So that's interesting that it's built into the OS.

[00:03:25] Elizabeth: Yeah. Well, in a language like C it's not; it isn't even part of the standard library. It's higher level than that. You have specific libraries that are widespread but not codified in a standard. But in Windows, they are part of the operating system. And in fact, there's several different XML parsers in the operating system. Microsoft likes to deprecate old APIs and make new ones that do the same thing very often.

[00:03:53] Jeremy: And something I've heard about Windows is that they're typically very reluctant to break backwards compatibility. So you say they're deprecated, but do they typically keep all of them still in there?

[00:04:04] Elizabeth: It all still works.

[00:04:07] Jeremy: And that's all things that Wine has to implement as well, to make sure that the software works.

[00:04:14] Elizabeth: Yeah. And we also need to implement those things to make old programs work, because there is a lot of demand, at least from people using Wine, for getting some really old programs working, from the early nineties even.

What people run with Wine (Productivity, build systems, servers)

[00:04:36] Jeremy: And that's probably a good thing to talk about: what are the types of software that people are trying to run with Wine, and what operating system are they typically using?

[00:04:46] Elizabeth: Oh, in terms of software, literally all kinds. Any software you can imagine that runs on Windows, people will try to run it on Wine. So we're talking games, office software, productivity software, accounting. People will run build systems on Wine, just building their programs with Visual Studio running on Wine. People will run Wine on servers, for example software-as-a-service kinds of things where you don't even know that it's running on Wine. Really super domain-specific stuff: I've run astronomy software in Wine. Design, computer-assisted design. Even hardware drivers can sometimes work in Wine. There's a bit of a gray area.

How games are different

[00:05:29] Jeremy: Yeah. Um, I think from maybe the general public, or at least from what I've seen, a lot of people's exposure to it is for playing games. Is there something different about games versus all those other types of productivity software and office software that makes supporting them different?

[00:05:53] Elizabeth: Um, there's some things about it that are different. Games of course have gotten a lot of publicity lately, because there's been a huge push, largely from Valve but also some other companies, to get a huge, wide range of games working well under Wine. And that's really panned out; I think we've largely succeeded. We've made huge strides in the past several years, five, ten years, I think. So when you talk about what makes games different, I think one thing games tend to do is they have a very limited set of things they're working with, and they often want to make things run fast, so they're working very close to the metal. They're not gonna use an XML parser, for example. They're just gonna talk as directly to the graphics driver as they can. Right. And probably going to do all their own sound design.
You know, I did talk about that XAudio library, but a lot of games will just talk as directly to the sound driver as Windows lets them. So this is often a blessing, honestly, because it means there's less we have to implement to make them work. When you look at a lot of productivity applications, especially, the other thing that makes some of them harder is that Microsoft makes 'em. They like to make a library for use in this one program, like Microsoft Office, and then say, well, you know, other programs might use this as well. Let's put it in the operating system and expose it and write an API for it and everything. And maybe some other programs use it; mostly it's just Office. But it means that Office relies on a lot of things from the operating system that we all have to reimplement.

[00:07:44] Jeremy: Yeah, that's somewhat counterintuitive, because when you think of games, you think of these really high-performance things that seem really complicated. But it sounds like, from what you're saying, because they use the lower-level primitives, they're actually easier in some ways to support.

[00:08:01] Elizabeth: Yeah, certainly in some ways. They'll do things like re-implement the heap allocator because the built-in heap allocator isn't fast enough for them. That's another good example.

What makes some applications hard to support (Some are hard, can't debug other people's apps)

[00:08:16] Jeremy: You mentioned Microsoft's more modern office suites. I've noticed there's certain applications that aren't supported. Like, for example, I think the modern Adobe Creative Suite. What's the difference with software like that, and does that also apply to the modern office suite, or is that actually supported?

[00:08:39] Elizabeth: Well, in one case you have things like Microsoft using their own APIs, as I mentioned. With Adobe that applies less, I suppose, but I think to some degree the answer is that some applications are just hard, and there's no way around it. And we can only spend so much time on a hard application. Debugging things can get very hard with Wine. Let me explain that for a minute. Because normally, when you think about debugging an application, you say, oh, I'm gonna open up my debugger, break at this point, see whether all the variables are what I expect, or maybe wait for it to crash and then get a backtrace and see where it crashed and why. You can't do that with Wine, because you don't have the application; you don't have the debugging symbols. You don't know anything about the code you're running unless you take the time to disassemble and decompile and read through it. And that's difficult every time. It's not only difficult: every time I've looked at a program and thought, I'm gonna just try and figure out what the program is doing,

[00:10:00] Elizabeth: it takes so much time, and it is never worth it. Sometimes you have no other choice, but usually you end up relying on seeing what calls it makes into the operating system and trying to guess which one of those is going wrong. Now, sometimes you'll get lucky and it'll crash in Wine code, or sometimes it'll make a call into a function that we don't implement yet, and we know, oh, we need to implement that function.
But sometimes it does something more obscure, and we have to figure out: of all these millions of calls it made, which one are we implementing incorrectly, so that it's returning the wrong result or not doing something that it should? And then you add onto that all the sorts of harder-to-debug things, like memory errors that we could make. It can be very difficult, and so some applications just suffer from those hard bugs. And sometimes it's also just a matter of there not being enough demand for something for us to spend a lot of time on it.

[00:11:11] Elizabeth: Right.

[00:11:14] Jeremy: Yeah, I can see how that would be really challenging, because, like you were saying, you don't have the symbols, you don't have the source code, so you don't know how any of this software you're supporting was actually written.

[00:11:42] Jeremy: So a lot of times, you know, there may be some behavior that's wrong, or a crash, but it's not because Wine crashed or there was an error in Wine.

[00:11:50] Elizabeth: Exactly.

Test suite (Half the code is tests)

[00:11:52] Jeremy: I can see how that would be really challenging. And Wine runs so many different applications. I'm kind of curious: how do you even track what's working and what's not as you change Wine? Because if you support thousands or tens of thousands of applications, how do you know when you've got a regression or not?

[00:12:15] Elizabeth: So, it's a great question. Um, probably over half of Wine by source code volume (I'd have to actually check, but I think it's probably over half) is what we call tests. And these tests serve two purposes. The one purpose is as regression tests. And the other purpose is they're conformance tests that test how an API behaves on Windows and validate that we are behaving the same way. So we write all these tests, we run them on Windows, you know, write the tests to check what Windows returns, and then we run 'em on Wine and make sure that that matches. And we have just such a huge body of tests to make sure that we're not breaking anything, and that for all the code that we get into Wine where it looks like, wow, is it really doing that? Yep, that's what Windows does; the test says so. So pretty much any new code that we get has to have tests to validate, to demonstrate, that it's doing the right thing.

[00:13:31] Jeremy: And so rather than testing against a specific application, seeing if it works, you're making a call to a Windows system call, seeing how it responds, and then making the same call within Wine and just making sure they match.

[00:13:48] Elizabeth: Yes, exactly. And that is obviously a lot more automatable, right? Because otherwise, you know, these are all graphical applications:

[00:14:02] Elizabeth: you'd have to manually do the things and make sure they work. Um, but if you write automatable tests, you can just run them all, and the machine will complain at you if it fails in continuous integration.
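Wine's real conformance tests are written in C against the Win32 API, but the pattern Elizabeth describes can be sketched in a few lines of Python with hypothetical stand-in functions: pin down what the reference implementation returns, then assert the reimplementation matches it case by case.

```python
# Sketch of the conformance-test pattern (not Wine code): both functions
# below are hypothetical stand-ins for "what the reference platform does"
# and "what our reimplementation does".
import unittest

def reference_parse_version(s):          # stand-in for the reference behavior
    major, _, minor = s.partition(".")
    return int(major), int(minor or 0)

def our_parse_version(s):                # stand-in for the reimplementation
    parts = s.split(".")
    return int(parts[0]), int(parts[1]) if len(parts) > 1 else 0

class ConformanceTest(unittest.TestCase):
    CASES = ["1.0", "2.5", "10"]         # behaviors pinned down case by case

    def test_matches_reference(self):
        for case in self.CASES:
            self.assertEqual(our_parse_version(case),
                             reference_parse_version(case))

if __name__ == "__main__":
    unittest.main()
```

Here both behaviors live in one process for brevity; in Wine's suite the same tests are first run on Windows to validate the expectations, then run against Wine, so a failure flags exactly which call diverges and doubles as the regression check Elizabeth mentions.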
How compatibility problems appear to users

[00:14:13] Jeremy: And because there's all these potential compatibility issues, where maybe a certain call doesn't behave the way an application expects, what are the ways that shows up when someone's using software? I mean, I think you mentioned crashes, but I imagine there could be all sorts of other types of behavior.

[00:14:37] Elizabeth: Yes, very much so. Basically anything you can imagine, again, is what will happen. Crashes are the easy ones, because you know when and where it crashed and you can work backwards from there. But it could also hang, or it could not render right; maybe render a black screen. For games, you could very frequently have graphical glitches, where maybe some objects won't render right, or the entire screen will be red. Who knows? In a very bad case, you could even bring down your system, and we usually say that's not Wine's fault, that's the graphics library's fault, 'cause they're not supposed to do that no matter what we do. But, you know, sometimes we have to work around that anyway. But yeah, there's been some very strange and idiosyncratic bugs out there too.

[00:15:33] Jeremy: Yeah. And like you mentioned, there's so many different things that could have gone wrong that I imagine it's very difficult to find.

Performance is comparable to native

[00:15:49] Jeremy: And when software runs through Wine... a lot of our listeners will probably be familiar with running things in a virtual machine, and they know that there's a big performance impact from doing that.

[00:15:57] Jeremy: How does the performance of applications compare to running natively on the original Windows OS, versus virtual machines?

[00:16:08] Elizabeth: So, in theory (and I haven't actually done this recently, so I can't speak too much to that), the idea is it's a lot faster. There is a bit of a joke acronym for Wine: Wine Is Not an Emulator, even though I started out by saying Wine is an emulator, and it was originally called a Windows emulator. What this basically means is that Wine is not a CPU emulator. When you think about emulators in a general sense, they're often emulators for specific CPUs, often older ones, like a Commodore emulator or an Amiga emulator. But in this case, you have software that's written for an x86 CPU, and it's running on an x86 CPU, executing the same instructions it would on Windows. It's just that when it says "now call this Windows function," it calls us instead. So that all should perform exactly the same, as opposed to a virtual machine, where you have to interpret the instructions and maybe translate them to a different instruction set. The only performance difference is going to be in the functions that we are implementing ourselves, and we try to implement them to perform as well, or almost as well, as Windows. There's always going to be a bit of a theoretical gap, because we have to translate from, say, one API to another, but we try to make that as little as possible. And in some cases, the operating system we're running on is just better than Windows, and the libraries we're using are better than Windows.

[00:18:01] Elizabeth: And so our games will run faster, for example.
Sometimes we can do a better job than Windows at implementing something that's under our purview. There are some games that actually run a little bit faster in Wine than they do on Windows.

[00:18:22] Jeremy: Yeah, that reminds me of how there's these gaming handhelds out now, and some of the same ones either let you install Linux or install Windows, or they just come with one pre-installed. And I believe what I've read is that oftentimes, running the same game on Linux, the battery life is better, and sometimes even the performance is better, with these handhelds.

[00:18:53] Jeremy: So it's really interesting that that can even be the case.

[00:18:57] Elizabeth: Yeah, it's really a testament to the huge amount of work that's gone into that, both on the Wine side and on the side of the graphics team and the kernel team. And, of course, the years of work that's gone into Linux, even before these gaming handhelds were even under consideration.

Proton and Valve Software's role

[00:19:21] Jeremy: And something... so, for people who are familiar with the handhelds, like the Steam Deck, they may have heard of Proton. Uh, I wonder if you can explain what Proton is and how it relates to Wine.

[00:19:37] Elizabeth: Yeah. So Proton is basically, how do I describe this? It's sort of a fork of Wine, although we try to avoid the term fork; we say it's a downstream distribution, because we contribute back up to Wine. So it is an alternate distribution of Wine, and it's also some code that basically glues Wine into an embedding application, originally intended for Steam and developed for Valve. It has also been used in other software. Where Proton differs from Wine, besides the glue part, is that it has some extra hacks in it for bugs that are hard to fix and easy to hack around, and some quick hacks for making games work now that are in the process of going upstream to Wine and getting their code quality improved and going through review.

[00:20:54] Elizabeth: But we want the game to work now, when we distribute it. So that'll go into Proton immediately, and then once the patch makes it upstream, we replace it with the version of the patch from upstream. There's other things to make it interact nicely with Steam and so on. And yeah, I think that's about it.

[00:21:19] Jeremy: Yeah. And I think for people who aren't familiar, Steam is like this... um, I don't even know what you call it, like a gaming store and a...

[00:21:29] Elizabeth: A game distribution service. It's got a huge variety of games on it, and you just publish. And it's a great way for publishers to interact with a wider gaming community; after paying a cut of their profits to Valve, they can reach a lot of people that way. And because all these games are on Steam, and Valve wants them to work well on their handheld, they contracted us to basically take their entire catalog, which is huge, enormous, and just, step by step, fix every game and make them all work.

[00:22:10] Jeremy: So, um, and I guess for people who aren't familiar, Valve Software is the company that runs Steam. So it sounds like they've asked your company to help improve the compatibility of their catalog.
[00:22:24] Elizabeth: Yes. Valve contracted us, and, again, when you're talking about Wine using lower-level libraries, they've also contracted a lot of other people outside of Wine. Basically, the entire stack has had a tremendous investment by Valve Software to make gaming on Linux work well.

The entire stack receives changes to improve Wine compatibility

[00:22:48] Jeremy: And when you refer to the entire stack, what are some of those pieces, at least at a high level?

[00:22:54] Elizabeth: Let's see, let me think. There is the Wine project. The Mesa graphics libraries; that's another open source software project that has existed for a long time, but Valve has put a lot of funding and effort into it. The Linux kernel, in various different ways.

[00:23:17] Elizabeth: The desktop environment and window manager are also things they've invested in.

[00:23:26] Jeremy: Yeah. Everything that the game needs, on any level, and that the operating system of the handheld device needs.

Wine's history

[00:23:37] Jeremy: And Wine's been going on for quite a while. I think it's over a decade, right?

[00:23:44] Elizabeth: I believe... oh, far more than a decade. I wanna say it started about 1995, the mid nineties. I probably have that date wrong, but I believe Wine started about the mid nineties.

[00:24:00] Jeremy: Mm.

[00:24:00] Elizabeth: It's going on for three decades at this rate.

[00:24:03] Jeremy: Wow. Okay.

[00:24:06] Jeremy: And so all this time, how has the project sort of sustained itself? Like, who's been involved, and how has it been able to keep going this long?

[00:24:18] Elizabeth: Uh, I think, as is the case with a lot of free software, it just keeps trudging along. There's been times where there's a lot of interest in Wine, and there's been times where there's less, and we are fortunate to be in a time where there's a lot of interest in it. We've had the same maintainer for almost this entire existence, Alexandre Julliard. There was one person who started maintaining it before him, Bob Amstadt, who left maintainership to him after a year or two. And there have been a few developers who have been around for a very long time, a lot of developers who have been around for a decent amount of time but not for the entire duration, and then a very, very large number of people who come and submit a one-off fix for their individual application that they want to make work.

[00:25:19] Jeremy: How does CrossOver relate to the Wine project? Like, it sounds like you had mentioned Valve Software hired you for subcontract work, but CrossOver itself has been around for quite a while. So how has that been connected to the Wine project?

[00:25:37] Elizabeth: So the company I work for is CodeWeavers, and CrossOver is our flagship software. CodeWeavers is a couple different things. We have a sort of porting service, where companies will come to us and say, can you port my application, usually to Mac? And then we also have a retail product, where we basically have our own thing similar to Proton, but, you know, older, the same idea, where we will add some hacks into it for very difficult-to-solve bugs, and we have a nice graphical interface. And then the other thing that we're selling with CrossOver is support.
So if you, you know, try to run a certain application and you buy CrossOver, you can submit a ticket saying this doesn't work, and we now have a financial incentive to fix it. We'll spend company resources to fix your bug, right? So CodeWeavers has been around since 1996, and CrossOver, I don't know the date, but CrossOver has been around for probably about two decades, if I'm not mistaken. [00:27:01] Jeremy: And when you mention helping companies port their software to, for example, MacOS, [00:27:07] Jeremy: is the approach that you would port it natively to MacOS APIs, or is it that you would help them get it running using Wine on MacOS? [00:27:21] Elizabeth: Right. So that's basically what makes us so unique among porting companies: instead of rewriting their software, we just basically stick it inside of CrossOver and make it run. [00:27:36] Elizabeth: And the idea has always been, you know, the more we implement, the more we get correct, the more applications will just work. And sometimes it works out that way, sometimes not really so much, and there's always work we have to do to get any given application to work. But yeah, it's very unusual, because we don't ask companies for any of their code. We don't need it. We just fix the Windows API. [00:28:07] Jeremy: And so in that case, the ports would be, let's say someone sells a MacOS version of their software, they would bundle CrossOver with their software. [00:28:18] Elizabeth: Right. And usually when you do this, it doesn't look like there's CrossOver there. It just looks like the software is native, but there is CrossOver under the hood. Loading executables and linked libraries [00:28:32] Jeremy: And so earlier we were talking about how you're basically intercepting the system calls that these binaries are making, whether that's the executable or the DLLs from Windows. But I think probably a lot of our listeners are not really sure how that's done. They may have built software, but they don't know, how do I basically hijack the system calls that this application is making? [00:29:01] Jeremy: So maybe you could talk a little bit about how that works. [00:29:04] Elizabeth: So there's a couple steps that go into it. When you think about a program, that's a big file that's got all the machine code in it, and then it's got stuff at the beginning saying, here's how the program works and here's where in the file the processor should start running. That's your EXE file. And then your DLL files are libraries that contain shared code, and you have a similar sort of file that says, here's the entry point for this function, this parse XML function or what have you. [00:29:42] Elizabeth: And here's the entry point that has the generate XML function, and so on and so forth. And then the operating system will basically take the EXE file, see all the bits in it that say, I want to call the parse XML function, load that DLL, and hook it up. So the processor ends up just seeing: jump directly to this parse XML function, run that, return, and so on. [00:30:14] Elizabeth: And so, what does Wine do?
Part of Wine is the libraries, you know, implementing that parse XML and read XML function, but part of it is the loader, which is the part of the operating system that hooks everything together. And when we load, we redirect to our libraries. We don't have Windows libraries. [00:30:38] Elizabeth: We redirect to ours, and then we run our code, and then we jump back to the program, and yeah. [00:30:48] Jeremy: So it's the loader that's a part of Wine that's actually, I'm not sure if running the executable is the right term. [00:30:58] Elizabeth: No, I think that's a good term. It starts in the loader, and then we say, okay, now run the machine code in the executable, and then it runs, and it jumps between our libraries and back and so on. [00:31:14] Jeremy: And like you were saying before, oftentimes when it's trying to make a system call, it ends up being handled by a function that you've written in Wine, and then that in turn will call the Linux system calls or the MacOS system calls to try and accomplish the same result. [00:31:36] Elizabeth: Right, exactly. [00:31:40] Jeremy: And something that I think maybe not everyone is familiar with is, there's this concept of user space versus kernel space. Can you explain what the difference is? [00:31:51] Elizabeth: So the way I would describe a kernel is, it's the part of the operating system that can do anything, right? Any program, any code that runs on your computer is talking to the processor, and the processor has to be able to do anything the computer can do. [00:32:10] Elizabeth: It has to be able to talk to the hardware. It has to set up the memory space, which is actually a very complicated task. It has to be able to switch to another task, and basically talk to another program. You have to have something there that can do everything, but you don't want any program to be able to do everything. Not since the nineties; that's about when we realized that we can't do that. So the kernel is the part that can do everything, and when you need to do something that requires those permissions that you can't give everyone, you have to talk to the kernel and ask it: hey, can you do this for me, please? And in a very restricted way, where it's only the safe things you can do. And to a degree, it's also like a library, right? Kernels have always existed, and they've always just been the core standard library of the computer, the thing that does things like read and write files, which are very, very complicated tasks under the hood but look very simple, because all you say is: write this file. It talks to the hardware and abstracts away all the differences between different drivers. So the kernel is doing all of these things. And because the kernel is the part that can do everything, and because, when you think about it, the kernel is basically one program that is always running on your computer, but it's only one program, when a user calls the kernel, you are switching from one program to another, and you're doing a lot of complicated things as part of this. You're switching to the higher privilege level where you can do anything, and you're switching the state from one program to another. So this is what we mean when we talk about user space, where you're running like a normal program, and kernel space, where you've suddenly switched into the kernel.
[00:34:19] Elizabeth: Now you're executing with increased privileges, a different idea of the process space, increased responsibility, and so on. [00:34:30] Jeremy: And so, when you were talking about the system calls for handling 3D audio or parsing XML, are those system calls considered part of user space, and then those things call into kernel space on your behalf? Or how would you describe that? [00:34:50] Elizabeth: So when you look at Windows, the vast, vast majority of the Windows library is all user space. Most of these libraries that we implement never leave user space. They never need to call into the kernel. It's only the core low-level stuff. Things like, we need to read a file, that's a kernel call. When you need to sleep and wait for some seconds, that's a kernel call. When you need to talk to a different process, things that interact with different processes in general. Not just allocate memory, but allocate a page of memory from the memory manager, which then gets suballocated by the heap allocator. Things like that. [00:35:31] Jeremy: Yeah, so if I was writing an application and I needed to open a file, for example, does that mean that I would have to communicate with the kernel to read that file? [00:35:43] Elizabeth: Right, exactly. [00:35:46] Jeremy: And so for most applications, it sounds like it's gonna be a mixture. You're gonna have a lot of things that call user space calls, and then a few, you mentioned more low-level ones, that are gonna require you to communicate with the kernel. [00:36:00] Elizabeth: Yeah, basically. And it's worth noting that in all operating systems, you're almost always gonna be calling a user space library. That might just be a thin wrapper over the kernel call. It's gonna do just a little bit of work and then call the kernel. [00:36:19] Elizabeth: In fact, in Windows, that's the only way to do it. In many other operating systems, you can actually tell the processor to make the kernel call. There is a special instruction that does this, and it'll go directly to the kernel, and there's a defined interface for this. But in Windows, that interface is not defined. It's not stable or backwards compatible like the rest of Windows is. So even if you wanted to use it, you couldn't, and you basically have to call into the high-level libraries, or low-level libraries as it were, that, say, create a file. And those don't do a lot. [00:37:00] Elizabeth: They just kind of tweak their parameters a little and then pass them right down to the kernel. [00:37:07] Jeremy: And so Wine, it sounds like it needs to implement both the user space calls of Windows, but then also the kernel calls as well. But Wine itself, does it do that only in Linux user space or MacOS user space? [00:37:27] Elizabeth: Yes. This is a very tricky thing, but all of what is Wine runs in user space, and we use kernel calls that are already there to talk to the host kernel. You have to, and you get this sort of second layer of thinking: there's the Windows user space and kernel, [00:37:50] Elizabeth: and then there's a host user space and kernel, and Wine is running all in the host user space, but it's emulating the Windows kernel.
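To make that "thin wrapper" idea concrete, here is a minimal sketch, assuming a Linux host (an illustration of the concept from the conversation, not code from Wine): libc's write() is the user-space wrapper, while the raw syscall(2) interface reaches the same kernel entry point directly. This raw, stable, documented entry path is exactly what Windows chooses not to expose, which is why Windows programs must go through user-space DLLs.

```c
/* Thin-wrapper illustration for a Linux host.
   write() is a user-space libc wrapper that does a little work and
   traps into the kernel; syscall(SYS_write, ...) reaches the same
   kernel entry point directly via the defined syscall interface. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char via_libc[] = "hello via the libc wrapper\n";
    write(1, via_libc, sizeof(via_libc) - 1);            /* wrapper */

    const char via_raw[] = "hello via the raw syscall\n";
    syscall(SYS_write, 1, via_raw, sizeof(via_raw) - 1); /* direct */
    return 0;
}
```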
In fact, one of the weirdest, trickiest parts is, I mentioned that you can run some drivers in Wine. And those drivers actually think they're running in the Windows kernel, which in a sense works the same way. It has libraries that it can load, and those drivers are basically libraries, and they're making kernel calls, calls into the kernel library that does some very, very low-level tasks that you're normally only supposed to be able to do in a kernel. And, you know, because the kernel requires some privileges, we kind of pretend we have them. And in many cases, even the drivers are using abstractions, and we can just implement those abstractions over the slightly higher-level abstractions that exist in user space. [00:39:00] Jeremy: Yeah, I hadn't even considered being able to use hardware devices, but I suppose if, in the end, you're reproducing the kernel, then whether you're running software or you're talking to a hardware device, as long as you implement the calls correctly, then I suppose it works. [00:39:18] Jeremy: Cause you're talking about a device, like maybe it's some kind of USB device that has drivers for Windows but doesn't for Linux. [00:39:28] Elizabeth: That's exactly, that's kind of the example I've used. I think one of my best success stories was drivers for a graphing calculator. [00:39:41] Jeremy: Oh, wow. [00:39:42] Elizabeth: That connected via USB, and I basically just plugged the Windows drivers into Wine and ran it. And I had to implement a lot of things, but it worked. But for example, something like a graphics driver is not something you could implement in Wine, because you need the graphics driver on the host. We can't talk to the graphics driver while the host is already doing so. [00:40:05] Jeremy: I see. Yeah. And in that case it probably doesn't make sense to do so. [00:40:11] Elizabeth: Right. [00:40:12] Elizabeth: Right. It doesn't, because the transition from user into kernel is complicated. You need the graphics driver to be in the kernel, the real kernel. Having it in Wine would be a bad idea. Yeah. [00:40:25] Jeremy: I think there's enough APIs you have to try and reproduce that, I think, doing something where, [00:40:32] Elizabeth: very difficult [00:40:33] Jeremy: right. Poor system call documentation and private APIs [00:40:35] Jeremy: There's so many different calls, both in user space and in kernel space. I imagine the user space ones Microsoft must document to some extent, but, is that, is that a [00:40:51] Elizabeth: well, sometimes, [00:40:54] Jeremy: Sometimes. Okay. [00:40:55] Elizabeth: I think it's actually better now than it used to be. But here's where things get fun, because sometimes there will be, you know, regular documented calls. Sometimes those calls are documented, but the documentation isn't very good. Sometimes programs will just sort of look inside Microsoft's DLLs and use calls that they aren't supposed to be using. Sometimes they use calls that they are supposed to be using, but the documentation has disappeared, just because it's that old of an API and Microsoft hasn't kept it around. And sometimes Microsoft's own software uses APIs that were never documented, because they never wanted anyone else using them, but they still ship them with the operating system.
There was actually kind of a lawsuit about this, an antitrust lawsuit, because by shipping things that only they could use, they were kind of creating a trust. And that got some things documented, at least in theory. They kind of haven't stopped doing it, though. [00:42:08] Jeremy: Oh, so even today they're, I guess they would call those private APIs, I suppose. [00:42:14] Elizabeth: I suppose. Yeah, you could say private APIs. But if we want to get newer versions of Microsoft Office running, we still have to figure out what they're doing and implement them. [00:42:25] Jeremy: And given that, like you were saying, the documentation is kind of all over the place, if you don't know how an API is supposed to behave, how do you even approach implementing it? [00:42:38] Elizabeth: And that's what the conformance tests are for. I mentioned earlier, we have this huge body of conformance tests that doubles as regression tests. If we see an API we don't know what to do with, or an API we think we know what to do with, because the documentation can just be wrong and often has been, then we write tests to figure out how it's supposed to behave. We kind of guess, and we write tests, and we pass some things in and see what comes out, and see what the operating system does, until we figure out: oh, so this is what it's supposed to do, and these are the exact parameters. And then we implement it according to those tests. [00:43:24] Jeremy: Is there any distinction in approach for when you're trying to implement something that's at the user level versus the kernel level? [00:43:33] Elizabeth: No, not really. Like I mentioned earlier, a kernel call is just like a library call. It's just done in a slightly different way. It's still got a set of parameters; they're just encoded differently. And again, the way kernel calls are done is, on a level just above the kernel you have a library that just passes things through almost verbatim to the kernel, and we implement that library instead. [00:44:10] Jeremy: And you've been working on Wine for, I think, over six years now. [00:44:18] Elizabeth: That sounds about right. Debugging and having broad knowledge of Wine [00:44:20] Jeremy: What does your day to day look like? What parts of the project do you work on? [00:44:27] Elizabeth: It really varies from day to day. Some people will work on the same parts of Wine for years; some people will switch around and work on all sorts of different things. [00:44:42] Elizabeth: And I definitely belong to that second group. If you name an area of Wine, I have almost certainly contributed a patch or two to it. There's some areas I work on more than others, like 3D graphics, multimedia, a compiler that exists in there, and sockets, so networking communication is another thing I work a lot on. Day to day, I kind of just get a bug for some program or another, and I take it and debug it and figure out why the program's broken, and then I fix it. And there's so much variety in that, because a bug can take so many different forms, like I described, and the fix can be simple or complicated, and it can be really anywhere, to a degree.
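A brief aside on the conformance tests described above: the sketch below is loosely modeled on the style Wine's test suite uses, where an ok() macro (provided by wine/test.h in the Wine tree) records whether each observed behavior matches expectations when the same test is run on real Windows. The specific API probed here was chosen for illustration; treat the details as an assumption rather than an excerpt from Wine's actual tests.

```c
/* Hedged sketch of a Wine-style conformance test. The idea: call the
   real Windows function with concrete parameters and assert on what
   actually comes back, pinning down behavior the docs may not state. */
#include <windows.h>
#include "wine/test.h"   /* supplies the ok() macro and START_TEST() */

static void test_get_temp_path(void)
{
    WCHAR buf[MAX_PATH];
    DWORD len = GetTempPathW(MAX_PATH, buf);

    ok(len > 0 && len < MAX_PATH,
       "GetTempPathW failed, error %lu\n", GetLastError());
    /* Windows is observed to return the path with a trailing
       backslash; asserting that here pins the behavior down so the
       Wine implementation can be written to match it. */
    ok(buf[len - 1] == '\\', "expected trailing backslash\n");
}

START_TEST(temppath)
{
    test_get_temp_path();
}
```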
[00:45:40] Elizabeth: Being able to work on any part of Wine is sometimes almost a necessity, because if a program is just broken, you don't know why. It could be anything. It could be any sort of API. And sometimes you can hand it to somebody who's got a lot of experience in that API, but sometimes you just fix whatever's broken, and you gain experience that way. [00:46:06] Jeremy: Yeah, I mean, I was gonna ask about the specialized skills it takes to work on Wine, but it sounds like maybe in your case it's all of them. [00:46:15] Elizabeth: There's a bit of that. The skills to work on Wine are a very unique set of skills, and it largely comes down to debugging, because you can't use the tools you'd normally use to debug. [00:46:30] Elizabeth: You have to be creative and think about it in different ways. Sometimes you have to be very creative. And programs will try their hardest to avoid being debugged, because they don't want anyone breaking their copy protection, for example, or hacking in cheats. They don't want anyone hacking them like that. [00:46:54] Elizabeth: And we have to do it anyway, for good and legitimate purposes, we would argue: to make them work better on more operating systems. And so we have to fight that every step of the way. [00:47:07] Jeremy: Yeah, it seems like it's a combination of, like you were saying, being able to debug, and you're debugging not necessarily your own code, but the behavior of the program, [00:47:25] Jeremy: and then based on that behavior, you have to figure out, okay, where in all these different systems within Wine could this part be not working? [00:47:35] Jeremy: And I suppose you probably build up some kind of mental map in your head, so when you get a type of bug or a type of crash, you go, oh, maybe it's this, maybe it's here, or something. [00:47:47] Elizabeth: Yeah, there is a lot of that. You notice some patterns after a while; experience helps. But because any bug could be new, sometimes experience doesn't help and you just kind of have to start from scratch. Finding a bug related to XAudio [00:48:08] Jeremy: At sort of a high level, can you give an example of where you got a specific bug report, and then where you had to look to eventually find which parts of the system were the issue? [00:48:21] Elizabeth: One good example, I think, that I've done recently. I mentioned this XAudio library that does 3D audio. Say you come across a bug, and I'm gonna be a little bit generic here and say you come across a bug where some audio isn't playing right. Maybe there's silence where there should be audio. So you look in and see, well, where's that getting lost? You can basically look at the input calls and say, here's the buffer the program is submitting that's got all the audio data in it. And then you look at where you think the output should be. That library will internally call a different library, which programs can also interact with directly. [00:49:03] Elizabeth: Our high-level library interacts with that one; it's the "give this sound to the audio driver" layer, right? So you've got XAudio on top of mmdevapi, which is the other library that gives audio to the driver. And you see, well, the buffers that XAudio is passing into mmdevapi, they're empty. There's nothing in them.
So you have to kind of work through the XAudio library to see, where's that sound getting lost? Or maybe it's not getting lost; maybe it's coming through all garbled, and I've had to look at the buffer and see why it's garbled. I'll open it up in Audacity and look at the shape of the wave and say, huh, that shape looks like we're putting in silence every 10 nanoseconds or something, or reversing something, or interpreting it wrong. Things like that. There's a lot of putting in printfs, basically all throughout Wine, to see where the state changes: where is it right, and where do things start going wrong? [00:50:14] Jeremy: Yeah. And in the audio example, because they're making a call to your XAudio implementation, you can see that, okay, the buffer, the audio that's coming in, that part is good. It's just that later on, when it sends it to what's gonna actually have it be played by the hardware, that's when it goes missing. [00:50:37] Elizabeth: We did something wrong in a library that destroyed the buffer. And I think, on a very high level, a lot of debugging Wine is about finding where things are good and finding where things are bad, and then narrowing that down until we find the one spot where things go wrong. There's a lot of processes that go like that. [00:50:57] Jeremy: And like you were saying, the more you see these problems, hopefully the easier it gets to narrow down where. [00:51:04] Elizabeth: Often, yeah. Especially if you keep debugging things in the same area. How much code is OS specific? [00:51:09] Jeremy: And Wine supports more than one operating system. I saw there was Linux, MacOS, I think FreeBSD. How much of the code is operating system specific, versus how much can just be shared across all of them? [00:51:27] Elizabeth: Not that much is operating system specific, actually. When you think about the volume of Wine, the vast majority of it is the high-level code that doesn't need to interact with the operating system on a low level, right? Because Microsoft keeps putting lots and lots of different libraries in their operating system, and a lot of these are high-level libraries. And even when we do interact with the operating system, we're using cross-platform libraries, or we're using POSIX. All these operating systems that we're implementing on basically conform to the POSIX standard, which is basically like Unix; they're all Unix based, and POSIX is a Unix-based standard. Microsoft is, you know, the big exception that never did implement it, and so we have to translate its APIs to Unix APIs. Now, that said, there is a lot of very operating system specific code. Apple makes things difficult by diverging almost wherever they can, and so we have a lot of Apple-specific code in there. [00:52:46] Jeremy: Another example I can think of is, I believe MacOS doesn't support Vulkan. [00:52:53] Elizabeth: Yes. Yeah, that's a great example of Mac not wanting to use generic libraries that work on every other operating system. And in some cases we look at it and are like, alright, we'll implement a wrapper for that too, on top of your operating system. We've done it for Windows; we can do it for Vulkan. And then you get the MoltenVK project. And to be clear, we didn't invent MoltenVK.
It was around before us; we have contributed a lot to it. Direct3D, Vulkan, and MoltenVK [00:53:28] Jeremy: Yeah, I think maybe just at a high level it might be good to explain the relationship between Direct3D, or DirectX, and Vulkan. Maybe if you could go into that. [00:53:42] Elizabeth: So Direct3D is Microsoft's 3D API. The 3D APIs are, firstly, a way to abstract out the differences between different graphics cards, which, you know, look very different on a hardware level. [00:54:03] Elizabeth: They used to look very different, and they still do. And secondly, a way to deal with them at a high level, because actually talking to the graphics card on a low level is very, very complicated. Even talking to it on a high level is complicated, but it can get a lot worse, if you've ever done any graphics driver development. So you have a number of different APIs that achieve these two goals: building a common abstraction, and building a high-level abstraction. OpenGL was broadly the free operating system world's choice, the non-Microsoft world's choice, back in the day. [00:54:53] Elizabeth: And then Direct3D was Microsoft's API, and both of these have evolved over time and come up with new versions and such. And when any API exists for too long, it gains a lot of cruft and needs to be replaced. Eventually the people who developed OpenGL decided, we need to start over, get rid of the cruft, make it cleaner, and make it lower level. [00:55:28] Elizabeth: Because to get maximum performance, games really want low-level access. And so they made Vulkan. Microsoft kind of did the same thing, but they still call it Direct3D; the newest version of Direct3D is lower level, and it's called Direct3D 12. And Mac looked at this, and they decided, we're gonna do the same thing too, but we're not gonna use Vulkan. [00:55:52] Elizabeth: We're gonna define our own, and they call it Metal. And so when we want to translate Direct3D 12 into something that another operating system understands, that's probably Vulkan. And on Mac, we need to translate it to Metal somehow. And we decided, instead of having a separate layer from Direct3D 12 to Metal, we're just gonna translate it to Vulkan and then translate the Vulkan to Metal. It also lets things written for Vulkan on Windows, which is also a thing that exists, work on Metal. [00:56:30] Jeremy: And having to do that translation, does that have a performance impact, or is that not really felt? [00:56:38] Elizabeth: Yes. It's kind of like anything when you talk about performance. Like I mentioned earlier, there's always gonna be overhead from translating one API to another, but we put in heroic efforts to try to make sure that doesn't matter, to make sure that stuff that needs to be fast is really as fast as it can possibly be. [00:57:06] Elizabeth: And some very clever things have been done along those lines. And sometimes, you know, the graphics drivers underneath are so good that it actually does run better, even despite the translation overhead. And then sometimes, to make it run fast, we need to say, well, we're gonna implement a new API that behaves more like Windows, so we can do less work translating it.
And sometimes that goes into the graphics library, and sometimes that goes into other places. Targeting Wine instead of porting applications [00:57:43] Jeremy: Yeah. Something I've found a little bit interesting about the last few years is, [00:57:49] Jeremy: developers in the past would generally target Windows, and you might be lucky to get a Mac port or a Linux port. And I wonder, in your opinion, now that a lot of developers are just targeting Windows and relying on Wine or Proton to run their software, is there any, I suppose, downside to doing that? [00:58:17] Jeremy: Or is it all just upside, like everyone should target Windows as this common platform? [00:58:23] Elizabeth: Yeah, it's an interesting question. There's some people who seem to think it's a bad thing that we're not getting native ports in the same sense, and then there's some people who see it as, no, that's a perfectly valid way to do ports: just write for this de facto common API. It was never intended as a cross-platform common API, but we've made it one. [00:58:47] Elizabeth: Right? And so why is that any worse than if it runs on a different API on Linux or Mac? And I guess that argument tends to make sense to me. I don't personally see a lot of reason to say that one library is more pure than another. [00:59:12] Elizabeth: Now, I do think Windows APIs are generally pretty bad. This might just be an effect of having to work with them for a very long time, seeing all their flaws, and having to deal with the nonsense that they do. But I think a lot of the native Linux APIs are better. But if you like your Windows API better, and if you want to target Windows and that's the only way to do it, then sure, why not? What's wrong with that? [00:59:51] Jeremy: Yeah. And doing it this way, targeting Windows, I mean, if you look at the past, even though you had some software that would be ported to other operating systems, without this compatibility layer, without people just targeting Windows, all this software that people can now run on these portable gaming handhelds or on Linux, most of that software was never gonna be ported. So yeah, absolutely. And [01:00:21] Elizabeth: That's [01:00:22] Jeremy: having that as an option. Yeah. [01:00:24] Elizabeth: That's kind of why Wine existed, because people wanted to run their software, you know, that was never gonna be ported. And then the community just spent a lot of effort in making all these individual programs run. Yeah. [01:00:39] Jeremy: I think it's pretty amazing, too, that now that's become this official way, I suppose, of distributing your software, where you say, hey, I made a Windows version, but you're on your Linux machine, and it's officially supported, because we have this much belief in this compatibility layer. [01:01:02] Elizabeth: It's kind of incredible to see Wine having got this far. I mean, I started working on it, you know, six, seven years ago, and even then I could never have imagined it would be like this. [01:01:16] Jeremy: So as we wrap up, for the developers that are listening, or people who are just users of Wine, is there anything you think they should know about the project that we haven't talked about? [01:01:31] Elizabeth: I don't think there's anything I can think of.
[01:01:34] Jeremy: And if people wanna learn more about the Wine project, or see what you're up to, where should they head? Getting support and contributing [01:01:45] Elizabeth: We don't really have anything like news, unfortunately. Read the release notes, or follow some of the people from CodeWeavers who do blogs. If you go to codeweavers.com/blog, there's some CodeWeavers stuff, some marketing stuff, but there's also some developers who will talk about bugs they are solving, and how it's easy, and the experience of working on Wine. [01:02:18] Jeremy: And I suppose if someone's interested, like, let's say they have a piece of software that's not working through Wine, what's the best place for them to either get help, or maybe even get involved with trying to fix it? [01:02:37] Elizabeth: Yeah. So you can file a bug on winehq.org, or, you know, there's a lot of developer resources there, and you can get involved with contributing to the software. There's links to our mailing list and IRC channels, and the GitLab, which are all places you can find developers. [01:03:02] Elizabeth: We love to help you debug things. We love to help you fix things. We try our very best to be a welcoming community, and we've got a lot of experience working with people who want to get their application working. So we would love to have another. [01:03:24] Jeremy: Very cool. Yeah, I think Wine is a really interesting project, because for, I guess it would've been decades, it seemed very niche, like not many people [01:03:37] Jeremy: were aware of it. And now, I think maybe in particular because of the Linux gaming handhelds like the Steam Deck, Wine is something that a bunch of people who would've never heard about it before are now aware of. [01:03:53] Elizabeth: Absolutely. I've watched that transformation happen in real time, and it's been surreal. [01:04:00] Jeremy: Very cool. Well, Elizabeth, thank you so much for joining me today. [01:04:05] Elizabeth: Thank you, Jeremy. I've been glad to be here.
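As a closing companion to the episode's recurring theme of translating Windows APIs onto Unix ones, here is a hedged, minimal sketch of the general shape of such a mapping: a toy stand-in for a Windows-style file-open call implemented over POSIX open(). The toy_create_file() helper and its flags are invented for illustration; Wine's real CreateFileW path handles far more (wide-character paths, share modes, security attributes, NT namespace rules).

```c
/* Toy illustration (not Wine source) of mapping a Windows-style
   file-open call onto POSIX open(). Only the access/disposition
   flags are modeled, to show the shape of the translation. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define TOY_GENERIC_READ  0x1
#define TOY_GENERIC_WRITE 0x2
#define TOY_CREATE_ALWAYS 0x4   /* create or truncate, like CREATE_ALWAYS */

static int toy_create_file(const char *path, unsigned access, unsigned disposition)
{
    int flags;
    if ((access & TOY_GENERIC_READ) && (access & TOY_GENERIC_WRITE))
        flags = O_RDWR;
    else if (access & TOY_GENERIC_WRITE)
        flags = O_WRONLY;
    else
        flags = O_RDONLY;

    if (disposition & TOY_CREATE_ALWAYS)
        flags |= O_CREAT | O_TRUNC;

    return open(path, flags, 0644);  /* the POSIX call doing the real work */
}

int main(void)
{
    int fd = toy_create_file("demo.txt", TOY_GENERIC_WRITE, TOY_CREATE_ALWAYS);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}
```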
NVIDIA is doubling down on AI dominance with massive investments across cloud, chips, and infrastructure. It struck a $6.3B deal with CoreWeave to secure long-term GPU demand, is investing $5B in Intel to co-develop custom CPUs and PC chips that pair Intel processors with NVIDIA GPUs, and is committing up to $100B with OpenAI to build data centers requiring 10 gigawatts of power. These moves lock in demand, expand NVIDIA's role across computing ecosystems, and cement its leadership in the race to scale global AI infrastructure. This and more on the Tech Field Day News Rundown with Alastair Cooke and guest host Scott Robohn.

Time Stamps:
0:00 - Cold Open
0:36 - Welcome to the Tech Field Day News Rundown
1:22 - Hugging Face Brings Open-Source Models to GitHub Copilot Chat
3:52 - Pulumi Introduces AI Agents to Automate Infrastructure Management
6:51 - Cisco DevNet is now Cisco Automation
9:12 - North Dakota to Test Portable Micro Data Centers for AI in Oil Fields
12:14 - Sumo Logic Launches AI Agents to Streamline Cybersecurity Operations
14:46 - Justice Department Moves to Break Up Google's Ad Business
17:43 - NVIDIA's Multi-Billion-Dollar Moves Expand AI and Computing Leadership
21:35 - The Weeks Ahead
22:58 - Thanks for Watching the Tech Field Day News Rundown

Guest Host: Scott Robohn, CEO of Solutional. Follow our hosts Tom Hollingsworth, Alastair Cooke, and Stephen Foskett. Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon.
This week on the podcast we go over our review of the KLEVV CRAS V RGB DDR5-6000 32GB Memory Kit. We also discuss some interesting new CPU releases including the Ryzen 5 5600F and Core i5-110, Nintendo bringing back the Virtual Boy, new PC cases, and much more!
Ruby core team member Aaron Patterson (tenderlove) takes us deep into the cutting edge of Ruby's performance frontier in this technical exploration of how one of the world's most beloved programming languages continues to evolve.

At Shopify, Aaron works on two transformative projects: ZJIT, a method-based JIT compiler that builds on YJIT's success by optimizing register allocation to reduce memory spills, and enhanced Ractor support to enable true CPU parallelism in Ruby applications. He explains the fundamental differences between these approaches: ZJIT makes single-CPU utilization more efficient, while Ractors allow Ruby code to run across multiple CPUs simultaneously.

The conversation reveals how real business needs drive language development. Shopify's production workloads unpredictably alternate between CPU-bound and IO-bound tasks, creating resource utilization challenges. Aaron's team aims to build auto-scaling web server infrastructure using Ractors that can dynamically adjust to workload characteristics, potentially revolutionizing how Ruby applications handle variable traffic patterns.

For developers interested in contributing to Rails, Aaron offers practical advice: start reading the source code, understand the architecture, and look for ways to improve it. He shares insights on the challenges of making Rails Ractor-safe, particularly around passing lambdas between Ractors while maintaining memory safety.

The episode concludes with a delightful tangent into Aaron's latest hardware project: building a color temperature sensor for camera calibration that combines his photography hobby with his programming expertise. True to form, even his leisure activities inevitably transform into coding projects.

Whether you're a seasoned Ruby developer or simply curious about language design and performance optimization, Aaron's unique blend of deep technical knowledge and playful enthusiasm makes this an engaging journey through Ruby's exciting future.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Autoscaling that actually works. Take control of your cloud hosting.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show
Today we have Dr. Ewelina Kurtys on the show. Ewelina has a background in neuroscience and is currently working at FinalSpark. FinalSpark is using live neurons for computation instead of traditional silicon CPUs. The advantage is that live neurons are significantly more energy efficient than traditional computing, and given all the energy concerns right now with regard to running AI workloads and data centers, this seems quite relevant, even though bioprocessors are still very much in the research phase.
Condor Technology, through its subsidiary Condor Computing, introduced the Cuzco RISC-V CPU for datacenter applications. The Cuzco design supports up to eight cores with private L2 and shared L3 cache, features a 12-stage pipeline, and uses a time-based instruction scheduling system to reduce power consumption. Andes Technology, a founding member of RISC-V International, reported $42 million in 2024 sales and has shipped IP used in over 17 billion RISC-V chips since 2005. Nearly 40 percent of Andes' 2024 revenue came from AI sector deployments. Major technology companies and the European Union are investing in RISC-V, with the first Cuzco processors expected to reach users by year-end.

Learn more on this news by visiting us at: https://greyjournal.net/news/

Hosted on Acast. See acast.com/privacy for more information.
Episode 81: We're back! Lots to discuss in this video, including YouTube weirdness, the future of AMD and Intel's CPU platforms, the good old CPU core debate, upcoming GPU rumors and more.

CHAPTERS
00:00 - Intro
03:13 - Our YouTube views are down, this is what the stats say
31:14 - Zen 7 on AM5 and Intel's competing platform
54:13 - How important is platform longevity?
1:07:58 - Six core CPUs are still powerful for gaming
1:17:27 - Will Intel make an Arc B770?
1:26:22 - No RTX Super any time soon
1:29:14 - Updates from our boring lives

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Hosted on Acast. See acast.com/privacy for more information.
This week we talk about General Motors, the Great Recession, and semiconductors. We also discuss Goldman Sachs, US Steel, and nationalization.

Recommended Book: Abundance by Ezra Klein and Derek Thompson

Transcript

Nationalization refers to the process through which a government takes control of a business or business asset.

Sometimes this is the result of a new administration or regime taking control of a government, which decides to change how things work, so it gobbles up things like oil companies or railroads or manufacturing hubs, because that stuff is considered to be fundamental enough that it cannot be left to the whims, and the ebbs and eddies and unpredictable variables, of a free market; the nation needs reliable oil, it needs to be churning out nails and screws and bullets, so the government grabs the means of producing these things to ensure nothing stops that kind of output or operation.

That more holistic reworking of a nation's economy so that it reflects some kind of socialist setup is typically referred to as socialization, though commentary on the matter will still often refer to the individual instances of the government taking ownership over something that was previously private as nationalization.

In other cases these sorts of assets are nationalized in order to right some kind of perceived wrong, as was the case when the French government, in the wake of WWII, nationalized the automobile company Renault for its alleged collaboration with the Nazis when they occupied France.

The circumstances of that nationalization were questioned, as there was a lot of political scuffling between capitalist and communist interests in the country at that time, and some saw this as a means of getting back against the company's owner, Louis Renault, for his recent, violent actions against workers who had gone on strike before France's occupation. But whatever the details, France scooped up Renault and turned it into a state-owned company, and in 1994, the government decided that its ownership of the company was keeping its products from competing on the market, and in 1996 it was privatized and they started selling public shares, though the French government still owns about 15% of the company.

Nationalization is more common in some non-socialist nations than others, as there are generally considered to be significant pros and cons associated with such ownership.

The major benefit of such ownership is that a government-owned, or partially government-owned, entity will tend to have the government on its side to a greater or lesser degree, which can make it more competitive internationally, in the sense that laws will be passed to help it flourish and grow, and it may even benefit from direct infusions of money when needed, especially when international competition heats up, and because it generally allows that company to operate as a piece of government infrastructure, rather than just a normal business.

Instead of being completely prone to the winds of economic fortune, then, the US government can ensure that Amtrak, a primarily state-owned train company that's structured as a for-profit business, but which has a government-appointed board and benefits from federal funding, is able to keep functioning, even when demand for train services is low, and barbarians at the gate, like plane-based cargo shipping and passenger hauling, become a lot more competitive, maybe even to the point that a non-government-owned entity may have long since gone under, or dramatically reduced its service area, by
economic necessity.A major downside often cited by free-market people, though, is that these sorts of companies tend to do poorly, in terms of providing the best possible service, and in terms of making enough money to pay for themselves—services like Amtrak are structured so that they pay as much of their own expenses as much as possible, for instance, but are seldom able to do so, requiring injections of resources from the government to stay afloat, and as a result, they have trouble updating and even maintaining their infrastructure.Private companies tend to be a lot more agile and competitive because they have to be, and because they often have leadership that is less political in nature, and more oriented around doing better than their also private competition, rather than merely surviving.What I'd like to talk about today is another vital industry that seems to have become so vital, like trains, that the US government is keen to ensure it doesn't go under, and a stake that the US government took in one of its most historically significant, but recently struggling companies.—The Emergency Economic Stabilization Act of 2008 was a law passed by the US government after the initial whammy of the Great Recession, which created a bunch of bailouts for mostly financial institutions that, if they went under, it was suspected, would have caused even more damage to the US economy.These banks had been playing fast and loose with toxic assets for a while, filling their pockets with money, but doing so in a precarious and unsustainable manner.As a result, when it became clear these assets were terrible, the dominos started falling, all these institutions started going under, and the government realized that they would either lose a significant portion of their banks and other financial institutions, or they'd have to bail them out—give them money, basically.Which wasn't a popular solution, as it looked a lot like rewarding bad behavior, and making some businesses, private businesses, too big to fail, because the country's economy relied on them to some degree. But that's the decision the government made, and some of these institutions, like Goldman Sachs, had their toxic assets bought by the government, removing these things from their balance sheets so they could keep operating as normal. Others declared bankruptcy and were placed under government control, including Fannie Mae and Freddie Mac, which were previously government supported, but not government run.The American International Group, the fifth largest insurer in the world at that point, was bought by the US government—it took 92% of the company in exchange for $141.8 billion in assistance, to help it stay afloat—and General Motors, not a financial institution, but a car company that was deemed vital to the continued existence of the US auto market, went bankrupt, the fourth largest bankruptcy in US history. 
The government allowed its assets to be bought by a new company, also called GM, which would then function as normal. This allowed the company to keep operating, employees to keep being paid, and so on, but as part of that process, the company was given a total of $51 billion by the government, which took a majority stake in the new company in exchange.

In late 2013, the US government sold its final shares of GM stock, having lost about $10.7 billion over the course of that ownership, though it's estimated that about 1.5 million jobs were saved as a result of keeping GM and Chrysler, which went through a similar process, afloat, rather than letting them go under, as some people would have preferred.

In mid-August of this year, the US government took another stake in a big, historically significant company, though this time the company in question wasn't going through a recession-sparked bankruptcy: it was just falling way behind its competition, and was looking less and less likely to ever catch up.

Intel was founded in 1968, and it designs, produces, and sells all sorts of semiconductor products, like the microprocessors (the computer chips) that power all sorts of things these days.

Intel created the world's first commercial computer chip back in 1971, and in the 1990s, its products were in basically every computer that hit the market, its range and dominance expanding with the range and dominance of Microsoft's Windows operating system, achieving a market share of about 90% in the mid- to late 1990s.

Beginning in the early 2000s, though, other competitors, like AMD, began to chip away at Intel's dominance, and though it still boasts a CPU market share of around 67% as of Q2 of 2025, it has fallen way behind competitors like Nvidia in the graphics card market, and behind Samsung in the larger semiconductor market.

And that's a problem for Intel, as while CPUs are still important, the overall computing and high-tech gadget space has been shifting toward stuff that Intel doesn't make, or doesn't do well.

Smaller things, graphics-intensive things. Basically all the hardware that's powered the gaming, crypto, and AI markets, alongside the stuff crammed into increasingly small personal devices, are things that Intel just isn't very good at, and doesn't seem to have a solid means of getting better at. So it's a sort of aging giant in the computer world: still big and impressive, but with an outlook that keeps getting worse and worse with each new generation of hardware, and each new innovation that seems to require stuff it doesn't produce, or doesn't produce good versions of.

This is why, despite being a very unusual move, the US government's decision to buy a 10% stake in Intel for $8.9 billion didn't come as a total surprise.

The CEO of Intel had been raising the possibility of some kind of bailout, positioning Intel as a vital US asset, similar to all those banks and to GM: if it went under, it would mean the US losing a vital piece of the global semiconductor pie. The government already gave Intel $2.2 billion as part of the CHIPS and Science Act, which was signed into law under the Biden administration, and which was meant to shore up US competitiveness in that space, but that was a freebie; this new injection of resources wasn't free.

Response to this move has been mixed.
Some analysts think President Trump's penchant for netting the government shares in companies it does stuff for (as was the case with US Steel giving the US government a so-called 'golden share' in exchange for allowing the company to merge with Japan-based Nippon Steel, that share granting a small degree of governance authority within the company) is smart, as in some cases it may result in profits for a government that's increasingly underwater in terms of debt, and in others it gives some authority over future decisions, giving the government more levers to use, beyond legal ones, in steering these vital companies the way it wants to steer them.

Others are concerned about this turn of events, though, as it seems, theoretically at least, anti-competitive. After all, if the US government profits when Intel does well, now that it owns a huge chunk of the company, doesn't that incentivize the government to pass laws that favor Intel over its competitors? And even if the government doesn't do anything like that overtly, doesn't that create a sort of chilling effect on the market, making it less likely serious competitors will even emerge, because investors might be too spooked to invest in something that would be going up against a partially government-owned entity?

There are still questions about the legality of this move, as it may be that the CHIPS Act doesn't allow the US government to convert grants into equity, and it may be that shareholders will find other ways to rebel against the seeming high-pressure tactics from the White House, which included threats by Trump to force the firing of Intel's CEO, in part by withholding some of the company's federal grants, if he didn't agree to giving the government a portion of the company in exchange for assistance.

This also raises the prospect that Intel, like those other bailed-out companies, has become de facto too big to fail, which could lead to stagnation in the company, especially if the White House goes further in putting its thumb on the scale, forcing more companies, in the US and elsewhere, to do business with Intel, despite its often uncompetitive offerings.

While there's a chance that Intel takes this influx of resources and support and runs with it, catching up to competitors that have left it in the dust and rebuilding itself into something a lot more internationally competitive, there's also the chance that it continues to flail, but for much longer than it would have otherwise, because of that artificial support and government backing.
Show Notes:
https://www.reuters.com/legal/legalindustry/did-trump-save-intel-not-really-2025-08-23/
https://www.nytimes.com/2025/08/23/business/trump-intel-us-steel-nvidia.html
https://arstechnica.com/tech-policy/2025/08/intel-agrees-to-sell-the-us-a-10-stake-trump-says-hyping-great-deal/
https://en.wikipedia.org/wiki/General_Motors_Chapter_11_reorganization
https://www.investopedia.com/articles/economics/08/government-financial-bailout.asp
https://www.tomshardware.com/pc-components/cpus/amds-desktop-pc-market-share-hits-a-new-high-as-server-gains-slow-down-intel-now-only-outsells-amd-2-1-down-from-9-1-a-few-years-ago
https://www.spglobal.com/commodity-insights/en/news-research/latest-news/metals/062625-in-rare-deal-for-us-government-owns-a-piece-of-us-steel
https://en.wikipedia.org/wiki/Renault
https://en.wikipedia.org/wiki/State-owned_enterprises_of_the_United_States
https://247wallst.com/special-report/2021/04/07/businesses-run-by-the-us-government/
https://en.wikipedia.org/wiki/Nationalization
https://www.amtrak.com/stakeholder-faqs

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
The Great AI Paradox: AI-Monitored Farming

1. The Water Crisis: An Unintended Consequence, Not a Design, or Is It?

The water consumption of AI data centers is a legitimate and pressing concern, but it's a byproduct of a technology developed to process information and solve complex problems. The massive water demand is a result of:

Physical and Chemical Laws: To run powerful processors (CPUs, GPUs), you must dissipate heat. Water is an incredibly efficient medium for this. There's no way around the laws of thermodynamics, or is there?

Economic Incentives: Data centers are often built in places with cheap land and power. These places are not always water-rich. The companies that build them are driven by business goals, not by a global population control agenda. Their failure to consider long-term environmental consequences is a significant problem, but it's one of short-sightedness and profit motive, not a sinister plan, or is it?

Rapid Technological Advancement: The rapid and unexpected rise of generative AI caught many by surprise. The infrastructure to support it, including its massive water and energy needs, is still catching up. Companies are now scrambling to find sustainable solutions, such as using alternative water sources, but this is a reactive measure, not a planned part of the technology's initial design.

2. The Conflict with Traditional Agriculture: A Question of Transition and Economics

The potential for AI to displace hands-on farmers is a real concern, but it is a classic example of technological unemployment, a recurring theme throughout history, from the Industrial Revolution to the digital age. It is not an AI-specific plot to reduce the population. The conflict arises from:

Economic Efficiency: AI-assisted farming promises higher yields with less labor and water. From a purely economic standpoint, this is a desirable outcome. However, it fails to account for the social fabric of rural communities, where farming is not just a job but a way of life.

Inequality of Access: The high cost of AI technology in agriculture creates a divide between large, corporate farms that can afford it and small, family-owned farms that cannot. This can push small farmers out of business, leading to increased consolidation of agricultural land and control. This is a problem of market forces and access to capital, not a conspiracy.

#ScienceFiction, #AI, #Dystopian, #Future, #Mnemonic, #FictionalNarrative, #ReasoningModels, #Humanity, #War, #Genocide, #Technology, #ShortStory

Creative Solutions for Holistic Healthcare
https://www.buzzsprout.com/2222759/episodes/17708819
Shawn Tierney meets up with Tom Weingartner of PI (Profibus Profinet International) to learn about PROFINET and System Redundancy in this episode of The Automation Podcast. For any links related to this episode, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 244 Show Notes: Special thanks to Tom Weingartner for coming on the show, and to Siemens for sponsoring this episode so we could release it ad free on all platforms! To learn more about PROFINET, see the links below: PROFINET One-Day Training Slide Deck PROFINET One-Day Training Class Dates IO-Link Workshop Dates PROFINET University Certified Network Engineer Course Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Welcome back to the Automation Podcast. My name is Shawn Tierney from Insights In Automation, and I want to thank you for tuning back in this week. On this show, I had the opportunity to sit down with Tom Weingartner from PI to learn all about PROFINET. I reached out to him because I had some product vendors who wanted me to cover the S2 features in their products, and I thought it would be better to first sit down and get a refresher on what S2 is. It’s been five years since we’ve had a PROFINET expert on, so I figured now would be a good time before we start getting into how those features are used in different products. With that said, I also want to mention that Siemens has sponsored this episode, so it will be completely ad free. I love it when vendors sponsor the shows: not only do we break even on the show itself, we also get to release it ad free and make the video free as well. So thank you, Siemens. If you see anybody from Siemens, thank them for sponsoring the Automation Podcast. As a matter of fact, thank any vendor who’s ever sponsored any of our shows. We really appreciate them. One final PSA, which I also talked about yesterday on my show, Automation Tech Talk: as we’ve seen with Ethernet PLCs, a lot of micro PLCs that were $250 ten years ago are now $400. That’s a lot of inflation, for various reasons. So one of the things I did this summer is I took a look at my P&L, my profit and loss statements, and I just can’t hold my prices where they are and be profitable. If I’m not breaking even, the company goes out of business, and we’ll have no more episodes of the show. So how does this affect you? If you are a student over at the Automation School, you have until mid September to do any upgrades or purchase any courses at the 2020 prices. I don’t want to raise the prices, and I’ve held off as long as I can, but at some point you have to give in to what your vendors are charging you and raise your own prices. All my courses are buy once, keep forever, so this does not affect anybody who’s already enrolled in a course. In fact, those of you enrolled in my PLC courses are seeing updates every week now, and those who get the ultimate bundles are seeing new lessons added to the new courses, because you get preorder access plus some additional content.
So in any case, I want to reiterate: if you’re a vendor who has an old balance, or if you are a student who wants to buy a new course, please make your plans in the next couple of weeks, because in mid September I do have to raise the prices. I know a lot of people don’t get to the end of the show, so I wanted to give that PSA at the beginning. With that said, let’s jump right into this week’s podcast and learn all about PROFINET. I want to welcome to the show Tom from PROFIBUS/PROFINET North America. Tom, I really want to thank you for coming on the show. But before we jump in, could you first tell the audience a little bit about yourself? Tom Weingartner (PI): Yeah, sure. Absolutely, Shawn. I’m going to jump to the next slide and let everyone know. As Shawn said, my name is Tom, Tom Weingartner, and I am the technical marketing director at PI North America. I have a fairly broad set of experiences ranging from ASIC hardware and software design, and then I moved into things like avionics systems design. But it seemed like no matter what I was working on, it always centered around communication and control. That’s actually how I got into industrial Ethernet, and I branched out from protocols like MIL-STD-1553 and ARINC 429 to other serial-based protocols like PROFIBUS and Modbus. And, of course, that naturally led to PROFINET and the other Ethernet-based protocols. I also spent quite a few years developing time-sensitive networking solutions, but now I focus specifically on PROFINET and its related technologies. With that, I will jump into the presentation. Now that you know a little bit about me, let me tell you a little bit about our organization. We are PROFIBUS and PROFINET International, or PI for short. We are the global organization that created PROFIBUS and PROFINET, and we continue to maintain and promote these open communication standards. The organization started back in 1989 with PROFIBUS, followed by PROFINET in the early two thousands. Next came IO-Link, a communication technology for the last meter, and that was followed by omlox, a communication technology for wireless location tracking. And now, most recently, MTP, or Module Type Package, a communication technology for easier, more flexible integration of process automation equipment. We have grown worldwide to 24 regional PI associations, 57 competence centers, eight test labs, and 31 training centers. It’s important to remember that we are a global organization, because if you’re a global manufacturer, chances are there’s PROFINET support in the country in which you’re located, and you can get that support in the country’s native language. In the lower right part of the slide, we are showing our technologies under the PI umbrella. I really wanted to point out that all the technologies within the PI umbrella are supported by a set of working groups. These working groups are made up of participants from member companies, and they are the ones that actually create and update the various standards and specifications. Any of these working groups are open to any member company. PI North America is one of the 24 regional PI associations, and we were founded in 1994.
We are a nonprofit, member-supported organization where we think globally and act locally. Here in North America, we are supported by our local competence centers, training centers, and test labs. Competence centers provide technical support for things like protocol, interoperability, and installation questions. Training centers provide educational services such as training courses and hands-on lab work. And test labs are just that: labs that provide testing services and device certification. Any member company can be any combination of these three. You can see on the slide that the PROFI Interface Center is all three, while JCOM Automation is both a competence center and a training center. Here in North America, we are pleased to have HMS as a training center and Phoenix Contact as a competence center. Now, one thing everyone should be aware of is that every PROFINET device must be certified. So if you make a PROFINET device, you need to go to a test lab to get it certified, and here in North America you certify devices at the PROFI Interface Center. I think it’s important to begin our discussion today by talking about the impact digital transformation has had on factory networks. There has been an explosion of devices in manufacturing facilities, and it’s not uncommon for car manufacturers to have over 50,000 Ethernet nodes in just one of their factories. Large production cells can have over a thousand Ethernet nodes in them. The point is that all of these nodes increase the amount of traffic automation devices must handle. It’s not unrealistic for a device to have to deal with over 2,000 messages while it’s operating, while it’s trying to do its job. Emerging technologies like automated guided vehicles add a level of dynamics to the network architecture, because they’re constantly entering and leaving various production cells located in different areas of the factory. And, of course, as these factories become more and more flexible, networks must support adding and removing devices while the factory is operating. In response to this digital transformation, we have gone from rigid hierarchical systems using fieldbuses to industrial Ethernet-based networks where any device can be connected to any other device. This means devices at the field level can be connected to devices at the process control level, the production level, even the operations level and above. But this doesn’t mean that the requirements for determinism, redundancy, safety, and security are any less on a converged network. It means you need a network technology that supports these requirements, and this is where PROFINET comes in. To understand PROFINET, I think it’s instructive to start with the OSI model, since the OSI model defines networking and PROFINET is, of course, a networking technology. The OSI model is divided into seven layers, as I’m sure we are all familiar with by now, starting with the physical layer. This is where we get access to the wire, turning electrical signals into bits. Layer two is the data link layer, and this is where we turn bits into the bytes that make up an Ethernet frame. Layer three is the network layer, and this is where we turn Ethernet frames into IP packets.
I like to think about Ethernet frames being switched around a local area network, and IP packets being routed around a wide area network like the Internet. The next layer up is the transport layer, and this is where we turn IP packets into TCP or UDP datagrams. These datagrams are used based on the type of connection needed to route IP packets: TCP datagrams are connection-based, and UDP datagrams are connectionless. But regardless of the type of connection, we typically go straight up to layer seven, the application layer. This is where PROFINET lives, along with all the other Ethernet-based protocols you may be familiar with, like HTTP, FTP, SNMP, and so on. So then what exactly is PROFINET, and what challenges is it trying to overcome? The most obvious challenge is environmental: we need to operate in a wide range of harsh environments. And, obviously, we need to be deterministic, meaning we need to guarantee data delivery. But we have to do this in the presence of IT traffic, or non-real-time applications like web servers. We also can’t operate in a vacuum: we need to operate in a local area network and support getting data to wide area networks and up into the cloud. To overcome these challenges, PROFINET uses communication channels for speed and determinism. It uses standard, unmodified Ethernet, so multiple protocols can coexist on the same wire. We didn’t have this with fieldbuses, right? It was one protocol, one wire. But most importantly, PROFINET is an OT protocol running at the application layer, so it can maintain real-time data exchange, provide alarms and diagnostics to keep automation equipment running, and support topologies for reliable communication. So we can think of PROFINET as separating traffic into a real-time channel and a non-real-time channel. Messages with a particular EtherType (it’s actually 0x8892, though the number doesn’t matter for this discussion) go into the real-time channel, and messages with any other EtherType go into the non-real-time channel. We use the non-real-time channel for acyclic data exchange, and we use the real-time channel for cyclic data exchange. Cyclic data exchange with synchronization we classify as time-critical; without synchronization, it is classified as real time. But the point is that this is how we can use the same standard, unmodified Ethernet for PROFINET as for any other IT protocol: all messages living together, coexisting on the same wire. Taking this a step further, the real-time channel and the non-real-time channel are combined into a concept we call an application relation. Think of an application relation as a network connection for doing both acyclic and cyclic data exchange, and we do this between controllers and devices. This network connection consists of three different types of information to be exchanged, and we call these types of information communication relations. On the lower left part of the slide, you can see we have something called a record data communication relation; it’s essentially the non-real-time channel for acyclic data exchange, used to pass information like configuration, security, and diagnostics.
The IO data communication relation is part of the real-time channel, for doing the cyclic data exchange we need to periodically update controller and device IO data. And finally, we have the alarm communication relation. This is also part of the real-time channel, and it’s used for alerting the controller to device faults as soon as they occur, or when they get resolved. Now, on the right part of the slide we can see some use cases for application relations. These use cases are either a single application relation for controller-to-device communication, with an optional application relation for dynamic reconfiguration; an application relation for something we call shared device; and, of course, the reason we are here today talking about application relations: system redundancy. We’ll get into these use cases in more detail in a moment. But first, I wanted to point out that when we talk about messages being non-real-time, real-time, or time-critical, what we’re really doing is specifying a level of network performance. Non-real-time performance has cycle times above one hundred milliseconds, but we also use this term to indicate that a message may have no cycle time at all; in other words, acyclic data exchange. Real-time performance has cycle times in the one to ten millisecond range, though that range can extend up to one hundred milliseconds. Time-critical performance has cycle times of less than a millisecond, and it’s not uncommon to have cycle times around two hundred and fifty microseconds or less. Most applications are either real-time or non-real-time, while high-performance applications are considered time-critical. These applications use time synchronization to guarantee data arrives exactly when needed, but we also must ensure that the network is open to any Ethernet traffic. We achieve time-critical performance for the most demanding applications, like high-speed motion control, by adding four features to basic PROFINET, and we call this PROFINET isochronous real time, or PROFINET IRT. These added features are synchronization, node arrival time, scheduling, and time-critical domains. Now, IRT has been around since 2004, but in the future PROFINET will move to a new set of IEEE Ethernet standards called time-sensitive networking, or TSN. PROFINET over TSN will have the same functionality and performance as PROFINET IRT, but will be able to scale to faster and faster networks as bandwidth increases. This chart shows the differences between PROFINET RT, IRT, and TSN. The main difference is, obviously, synchronization and the other features that guarantee data arrives exactly when needed. Notice under the PROFINET IRT column that the bandwidth for PROFINET IRT is 100 megabits per second, while the bandwidth for PROFINET RT and TSN is scalable. Also, for those device manufacturers out there looking to add PROFINET IRT to their products, there are lots of ASICs and other solutions available in the market with IRT capability. Alright, so let’s take a minute to summarize all of this. We have a single infrastructure for doing real-time data exchange along with non-real-time information exchange, and PROFINET uses the same infrastructure as any Ethernet network.
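For readers who think in code, here is a minimal sketch, in Python and purely illustrative rather than part of any real PROFINET stack, of the two ideas just described: separating frames into the real-time and non-real-time channels by EtherType, and mapping a cycle time onto the non-real-time, real-time, or time-critical performance classes. The 0x8892 EtherType is PROFINET's registered value; the frame layout assumes an untagged Ethernet II frame, and the class boundaries follow the rough figures given in the episode, not a normative specification.

```python
# Illustrative sketch only: modeling PROFINET-style frame classification
# and performance classes. Not a real PROFINET stack.

PROFINET_ETHERTYPE = 0x8892  # EtherType used by PROFINET real-time frames

def channel_for(frame: bytes) -> str:
    """Pick the channel by EtherType (untagged Ethernet II assumed:
    6-byte destination MAC + 6-byte source MAC, then 2-byte EtherType)."""
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == PROFINET_ETHERTYPE:
        return "real-time"        # cyclic IO data and alarms
    return "non-real-time"        # everything else: HTTP, SNMP, acyclic records

def performance_class(cycle_time_ms):
    """Map a cycle time in milliseconds (None = acyclic) to a class.
    Boundaries follow the rough figures from the episode."""
    if cycle_time_ms is None:
        return "non-real-time"    # acyclic data exchange: no cycle time at all
    if cycle_time_ms < 1.0:
        return "time-critical"    # e.g. ~250 us motion control (IRT / TSN)
    if cycle_time_ms <= 100.0:
        return "real-time"        # typically 1-10 ms, extending to 100 ms
    return "non-real-time"

# A frame carrying EtherType 0x8892 lands in the real-time channel:
demo = bytes(12) + PROFINET_ETHERTYPE.to_bytes(2, "big") + bytes(46)
assert channel_for(demo) == "real-time"
assert performance_class(0.25) == "time-critical"
assert performance_class(4) == "real-time"
assert performance_class(None) == "non-real-time"
```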
Machines that speak PROFINET do so using network connections called application relations, and these messages coexist with all other messages, so information can pass from devices to machines, to factories, to the cloud, and back. If you take away nothing else from this podcast today, it is the word coexistence: PROFINET coexists with all other protocols on the wire. So let’s start talking about the main topic, system redundancy, and why we got into talking about PROFINET at all. Why do we need system redundancy and things like application relations and dynamic reconfiguration? Well, one of the things we’re pretty proud of with PROFINET is not only the depth of its capabilities, but also the breadth of its capabilities. With the lines blurring between what’s factory automation, what’s process automation, and what’s motion control, we are seeing all three types of automation appearing in a single installation, so we want to make sure PROFINET meets requirements across the entire range of industrial automation. Let’s start by looking at the differences between process automation and factory automation, and then we’ll get into the details. First off, process signals typically change more slowly, on the order of hundreds of milliseconds versus tens of milliseconds in factory automation. Process signals often need to travel longer distances, potentially into hazardous or explosive areas. And with process plants operating twenty-four seven, three sixty-five, systems must provide high availability and support changes while the plant is in production. This is where system redundancy and dynamic reconfiguration come in; we’ll discuss these again in just a minute. I just wanted to finish off this slide by saying that an e-stop is usually not possible, because while you can turn off the automation, that’s not necessarily going to stop the chemical reaction or whatever from proceeding. Sensors and actuators in process automation are also more complex; typically we call them field instruments. And process plants have many, many more IO points, tens of thousands of IO, usually controlled by a DCS. So when we talk about system redundancy, I actually like to call it scalable system redundancy, because it isn’t just one thing. This is where we add components to the network to increase the level of system availability. There are four possibilities: S1, S2, R1, and R2. The letter indicates whether there are single or redundant network access points, and the number indicates how many application relations are supported by each network access point. Think of the network access point as a physical interface to the network, and, from our earlier discussion, think of an application relation as a network connection between a controller and a device. S1 is where each device has a single network access point with one application relation connected to one controller. S2 is where we also have single network access points, but with two application relations now connected to different controllers. R1 is where we have redundant network access points, but each of these redundant network access points has only one application relation, and those are connected to different controllers.
And finally, we can go over the top with R2, where we have redundant network access points, each with two application relations connected to different controllers. Shawn Tierney (Host): You know, I want to just stop here and talk about S2. For the people who are listening, which I know is about a quarter of you guys out there, think of S2 as having a primary controller and a secondary controller. If you’re seeing the screen, you can see I’m reading the slide. So you have your primary and secondary controllers, one of each: the primary controller has application relation one, and the secondary has application relation two. And each device connected on the Ethernet has both one and two. So maybe you have a rack of IO out there; it needs to talk to both the primary controller and the secondary controller. To me, that is kind of like your classic redundant PLC system, where you have two PLCs and a bunch of IO, and each piece of IO has to talk to both the primary and the secondary, so if the primary goes down, the secondary can take over. I think that’s why there’s so much interest in S2, because that is the classic example. Now, Tom, let me turn it back to you. Would you say I’m right on that? Tom Weingartner (PI): Spot on. I think it’s great, and it really emphasizes the point that there’s that one physical connection on the network access point, but now we have two connections in that physical access point. So you can have one of those connections go to the primary controller and the other one to the secondary controller, and in case one of those controllers fails, the device can still get the information it needs. So, yep, that’s how we do that. And just to put a finer point on R1: if you think about it, it’s S2, but all we’ve done is split the physical interface. One of the physical interfaces has one of the connections, and the other physical interface has the other connection. So you really have the same level of redundant functionality, the backup functionality with the secondary controller, but here you’re using multiple physical interfaces. Shawn Tierney (Host): Now let me ask you about that. As I look at R1, it seems like they connect port one on each device to switch number one, which in this case would be the green switch, and port number two of each device to switch number two, which is the blue switch. Would it be typical to have separate switches, a different switch for each port? Tom Weingartner (PI): It doesn’t have to be. I think we chose to show it like this for simplicity, kind of to Shawn Tierney (Host): Oh, I don’t care. Tom Weingartner (PI): emphasize the point that here’s the second port going to the secondary controller, and here’s the first port going to the primary controller. We just wanted to emphasize that point, because sometimes these diagrams can be a bit confusing. Shawn Tierney (Host): And you may have an application that doesn’t require redundant switches, depending on maybe the MTBF of the switch itself or the failure mode of your IO. Okay, I’m with you. Go ahead. Tom Weingartner (PI): Yep. Good. Alright. So, I think that’s some excellent detail on that.
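To pin down the S1/S2/R1/R2 naming that Shawn and Tom just walked through, here is a tiny Python model. It is hypothetical and just for illustration: the letter encodes single versus redundant network access points, the number encodes application relations per access point, and every class beyond S1 reaches both a primary and a secondary controller.

```python
# Sketch of the scalable system redundancy naming scheme. The letter says
# whether a device has single (S) or redundant (R) network access points;
# the number says how many application relations each access point carries.
from dataclasses import dataclass

@dataclass
class RedundancyClass:
    access_points: int          # 1 = "S", 2 = "R"
    ars_per_access_point: int   # application relations per access point

    @property
    def name(self) -> str:
        return ("S" if self.access_points == 1 else "R") + str(self.ars_per_access_point)

    @property
    def controllers_reachable(self) -> int:
        # S2, R1, and R2 all reach both a primary and a secondary controller.
        return min(2, self.access_points * self.ars_per_access_point)

for cls in (RedundancyClass(1, 1), RedundancyClass(1, 2),
            RedundancyClass(2, 1), RedundancyClass(2, 2)):
    print(cls.name, "->", cls.controllers_reachable, "controller(s)")
# S1 -> 1, S2 -> 2, R1 -> 2, R2 -> 2
```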
So, if you don’t have any other questions, let’s move on to the next slide. You saw in the previous slide how system redundancy supports high availability by increasing system availability using these network access points and application relations. But we can also support high availability by using network redundancy. The way PROFINET supports network redundancy is through the use of ring topologies, and we call this media redundancy. The reason we use rings is that if a cable breaks, or the physical connection somehow breaks, or even a device fails, the network can revert back to a line topology, keeping the system operational. However, supporting network redundancy with rings means we can’t use protocols typically used in IT networks like STP and RSTP. This is because STP and RSTP actually prevent network redundancy by blocking redundant paths in order to keep frames from circulating forever in the network. So in order for PROFINET to support rings, we need a way to prevent frames from circulating forever in the network, and to do this we use a protocol called the media redundancy protocol, or MRP. MRP uses one media redundancy manager for each ring, and the rest of the devices are called media redundancy clients. Managers are typically controllers or PROFINET switches, and clients are typically the devices in the network. The way it works is this: a manager periodically sends test frames around the ring to check its integrity. If the manager doesn’t get the test frame back, there’s a failure somewhere in the ring. The manager then notifies the clients about this failure and sets the network to operate as a line topology until the failure is repaired. And that’s how we get network redundancy with our media redundancy protocol. Alright, so now you can see how system redundancy and media redundancy both support high availability. System redundancy does this by increasing system availability, while media redundancy does this by increasing network availability. Obviously, you can use one without the other, but by combining system redundancy and media redundancy we can increase the overall system reliability. For example, here we are showing different topologies for S1 and S2, similar to the topologies on the previous slide. Notice that for S1 we can only have media redundancy, because there isn’t a secondary controller to provide system redundancy. S2 is where we combine system redundancy and media redundancy by adding an MRP ring. But I wanted to point out that even though we’re showing this MRP ring as a possible topology, there really are other topologies possible; it depends on the level of system reliability you’re trying to achieve. Likewise, on this next slide we are showing two topologies for adding media redundancy to R1 and R2. For R1 we’ve chosen, again probably for simplicity’s sake, to add an MRP ring for each redundant network access point. With R2 we do the same thing, an MRP ring for each redundant network access point, but we also add a third MRP ring for the controllers.
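The MRP behavior Tom describes (the manager circulates test frames; a missing frame means a broken ring; clients are notified and the ring degrades to a working line topology) can be sketched as a toy state machine. All names here are hypothetical; real MRP exchanges MRP_Test and topology-change frames at the data link layer, not Python objects.

```python
# Toy model of the Media Redundancy Protocol behavior described above.
# Hypothetical names; illustration only.

class MRPClient:
    def on_topology_change(self):
        # Real clients flush their learned MAC address tables here.
        print("client: flushing learned MAC addresses")

class MRPManager:
    def __init__(self, clients):
        self.clients = clients
        self.topology = "ring"   # second ring port logically blocked

    def send_test_frame(self, ring_intact: bool) -> None:
        # In real MRP, test frames go out both ring ports at a fixed
        # interval; here a boolean stands in for "the frame came back".
        if ring_intact:
            self.topology = "ring"
            return
        for client in self.clients:
            client.on_topology_change()   # manager notifies the clients
        self.topology = "line"            # forward on both ports: line topology

mgr = MRPManager([MRPClient(), MRPClient()])
mgr.send_test_frame(ring_intact=False)    # simulate a broken cable
print(mgr.topology)                       # -> line
```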
Now, this is really just to emphasize the point that you can come up with just about any topology, because it really depends on the number of ports on each device, the number of switches in the network, and, again, your overall system reliability requirements. So, in order to keep process plants operating twenty-four seven, three sixty-five, dynamic reconfiguration is another use case for application relations. This is where we can add or remove devices on the fly while the plant is in production. If you think about it, typically when there is a new configuration for the PLC, the PLC first has to go into stop mode, then receive the configuration, and then it can go back into run mode. Well, this doesn’t work in process automation, because we’re trying to operate twenty-four seven, three sixty-five. So with dynamic reconfiguration, the controller continues operating with its current application relation while it sets up a new application relation; it’s really establishing a new network connection. The controller then switches over to the new application relation after the new configuration is validated. Once we have this validation and the configuration is good, the controller removes the old application relation and continues operating, all while staying in run mode. Pretty handy stuff for supporting high availability. Now, one last topic regarding system redundancy and dynamic reconfiguration, because these two PROFINET capabilities are compatible with a new technology called single pair Ethernet, which provides power and data over just two wires. This version of Ethernet is now part of the IEEE 802.3 standard, referred to as 10BASE-T1L. 10BASE-T1L is the non-intrinsically-safe version of two-wire Ethernet. To support intrinsic safety, 10BASE-T1L was enhanced by an additional standard called Ethernet-APL, or advanced physical layer. When we combine PROFINET with this Ethernet-APL version of 10BASE-T1L, we simply call it PROFINET over APL. It not only provides power and data over the same two wires, but also supports long cable runs up to a kilometer and 10 megabit per second communication speeds, and it can be used in all hazardous areas. Intrinsic safety is achieved by ensuring both the Ethernet signals and the power on the wire are within explosion-safe levels. And even with all this, system redundancy and dynamic reconfiguration work seamlessly with this new technology we call PROFINET over APL. Now, one thing I’d like to close with is a final thought regarding a new technology I think everyone should become aware of. It’s emerging in the market, it’s quite new, and it’s called MTP, or Module Type Package. This is a technology being applied first to use cases considered to be a hybrid of both process automation and factory automation. What MTP does is apply OPC UA information models to create standardized, non-proprietary, application-level descriptions for automation equipment. These descriptions simplify the communication between equipment and the control system by modularizing the process into more manageable pieces. So really, the point is to construct a factory with modular equipment to simplify integration and allow for better flexibility should changes be required.
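Before moving on, the dynamic-reconfiguration sequence described a moment ago (keep the old application relation running, bring up and validate a new one, switch over, drop the old one, never leave run mode) can be sketched as make-before-break logic. This is a hypothetical illustration under those assumptions, not vendor code.

```python
# Hypothetical make-before-break sketch of dynamic reconfiguration.
# The old application relation (AR) keeps exchanging data until the
# new one has been established and validated; only then is it dropped.

class Controller:
    def __init__(self, active_ar: str):
        self.active_ar = active_ar
        self.mode = "run"                    # never leaves run mode

    def reconfigure(self, new_ar: str, validate) -> bool:
        # Step 1: set up the new AR while the old one keeps running.
        candidate = new_ar
        # Step 2: validate the new configuration before switching.
        if not validate(candidate):
            return False                     # keep the old AR, still in run mode
        # Step 3: switch over, implicitly removing the old AR.
        self.active_ar = candidate
        assert self.mode == "run"            # the plant never stopped
        return True

ctrl = Controller(active_ar="AR-1")
if ctrl.reconfigure("AR-2", validate=lambda ar: True):
    print("switched to", ctrl.active_ar)     # -> switched to AR-2
```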
Now, with the help of the process orchestration layer and this OPC UA connectivity, MTP-enabled equipment can plug and operate, reducing the time to commission a process or make changes to that process. This is pretty cutting-edge stuff, and I think you’re going to hear a lot more about MTP in the near future. Alright, so it’s time to wrap things up with a summary of all the resources you can use to learn even more about PROFINET. One of the things you can do is get access to the PROFINET one-day training class slide deck by going to profinet2025.com, entering your email, and downloading the slides in PDF format. What’s really handy is that all of the links in the PDF are live, so information is just a click away. We also have our website, us.profinet.com. It has white papers, application stories, webinars, and documentation, including access to all of the standards and specifications. This is truly your one-stop shop for locating everything about PROFINET. We do our PROFINET one-day training classes and IO-Link workshops all over the US and parts of Canada, so if you are interested in attending one of these, you can always find the next city we are going to by clicking on the training links at the bottom of the slide. Shawn Tierney (Host): Hey, guys. Shawn here. I just wanted to jump in for a minute for the audio audience to give you that website. It’s us.profinet.com/odtc, that’s oscar delta tango charlie. I also went and pulled up the website, which you can see if you’re watching. For those listening, these one-day PROFINET courses are coming to Phoenix, Arizona, August 26; Minneapolis, Minnesota, September 10; Newark and New York City, September 25; Greenville, South Carolina, October 7; Detroit, Michigan, October 23; Portland, Oregon, November 4; and Houston, Texas, November 18. So with that said, let’s jump back into the show. Tom Weingartner (PI): Now, one of our most popular resources is PROFINET University. This website structures information into little courses, and you can proceed through them at your own pace. You can go lesson by lesson, or you can jump around. You can even decide which course to take based on a difficulty tag. Definitely make sure to check out this resource. We also have lots of great webinars archived on the website. Some of these webinars rehash what we covered today, while others expand on it, but in either case, make sure you share them with your colleagues, especially if they’re interested in any of the topics listed on the slide. And finally, the certified network engineer course is the next logical step if you would like to dive deeper into the technical details of PROFINET. It is a week long, held in Johnson City, Tennessee, and it features hands-on lab work. And if you would like us to provide training to eight or more students, we can even come to your site. If you would like more details about any of this, please head to the website to learn more. And with that, Shawn, I think that is my last slide, and I’ve covered the topics we wanted to cover today. Shawn Tierney (Host): Yeah. And I just want to point out to you guys that this training goes all around the US. I definitely recommend getting out there. If you’re using PROFINET and you want to get some training, they usually fill the room, like 50 to 100 people, and they do this every year.
So check those dates out. If you need to get some hands-on time with PROFINET, I would definitely check those out, and of course we’ll have all the links in the description. I also want to thank Tom for that slide really defining S1 versus S2 versus R1 and R2. A lot of people say “we have S2 compatibility,” and as a matter of fact, we’re going to be looking at some products with S2 compatibility here in the future; it helps to understand what that actually means. When somebody just says S2, it’s like, what does that mean? For you guys listening, I thought that slide really lays it out: this is what it means. From my perspective, it means you’re supporting redundant controllers. So if you have an S2 setup with redundant, seamless controllers, or CPUs, then that product will support it. And that’s important, right? Because if you had a product that didn’t support it, it’s not going to work with your application. And Ethernet-APL is such a big deal in process, because of the distance, and the fact that it’s intrinsically safe and supports all those zones and areas. All the instrumentation people are on board: the Rosemounts, the Fishers, the Endress+Hausers, everybody is on that working group. We’ve covered that on the news show many times, and it’s very interesting to see where it goes, but I think it’s going to take over that part of the industry. So, Tom, was there anything else you wanted to cover in today’s show? Tom Weingartner (PI): No, I think that really puts a fine finale on this. I did want to emphasize that point about network redundancy being compatible with system redundancy, so you can really hone in on what your system reliability requirements are. And also, the PROFINET over APL piece of it is completely compatible with PROFINET in and of itself, so you don’t have to worry about it not supporting system redundancy or anything of the like, even if you wanted to put redundant devices out there. So I think that’s about it. Shawn Tierney (Host): Alright. Well, again, thank you so much for coming on. We look forward to trying out some of these S2 PROFINET devices in the near future, and I really wanted to have you on first to lay the groundwork for us, so I really appreciate it. Tom Weingartner (PI): No problem. Thank you for having me. Shawn Tierney (Host): Well, I hope you guys enjoyed that episode. I did. I enjoyed sitting down with Tom, getting up to date on all those different products, and it’s great to know they have all these free hands-on training days coming across the United States. And what a great refresher from the original 2020 presentation that we had somebody from Siemens do. So I really appreciate Tom coming on. And speaking of Siemens, I’m so thankful they sponsored this episode so we could release it ad free and make the video free to everybody. Please, if you see Siemens or any of the vendors who sponsor our episodes, please thank them from us. It really helps us keep the show going.
Speaking of keeping the show going, just a reminder: if you’re a student or a vendor, price increases will hit in mid September. So if you’re a student and you want to buy another course, now is the time to do it. If you’re a vendor and you have an existing balance, you will want to schedule those podcasts before mid September, or else you’ll be subject to the price increase. With that said, I also want to remind you that I have a new podcast, Automation Tech Talk. I’m reusing the old Automation News Headlines podcast feed, so if you already subscribed to that, you’re just going to get the new show for free. It’s also on The Automation Blog, on YouTube, and on LinkedIn. I’m doing it as a live stream every lunchtime, just talking about what I learned in the last week, little tidbits here and there. And I want to hear from you guys too; as a matter of fact, I already had Giovanni come on and do an interview with me, so at some point I’ll schedule that as a lunchtime podcast for Automation Tech Talk. It still shows up as Automation News Headlines, I think, so at some point I’ll have to find time to edit that and change the name. In any case, I think I’ve covered everything. I want to thank you guys for tuning in. I really appreciate you. You’re the best audience in the podcast world, or the video world, whichever way you want to look at it. Please feel free to send me emails, write to me, and leave comments; I love to hear from you guys, and I wish you all good health and happiness. Until next time, my friends, peace ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.
Host: Sebastian Hassinger. Guest: Andrew Dzurak (CEO, Diraq). In this enlightening episode, Sebastian Hassinger interviews Professor Andrew Dzurak. Andrew is the CEO and co-founder of Diraq and concurrently a Scientia Professor in Quantum Engineering at UNSW Sydney, an ARC Laureate Fellow, and a Member of the Executive Board of the Sydney Quantum Academy. Diraq is a quantum computing startup, based in Australia, pioneering silicon spin qubits. The discussion delves into the technical foundations, manufacturing breakthroughs, scalability, and future roadmap of silicon-based quantum computers, all with an industrial and commercial focus. Key Topics and Insights: 1. What Sets Diraq Apart. Diraq's quantum computers use silicon spin qubits, differing from the industry's more familiar modalities like superconducting, trapped ion, or neutral atom qubits. Their technology leverages quantum dots: tiny regions where electrons are trapped within modified silicon transistors. The quantum information is encoded in the spin direction of these trapped electrons, a method with roots stretching back over two decades. 2. Manufacturing & Scalability. Diraq modifies standard CMOS transistors, making qubits that are tens of nanometers in size, compared to the much larger superconducting devices. This means millions of qubits can fit on a single chip. The company recently demonstrated high-fidelity qubit manufacturing on standard 300mm wafers at commercial foundries (GlobalFoundries, IMEC), matching or surpassing previous experimental results, with all fidelity metrics above 99%. 3. Architectural Innovations. Diraq's chips integrate both quantum and conventional classical electronics side by side, using standard silicon design toolchains like Cadence. This enables leveraging existing chip design and manufacturing expertise, speeding progress towards scalable quantum chips. Movement of electrons (and thus qubits) across the chip uses CMOS bucket-brigade techniques, similar to charge-coupled devices. This means fast (
This week on the podcast we go over our reviews of the Valkyrie V360 Lite Liquid CPU Cooler and the Acer FA200 Gen4 Solid State Drive. We also discuss AMD possibly coming out with dual 3D V-Cache CPUs, AMD's AM6 socket, a $100 price tag for GTA 6, and all of the Battlefield 6 news!
“If we build it, will they come?” Jensen said: “If we don't build it, they can't come.” “You have to believe in what you believe, and you have to pursue that belief.” “This is at the core of our company.” The “big bet” Jensen Huang made 30 years ago: by inventing the technology AND the market at the same time, Jensen aimed to expand, augment, and accelerate general purpose computing CPUs with specialized algorithms for the video game niche. Jensen Huang had the foresight, decades ago, to create CUDA, a compatible accelerated computing architecture that became a pillar of today's AI advancement. The visual computing hardware platform invented in 1994 demanded that Nvidia grow the other parts of the “flywheel”: the developer ecosystem, the install base, and the subsequent demand for the GPUs Nvidia invented. Read it as a 5-min blog Watch it as a 12-min video ©Joanne Z. Tan all rights reserved. Please don't forget to like it, comment, or better, SHARE IT WITH OTHERS! - To stay in the loop, subscribe to our Newsletter (About 10 Plus Brand: we deliver the “whole 10 yards” of brand building, digital marketing, and content creation for business and personal brands. To contact us: 1-888-288-4533.) - Visit our Websites: https://10plusbrand.com/ https://10plusprofile.com/ Phone: 888-288-4533 - Find us online by clicking or follow these hashtags: #10PlusBrand #10PlusPodcast #JoanneZTan #10PlusInterviews #BrandDNA #BeYourOwnBrand #StandForSomething #SuperBowlTVCommercials #PoemsbyJoanneTan #GenuineVideo #AIXD #AIExperienceDesign #theSecondRenaissance #2ndRenaissance
Not only do we never underestimate the power of sunglasses, we bring you another show after a "sick" week off. We've got some external storage to review, Threadripper high-wattage benchmarks, and some Zen time on top of all the other high quality news items and spontaneous commentary you know you want. And need. Topics below.
Timestamps:
00:00 Intro
01:04 Patreon
01:35 Food with Josh
03:24 Next-gen Radeon may have 96 CUs, 384-bit memory
14:18 Threadripper PRO 9995WX's insane Cinebench score (and power draw)
17:57 AM5 motherboards revised for Zen 6 CPUs?
22:55 We mention an exhaustive study of AMD memory speeds
28:30 NVIDIA adding native RISC-V support to CUDA
30:19 Each of us blocks Wi-Fi in our own special way
33:49 MAINGEAR goes retro
39:34 Self-destructing SSDs
42:03 Belkin notifies users that Wemo products will be bricked
45:22 (In)Security Corner
1:01:26 Gaming Quick Hits
1:12:00 Crucial X10 Portable SSD review
1:16:52 Picks of the Week
1:26:42 Outro
★ Support this podcast on Patreon ★
Any donation is greatly appreciated! 47e6GvjL4in5Zy5vVHMb9PQtGXQAcFvWSCQn2fuwDYZoZRk3oFjefr51WBNDGG9EjF1YDavg7pwGDFSAVWC5K42CBcLLv5U OR DONATE HERE: https://www.monerotalk.live/donate TODAY'S SHOW: In this episode of Monero Talk, legal expert Zach Shapiro joins Douglas Tuman to discuss U.S. cryptocurrency legislation, the legal challenges facing privacy tech, and the philosophical divide between building unstoppable systems versus working within regulatory frameworks. Shapiro, who runs a crypto-focused law firm and is involved with the Bitcoin Policy Institute and Peer-to-Peer Rights Foundation, outlines recent bills in Congress—including the Clarity Act and Genius Act—and their implications for developers and privacy advocates. He and Doug debate Bitcoin vs. Monero, focusing on fungibility and censorship resistance, with Shapiro defending Bitcoin's legal positioning and Doug championing Monero's privacy features. The episode also covers ongoing cases like Tornado Cash, the status of Samurai Wallet, and efforts to repeal New York's restrictive BitLicense. TIMESTAMPS: (00:02:12) – Introduction to Zach's background and involvement with the Bitcoin Policy Institute, Peer-to-Peer Rights Foundation. (00:08:13) – Zach's perspective on various technologies: Bitcoin, stablecoins, DAOs. (00:12:21) – Debate on fungibility: Bitcoin vs Monero. (00:17:09) – Is Bitcoin functionally fungible? Legal and policy perspectives. (00:20:00) – Cash vs Bitcoin legal treatment in cases of stolen funds. (00:28:57) – Mining decentralization: ASICs, CPUs, regulatory capture. (00:33:18) – Zach's overall take on Monero vs Bitcoin. (00:36:15) – Explanation of 3 key crypto-related bills (Genius Act, Clarity Act, Anti-CBDC Bill) (00:43:23) – Implications of Section 110 for privacy developers. (00:46:25) – Concerns over Genius Act enabling “backdoor CBDC.” (00:53:00) – What would Satoshi think about current crypto laws and stablecoins? (00:58:02) – Genius Act's effect on algorithmic stablecoins (likely banned). (01:02:12) – Genius Act vs Clarity Act: Pros and cons for Monero. (01:06:01) – Eliminating capital gains for crypto use — is it possible? (01:07:50) – Comments on the Bank Secrecy Act, impact of Clarity Act for Monero, NY's BitLicense, and Monero exchange bans. (01:11:18) - Closing Remarks GUEST LINKS: https://x.com/zackbshapiro Purchase Cafe & tip the farmers w/ XMR! https://gratuitas.org/ Purchase a plug & play Monero node at https://moneronodo.com SPONSORS: Cakewallet.com, the first open-source Monero wallet for iOS. You can even exchange between XMR, BTC, LTC & more in the app! Monero.com by Cake Wallet - ONLY Monero wallet (https://monero.com/) StealthEX, an instant exchange. Go to (https://stealthex.io) to instantly exchange between Monero and 450 plus assets, w/o having to create an account or register & with no limits. WEBSITE: https://www.monerotopia.com CONTACT: monerotalk@protonmail.com ODYSEE: https://odysee.com/@MoneroTalk:8 TWITTER: https://twitter.com/monerotalk FACEBOOK: https://www.facebook.com/MoneroTalk HOST: https://twitter.com/douglastuman INSTAGRAM: https://www.instagram.com/monerotalk TELEGRAM: https://t.me/monerotopia MATRIX: https://matrix.to/#/%23monerotopia%3Amonero.social MASTODON: @Monerotalk@mastodon.social MONERO.TOWN: https://monero.town/u/monerotalk
After what seems like ages, but was actually only a week off, we are BACK. Enjoy what some have called "PCPer's greatest podcast episode of all time, even if it was kind of a slow news cycle". It's the energy, really. Have some news on AMD Threadrippers, Intel ARC, depressing Microsoft news and even Google Earth!
00:00 Intro (with show and tell)
04:29 Patreon
05:57 Food with Josh
07:49 AMD launches Ryzen Threadripper PRO 9000 and Radeon AI PRO 9000
13:38 Next Intel desktop CPUs to offer more cores, more lanes, 100W less power?
21:37 Intel Arc A750 LE goes EOL
22:51 TPU does some interesting IPC testing with recent GPU architectures
26:35 PSA: Newest Steam overlay lets you track generated frames
27:11 A new Sound Blaster
31:12 A trio of Microsoft stories - mostly depressing
40:28 Google Earth turns 20
42:50 Podcast sponsor
44:23 (In)Security Corner
56:22 Gaming Quick Hits
1:03:48 Picks of the Week
1:16:48 Outro (it just sort of ends)
★ Support this podcast on Patreon ★
The future of AI isn't coming; it's already here. With NVIDIA's recent announcement of forthcoming 600kW+ racks, alongside the skyrocketing power costs of inference-based AI workloads, now's the time to assess whether your data center is equipped to meet these demands. Fortunately, two-phase direct-to-chip liquid cooling is prepared to empower today's AI boom, and to accommodate the next few generations of high-powered CPUs and GPUs. Join Accelsius CEO Josh Claman and CTO Dr. Richard Bonner as they walk through the ways in which their NeuCool™ 2P D2C technology can safely and sustainably cool your data center. During the webinar, Accelsius leadership will illustrate how NeuCool can reduce energy use by up to 50% vs. traditional air cooling, drastically slash operational overhead vs. single-phase direct-to-chip, and protect your critical infrastructure from any leak-related risks. While other popular liquid cooling methods require constant oversight or designer fluids to maintain peak performance, two-phase direct-to-chip technologies require less maintenance and lower flow rates to achieve better results. Beyond a thorough overview of NeuCool, viewers will take away these critical insights: the deployment of Accelsius' Co-Innovation Labs, global hubs enabling data center leaders to witness NeuCool's thermal performance capabilities in real-world settings; our recent testing at 4500W of heat capture, the industry record for direct-to-chip liquid cooling; how Accelsius has prioritized resilience and stability in the midst of global supply chain uncertainty; and our upcoming launch of a multi-rack solution able to cool 250kW across up to four racks. Be sure to join us to discover how two-phase direct-to-chip cooling is enabling the next era of AI.
John is joined by Spencer Collins, Executive Vice President and Chief Legal Officer of Arm Holdings, the UK-based semiconductor design firm known for powering over 99% of smartphones globally with its energy-efficient CPU designs. They discuss the legal challenges that arise from Arm's unique position in the semiconductor industry. Arm has a unique business model, centered on licensing intellectual property rather than manufacturing processors. This model is evolving as Arm considers moving “up the stack,” potentially entering into processor production to compete more directly in the AI hardware space. Since its $31 billion acquisition by SoftBank in 2016, Arm has seen tremendous growth, culminating in an IPO in 2023 at a $54 billion valuation, and its market value has nearly doubled since. AI is a major strategic focus for Arm, as its CPUs are increasingly central to AI processing in cloud and edge environments. Arm's high-profile AI projects include Nvidia's Grace Hopper superchip and Microsoft's new AI server chips, both of which rely heavily on Arm CPU cores. Arm is positioned to be a key infrastructure player in AI's future based on its broad customer base, the low power consumption of its semiconductors, and their extensive security features. Nvidia's proposed $40 billion acquisition of Arm collapsed due to regulatory pushback in the U.S., Europe, and China. This led SoftBank to pivot to taking 10% of Arm public. Arm is now aggressively strengthening its intellectual property strategy, expanding patent filings, and upgrading legal operations to better protect its innovations in the AI space. Spencer describes his own career path, from law firm M&A work to a leadership role at SoftBank's Vision Fund, where he worked on deals like the $7.7 billion Uber investment, culminating in his current post. He suggests that general counsel for major tech firms must be intellectually agile, invest in best-in-class advisors, and maintain geopolitical awareness to navigate today's rapidly changing legal and regulatory landscape.
Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi
A handheld Xbox that's really an ROG Ally with a new Ryzen processor?? An LCD that actually NEEDS bright sunlight like a Game Boy Color?? (Oh, and Josh's legendary food segment.) There's some EVGA sad news mixed in there with a cool new GOG feature and too many security stories.
Timestamps:
00:00 Intro
00:39 Patreon
01:20 Food with Josh
03:30 ASUS ROG Xbox Ally handhelds have new AMD Ryzen Z2 processors
06:51 Nintendo sold a record number of Switch 2 consoles
08:37 NVIDIA N1X competitive with high-end mobile CPUs?
12:38 Samsung now selling 3GB GDDR7 modules
16:27 Apple uses car model years now, and Tahoe is their last OS supporting Intel
22:01 EVGA motherboards have issues with RTX 50 GPUs?
27:48 Josh talks about a new PNY flash drive
30:01 (In)Security Corner
54:07 Gaming Quick Hits
1:00:46 Eazeye Monitor 2.0 - an RLCD monitor review
1:11:53 Picks of the Week
1:33:21 Outro
★ Support this podcast on Patreon ★