In today's episode, financial journalists Nando Sommerfeldt and Holger Zschäpitz talk about Anthropic's charm offensive, AMD's double-edged Meta deal, and the first PayPal bidder. Also discussed: Thomson Reuters, FactSet, Salesforce, DocuSign, Intuit, Workday, Nvidia, HP, Fresenius Medical Care, MTU Aero Engines, VW, BMW, Xtrackers CSI300 Swap ETF (WKN: DBX0M2), HSBC Hang Seng Tech UCITS ETF (WKN: A2QHV0), Deka MSCI China (WKN: ETFL32), iShares China Large Cap UCITS ETF (WKN: A0DK6Z), Invesco MSCI China Technology All Shares Stock Connect UCITS ETF (WKN: A3CMY8), UBS Solactive China Technology UCITS ETF (WKN: A2QJ9G), Kweichow Moutai, Invesco S&P 500 ETF (WKN: A1CYW7), UBS Core MSCI World (WKN: A2PK5J), Xtrackers Dax (WKN: DBX1DA), Amundi Core Stoxx Europe 600 (WKN: LYX0Q0), and SPDR MSCI All Country World (WKN: A1JJTC).

We welcome feedback at aaa@welt.de. Even more "Alles auf Aktien" is available on WELTplus and Apple Podcasts, including all of the hosts' articles and the AAA newsletter. Here at WELT: https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html.

Stock market podcast disclaimer: The stocks and funds discussed in the podcast are not specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on the thoughts or ideas discussed.

Listening tips: For everyone who wants to know even more: you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz".

+++ Advertising +++ Want to learn more about our advertising partners? Find all the info & discounts here! https://linktr.ee/alles_auf_aktien

Impressum: https://www.welt.de/services/article7893735/Impressum.html
Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html
This week on The Beat, CTSNet Editor-in-Chief Joel Dunning spoke with Dr. Puja Khaitan, thoracic consultant at Sheikh Shakhbout Medical City, Abu Dhabi, UAE, and Founder and Congress Chair of the Emirates International Thoracic Surgery Congress, about thoracic surgery in the United Arab Emirates (UAE).

Chapters
00:00 Intro
02:35 JANS 1, Therapy & Risk in Idiopathic Pulm Fibrosis
04:18 JANS 2, Sex-Related Treatment Effects
06:03 JANS 3, Endoscopic vs Open RAH in CABG
08:14 JANS 4, Textbook Outcome in MV Surgery
11:16 Career Center
12:08 Video 1, AMDS to FET Conversion
13:13 Video 2, Constrictive Pericarditis & Pericardiectomy
14:42 Video 3, Right Axillary Thoracotomy
16:17 Dr. Khaitan, Thoracic Surgery in the UAE
23:46 Upcoming Events
24:46 Instructional Video Competition

They discussed her professional background and training, as well as the differences in cases between the UAE and the United States. They also delved into research in the UAE, the state of thoracic hospitals, general surgical residency programs, and the future of fellowships in the country. Joel also highlights recent JANS articles on a large retrospective propensity-weighted cohort study on antifibrotic therapy and lung cancer risk in patients with idiopathic pulmonary fibrosis, sample size considerations to assess sex-related treatment effects, endoscopic or open radial artery harvest in coronary artery bypass surgery, and results from the Netherlands Heart Registration on mitral valve surgery. In addition, Joel explores a safe and reproducible redo aortic surgery approach on AMDS to frozen elephant trunk conversion, an approach to constrictive pericarditis and pericardiectomy from diagnosis to definitive surgical therapy, and right axillary thoracotomy. Before closing, Joel highlights upcoming events in CT surgery.

JANS Items Mentioned
1.) Antifibrotic Therapy and Lung Cancer Risk in Patients With Idiopathic Pulmonary Fibrosis: A Large Retrospective Propensity-Weighted Cohort Study
2.) Sample Size Considerations to Assess Sex-Related Treatment Effects
3.) Endoscopic or Open Radial Artery Harvest in Coronary Artery Bypass Surgery
4.) Textbook Outcome in Mitral Valve Surgery—Results from the Netherlands Heart Registration

CTSNet Content Mentioned
1.) AMDS to Frozen Elephant Trunk Conversion: A Safe and Reproducible Redo Aortic Surgery Approach
2.) From Diagnosis to Definitive Surgical Therapy: An Approach to Constrictive Pericarditis and Pericardiectomy
3.) Right Axillary Thoracotomy: A Minimally Invasive Gateway to Multiple Defects

Other Items Mentioned
1.) Emirates International Thoracic Surgery Congress
2.) Instructional Video Competition
3.) Career Center
4.) CTSNet Events Calendar

Disclaimer
The information and views presented on CTSNet.org represent the views of the authors and contributors of the material and not of CTSNet. Please review our full disclaimer page here.
Infrastructure was passé… uncool. Difficult to get dollars from Private Equity and Growth funds, and almost impossible to get a VC fund interested. Now?! Now, it's cool. Infrastructure seems to be having a Renaissance, a full-on Rebirth, fueled not just by commercial interests (e.g., the advent of AI), but also by industrial policy and geopolitical considerations. In this episode of Tech Deciphered, we explore what's cool in the infrastructure space, including mega trends in semiconductors, energy, networking & connectivity, and manufacturing.

Navigation:
Intro
We're back to building things
Why now: the 5 forces behind the renaissance
Semiconductors: compute is the new oil
Networking & connectivity: digital highways get rebuilt
Energy: rebuilding the power stack (not just renewables)
Manufacturing: the return of “atoms + bits”
Wrap: what it means for startups, incumbents, and investors
Conclusion

Our co-hosts:
Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt
Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro

Our show: Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning, and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news.

Subscribe To Our Podcast

Nuno Gonçalves Pedro

Introduction

Welcome to episode 73 of Tech Deciphered: Infrastructure, the Rebirth or Renaissance. Infrastructure was passé, it wasn’t cool, but all of a sudden now everyone’s talking about networks, talking about compute and semiconductors, talking about logistics, talking about energy. What gives? What’s happened?
It was impossible in the past to get any funds, venture capital, even, to be honest, some private equity funds or growth funds, interested in some of these areas, but now all of a sudden everyone thinks it’s cool. Infrastructure seems to be having a renaissance, a full-on rebirth. In this episode, we will explore the cool ways in which the infrastructure space is moving and what’s leading to it. We will deep dive into the forces that are leading us to this. We will deep dive into semiconductors, networking and connectivity, energy, manufacturing, and then we’ll wrap up. Bertrand, so infrastructure is cool now.

Bertrand Schmitt

We're back to building things

Yes. I thought software was going to eat the world. I cannot believe it was ten, maybe even 15 years ago, that quote from Andreessen about software eating the world. I guess it’s an eternal balance. Sometimes you get ahead of yourself, you build a lot of software stack, and at some point, you need the hardware to run this software stack, and there is only so much the bits can do in a world of atoms.

Nuno Gonçalves Pedro

Obviously, we’ve gone through some of this before. I think what we’re going through right now is AI is eating the world, and because AI is eating the world, it’s driving a lot of this infrastructure building that we need. We don’t have enough energy to be consumed by all these big data centers and hyperscalers. We need to be innovative around the network as well, because of the consumption in terms of network bandwidth that is linked to that consumption. In some ways, it’s not software eating the world; AI is eating the world. Because AI is eating the world, we need to rethink everything around infrastructure, and infrastructure is becoming cool again.

Bertrand Schmitt

There is something deeper in this. It’s that the past 10, even 15 years were all about SaaS before AI. SaaS, interestingly enough, was very energy-efficient. When I say SaaS, I mean cloud computing at large.
What I mean by energy-efficient is that cloud computing actually helped make energy use more efficient, because instead of companies having their own separate data centers in many locations, sometimes poorly run from an industrial perspective, they replaced their own privately run data centers with data centers run by the hyperscalers of the world. These data centers were run much better in terms of how you manage the cooling, the energy efficiency, the rack density, all of this stuff. Actually, the cloud revolution didn’t increase the use of electricity. The cloud revolution was a replacement of your private data center with the hyperscaler data center, which was energy-efficient. That’s why, even though we were always talking about the growth of cloud computing, we were never feeling the pinch in terms of electricity. As you say, it all changed because with AI, it was not a simple “replacement” of locally run infrastructure with hyperscaler-run infrastructure. It was truly adding, on top of an existing infrastructure, a new computing infrastructure, in a way out of nowhere. Not just any computing infrastructure: an infrastructure that was really, really voracious in terms of energy use.

Nuno Gonçalves Pedro

There was one other effect. Obviously, we’ve discussed before that we are in a bubble. We won’t go too much into that today. But in the previous big bubble in tech, in the late ’90s, there was a lot of infrastructure built. We thought the internet was going to take over back then. It didn’t take over immediately, but there was a lot of network connectivity and bandwidth built back in the day. Companies imploded because of that as well, or had to restructure and go into Chapter 11. A lot of the big telco companies had their own issues back then, etc., but a lot of infrastructure was built back then for this advent of the internet, which would then take a long time to come.
In some ways, to your point, there was a lot of latent supply that was built back then that for a while wasn’t used, but then it was. Now it’s been used up, and now we need new stuff. That’s why I feel we’re now having the new moment of infrastructure, the new moment of moving forward, aligned a little bit with what you just said around cloud computing and the advent of SaaS, but also with the fact that we had a lot of build-up back in the late ’90s and early 2000s, which we’re still reaping the benefits of in today’s world.

Bertrand Schmitt

Yeah, that’s actually a great point, because of what was built in the late ’90s: there was a lot of fibre that was laid, either across countries or inside countries. This fibre, interestingly enough, you could just change the computing on both sides of the fibre, the routing, the modems, and upgrade the capacity of the fibre. But the fibre in between stayed the same. The big CapEx investment was really laying down that fibre, but then you could upgrade easily. Even if both ends of the fibre were either using very old infrastructure from the ’90s or were actually dark and not being put to use, step by step it was being put to use, equipment was replaced, and step by step you could keep using more and more of this fibre. It was a very interesting development, as you say, because it could be expanded over the years. Whereas if we talk about GPUs used for AI, the interesting part is that it’s totally the opposite. After a few years, it’s useless. Some, like Google, will argue that they can depreciate even some GPUs over 5, 6 years. But at the end of the day, the difference in perf and energy efficiency of the GPUs means that if you are energy-constrained, you just want to replace the old one, even as young as three years old. You have to look at Nvidia’s increasing specs, generation after generation. It’s pretty insane. It’s usually at least 3X year over year in terms of performance.
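Bertrand's replacement argument is essentially arithmetic: under a fixed power budget, the compute you can deliver scales with per-generation efficiency gains, so old GPUs become too expensive to keep powered. A minimal sketch in Python; the flat 3x gain per generation at roughly constant power is an assumed, illustrative number taken from the conversation, not Nvidia's actual roadmap:

```python
# Toy model of the energy-constrained GPU replacement argument.
# Assumption (illustrative only): each GPU generation delivers
# `gen_gain` times the performance of the previous one at roughly
# the same power draw.

def compute_per_megawatt(base_perf: float, gen_gain: float, generations: int) -> float:
    """Compute obtainable from a fixed power budget after a
    given number of hardware refreshes."""
    return base_perf * gen_gain ** generations

current = compute_per_megawatt(1.0, 3.0, 3)   # latest generation
legacy = compute_per_megawatt(1.0, 3.0, 0)    # three-year-old GPUs

# A power-capped operator gets 27x the compute from the same megawatt
# by swapping out three-generation-old hardware.
print(current / legacy)  # 27.0
```

Under these assumptions, even if a three-year-old GPU still "works," every megawatt it occupies displaces an order of magnitude more compute on current silicon, which is why five-to-six-year depreciation schedules are contested.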
Nuno Gonçalves Pedro

At this moment in time, it’s very clear that it’s happening.

Why now: the 5 forces behind the renaissance

Maybe let’s deep dive into why it’s happening now. What are the key forces around this? We’ve identified, I think, five forces that are particularly vital and that lead to the world we’re in right now. One we’ve already talked about, which is AI: the demand shock and everything that’s happened because of AI. Data centers drive power demand, drive grid upgrades, drive innovative ways of getting energy, drive chips, drive networking, drive cooling, drive manufacturing, drive all the things that we’re going to talk about in just a bit. A second element that we could probably highlight in terms of the forces behind this is where we are in terms of cost curves around technology. Obviously, a lot of things are becoming much cheaper. The simulation of physical behaviours has become a lot cheaper, which in itself becomes almost a virtuous cycle, which then drives the adoption of more and more AI. But anyway, simulation is becoming more and more accessible, so you can do a lot of simulation with digital twins and other things of the real world before you go into the real world. Robotics itself is becoming cheaper. A lot of the hardware is becoming cheaper. Compute has become cheaper as well. There are a lot of cost curves that have aligned, and that’s maybe the second force that I would highlight. Then, funds are catching up. We’ll leave that a little bit to the end; we’ll do a wrap-up and talk a little bit about the implications for investors. But there’s a lot of capital out there, some capital related to industrial policy, other capital related to private initiative, private equity, growth funds, even venture capital, to be honest, and a few other elements of that. That would be a third force that I would highlight.

Bertrand Schmitt

Yes.
Interestingly enough, in terms of capital use, and we’ll talk more about this, for some firms, if we are talking about energy investment, it was very difficult to invest if you were not investing in green energy. Now I think more and more firms and banks are willing to invest in or support different types of energy infrastructure, not just “green energy.” That’s an interesting development, because at some point it became near impossible to invest more in gas development or oil development in the US or in most Western countries. At least in the US, this is dramatically changing the framework.

Nuno Gonçalves Pedro

Maybe to add the two last forces that I think we see behind the renaissance of what’s happening in infrastructure. They go hand in hand. One is the geopolitics of the world right now. Obviously, the world was global and flat, and now it’s becoming increasingly siloed, so people are playing to their own interests. There’s a lot of replication of infrastructure as well, because people want to be autonomous, and they want to drive their own ability to serve end consumers, businesses, etc., in terms of data centers and everything else. That has led to things like, for example, chip shortages. There are shortages in semiconductors across the board, like memory shortages, where everything is booked up until 2027 or 2028. A lot of the memory that is being produced is already spoken for, which is shocking. There’s obviously a generation of supply chain fragilities, some of it because of policies, for example, in the US with tariffs, etc., security of energy, etc.
Then the last force, directly linked to geopolitics, is the flip side of it, which is policy as an accelerant, so to speak, as something that is accelerating development. Because of those silos, individual countries, as part of their industrial policy, want to put capital behind their local ecosystems and their local companies, so that their local companies and their local systems are for sure the winners, or at the very least serve their own local markets. I think that’s true of a lot of the things we’re seeing, for example, in the US with the CHIPS Act for semiconductors, with the IIJA, the IRA, and other elements of what we’ve seen in terms of policies that have been implemented even in Europe, China, and other parts of the world.

Bertrand Schmitt

Talking about chip shortages, it’s pretty insane what has been happening with memory. Just in the past few weeks, I have seen a close to 3X increase in memory prices in a matter of weeks. Apparently, it started with a huge order from OpenAI. Apparently, they have tried to corner the memory market. Interestingly enough, it has caught the entire industry flat-footed, and that includes Google, that includes Microsoft. There are rumours of their teams now having moved to South Korea, so they are closer to the action in terms of memory factories and memory decision-making. There are rumours of execs who got fired because they didn’t prepare for this type of eventuality or didn’t lock in some of the supply chain. That memory was initially for AI, but obviously it impacts everything, because with factories making memory, you have to plan years in advance. You cannot open new lines of manufacturing just like that. For all the factories that are going to open, we know when they are going to open, because they’ve been built up for years. There is no extra capacity suddenly. At the very best, you can change a bit your line of production from one type of memory to another type.
But that’s probably about it.

Nuno Gonçalves Pedro

Just to be clear, all these transformations we’re seeing aren’t to say that just hardware is back, right? It’s not just hardware. There’s physicality. The buildings are coming back, right? It’s full stack. Software is here. That’s why everything is happening. Policy is here. Finance is here. It’s a little bit like the name of the movie, right? Everything Everywhere All at Once. Everything’s happening. It was in some ways driven by the upper stacks, by the app layers, by the platform layers. But now we need new infrastructure. We need more infrastructure. We need it very, very quickly. We need it today. We’re already lacking it.

Semiconductors: compute is the new oil

Maybe that’s a good segue into the first piece of the whole infrastructure thing, the one now driving the most valuable company in the world, NVIDIA, which is semiconductors. Semiconductors are driving compute. Semis are the foundation of infrastructure as compute. Everyone needs them for everything, for every activity, not just for compute, but even for sensors, for actuators, everything else. That’s the beginning of it all. Semiconductors are one of the key pieces of the infrastructure stack that’s being built at scale at this moment in time.

Bertrand Schmitt

Yes. What’s interesting is that if we look at the market cap of semis versus software-as-a-service and cloud companies, there has been a widening gap over the past year. I forget the exact numbers, but we were talking about plus 20, 25% for semis in terms of market cap, and minus 5, minus 10 for SaaS companies. That’s another trend that’s happening. Why is this happening? One, because semiconductors are core to the AI build-up; you cannot go around them. But two, it’s also raising a lot of questions about the durability of SaaS, the software-as-a-service business model.
Because if suddenly we have better AI, and that’s all everyone is talking about to justify the investment in AI, that it keeps getting better and it keeps improving, and it’s going to replace your engineers, your software engineers, then maybe all of this moat that software companies built up over years, or sometimes decades, might unravel under the pressure of newly coded, newly built, cheaper alternatives built from the ground up with AI support. It’s not just that semiconductors are doing great. It’s also that, as a result of that underlying AI trend, software is doing worse right now.

Nuno Gonçalves Pedro

At the end of the day, this foundational piece of infrastructure, semiconductors, obviously manifests in many things: fabrication, manufacturing, packaging, materials, equipment. Everything’s being driven, ASML, etc. All these different players around the world are having skyrocketing valuations now because they’re all part of the value chain. Just to be very, very clear, there are two elements of this that I think are very important for us to remember at this point in time. One, entire value chains are being shifted. It’s not just the chips that lead to computing in the strict sense. It’s chips, for example, that drive network switching. We’re going to talk about networking in a bit, but you need chips to drive better network switching. That’s getting revolutionised as well. For example, we have an investment in that space, a company called eridu.ai, and they’re revolutionising one of the pieces of that stack. The second part of the puzzle, besides the holistic view of the world that’s changing in terms of value chains, is, as we discussed before, industrial policy.
We already mentioned the CHIPS Act, which is something, for example, that has been done in the US: I think it's 52 billion in incentives across a variety of things, grants, loans, and other mechanisms, to incentivise players to scale capacity quickly and to scale capacity locally in the US. One of the effects of that is obviously the TSMC US expansion, with a factory here in the US. We have other levels of expansion going on with Intel, Samsung, and others as we speak. Again, it’s this two-by-two. On one side, market forces drive the need for fundamental shifts in the value chain. On the other, industrial policy and actual money are put forward by states, by governments, by entities that want to revolutionise their own local markets.

Bertrand Schmitt

Yes. When you talk about networking, it makes me think about what NVIDIA did more than six years ago when they acquired Mellanox. At the time, in 2019, it was the largest acquisition for NVIDIA, and it was networking for the data center. Not networking across data centers, but inside the data center, basically making sure that your GPUs, the different computers, can talk as fast as possible to each other. I think that’s one piece of the puzzle a lot of companies are missing, by the way, about NVIDIA: they are truly providing full systems. They are not just providing a GPU. Some of their competitors are just providing GPUs. But NVIDIA can provide you the full rack. Now they’ve moved to liquid-cooled computing as well. They design their systems with liquid cooling in mind. They have a very different approach in the industry. It’s a system-level approach to how you optimize your data center. Quite frankly, that’s a bit hard to beat.

Nuno Gonçalves Pedro

For those listening, you’d be like, this is all very different. Semiconductors, networking, energy, manufacturing, this is all different.
Then all of a sudden, as Bertrand is saying, there are some players that are acting across the stack. Then you see, in the same sentence, people talking about nuclear power and Microsoft, or nuclear power and Google, and you’re like, what happened? Why are these in the same sentence? They’re tech companies. Why are they talking about energy? It’s the nature of it. These ecosystems need to go hand in hand. The value chains are very deep. For you to actually reap the benefits of, for example, more and more semiconductor availability, you have to have better and better networking connectivity, and you have to have more and more energy at lower and lower costs, and all of that. All these things are intrinsically linked. That’s why you see all these big tech companies working across the stack, NVIDIA being a great example of trying to create a true systems approach to the world, as Bertrand was mentioning.

Networking & connectivity: digital highways get rebuilt

On the networking and connectivity side, as we said, we had a lot of fibre that was put down, etc., but there’s still more build-out that needs to be done. 5G densification is still happening. We’re now starting to talk about 6G. I’m not sure most telcos are very happy about that, because they have just been doing all this CapEx and all this deployment for 5G, and now people have already started talking about 6G and what’s next. Obviously, data center interconnect is quite important, and all the hubbing that needs to happen around data centers is very, very important. We are seeing a lot of movement around connectivity that is particularly important. There’s network gear and the emergence of players like Broadcom on the semiconductor side of the fence, and obviously Cisco, Juniper, Arista, and others that are very much present in this space.
As I said, we made an investment on the semiconductor side of networking as well, realizing that there are still a lot of bottlenecks happening there. But obviously, the networking and connectivity stack still needs to be built at all levels: within the data centers, outside of the data centers in terms of last mile, and across the board in terms of fibre. We’re seeing a lot of movement still around the space. It’s what connects everything. At the end of the day, if there’s too much latency in these systems, if the bandwidth is not high enough, then we’re going to have huge bottlenecks put on the table by the networking providers. Obviously, that doesn’t help anyone. If there’s a bottleneck anywhere, it doesn’t work. All of this doesn’t work.

Bertrand Schmitt

Yes. Interestingly enough, I know we said that for this episode we would not talk too much about space, but when you talk about 6G, it makes me think about, of course, Starlink. That’s really your last-mile delivery that’s being built as well. It’s a massive investment. We’re talking about thousands of satellites that are interconnected with each other through laser systems. This is changing dramatically how companies can operate, how individuals can operate. For companies, you can have great connectivity from anywhere in the world. For the military, it’s the same. For individuals, suddenly, you won’t have dead spots, white zones. This is also part of changing how we could do things. It’s quite important even in the development of AI, because, yes, you can have AI at the edge, but the interconnect to the rest of the system is quite critical. Having the availability of a high-quality network link from anywhere is a great combo.
Nuno Gonçalves Pedro

Then you start seeing regions of the world that want to differentiate themselves to attract digital nomads by saying, “We have submarine cables that come and hub through us, and therefore our connectivity is amazing.” I was just in Madeira, one of the islands of Portugal, and they were talking about that: we have submarine cables, you have great connectivity. We’re getting into that discussion where people used to be like, I don’t care, I assume I have decent connectivity. Now people actually care about decent connectivity. This discussion is not just happening at the corporate level, at the enterprise level, etc. Even consumers, even people who want to work remotely or be based somewhere else in the world, are like, this is important: where is there great connectivity for me, so that I can have access to the services I need? Everyone becomes aware of everything. We had a Cloudflare mishap more recently where the CEO had to jump online and explain, deeply and technically, what happened. Because we’re in their hands. If Cloudflare goes down, there are a lot of websites that don’t work. All of this, I think, is now becoming du jour rather than just an afterthought, something we’d maybe think about in the future.

Bertrand Schmitt

Totally. I think life is being changed by network connectivity, the life of individuals, of companies. I mean, everything. Look at airlines and ships and cruise ships. Now is the advent of satellite connectivity. It’s dramatically changing our experience.

Nuno Gonçalves Pedro

Indeed.

Energy: rebuilding the power stack (not just renewables)

Moving maybe to energy. We’ve talked about energy quite a bit in the past. Maybe we start with the one we didn’t talk about as much, although we did mention it: let’s call it the fossil infrastructure and what’s happening around there. Everyone was saying it’s all going to be renewables and green. We’ve had a shift of power, geopolitics.
Honestly, I think the writing was on the wall that we needed a lot more energy creation. It wasn’t either/or. We needed other sources to be as efficient as possible. Obviously, we see a lot of work happening there that many would have thought didn’t matter anymore: now we’re seeing LNG terminals, pipelines, and petrochemical capacity being pushed up, a lot of stuff happening around markets in terms of export, and not only around export, but also around overall distribution, and increases and improvements so that there’s less leakage in the distribution of energy, etc. In some ways, people say it’s controversial, but we don’t have energy to spare. We’re already behind, so we need as much as we can get. We need to figure out how to extract as much as we can, even from natural resources, which in many people’s minds is almost blasphemous to talk about, but it is where we are. Obviously, there’s a lot of renaissance also happening on the fossil infrastructure side, so to speak.

Bertrand Schmitt

Personally, I’m ecstatic that there is a renaissance going on regarding what is called fossil infrastructure. Oil and gas are critical to humanity’s well-being. You have never had growth of countries without energy growth, and nothing else can come close. Nuclear could come close, but it takes decades to deploy. I think it’s great. It’s great for developed economies, so that they do better and can expand faster. It’s great for third-world countries, who have no realistic other choice. I really don’t know what happened in the past 10, 15 years and why this suddenly became blasphemous. But I’m glad that, strangely, thanks to AI, we are back to a more rational mindset about energy and making sure we get efficient energy where we can. Obviously, nuclear is getting a second act.

Nuno Gonçalves Pedro

I knew you would be. We’ve been talking about it for a long time, and you’ve been talking about it in particular for a very long time.
Bertrand Schmitt

Yes, definitely. It’s been one area of interest of mine for 25 years. I’ve been shocked by what happened in Europe, that willful destruction of energy infrastructure, especially in Germany. Just a few months ago, they were still destroying, on live TV, nuclear stations in perfect working condition and replacing them with coal. I’m not sure there is a better definition of insanity at this stage. It looks like it’s only the Germans going that hardcore for some reason, but at least the French have stopped their decommissioning program. America seems to be doing the same, so that’s great. On top of it, there are new generations that could be put to use. The Chinese are building up a very large nuclear reactor program, more than 100 reactors in construction over the next 10 years. I think everybody has to catch up, because at some point this is the most efficient energy solution. Especially if you don’t build crazy constraints around the construction of these nuclear reactors. If we are rational about permits, about energy, about safety, there are great things we could be doing with nuclear. That might be one of the only solutions if we want to be competitive, because when energy prices go down like crazy, as they will in China once they have reached delivery of their significant build-up of nuclear reactors, we had better be ready to have similar options from a cost perspective.

Nuno Gonçalves Pedro

From the outside, at the very least, nuclear seems to be one of the areas in energy that is being innovated on the most at this moment in time. You have startups in the space, you have a lot of real money going into it, not just your classic industrial development. That’s very exciting. Moving maybe to decarbonization and what’s happening there. CCUS, for those who don’t know what it is: carbon capture, utilization, and storage. There’s a lot of stuff happening in that space.
That's the area that deals with the ability to capture CO₂ emissions from industrial sources and/or the atmosphere and prevent their release. There's a lot happening in that space. There's also a lot happening around hydrogen and geothermal, and around creating the ability to store energy that can then be put back into the grid at the right time. There are a lot of interesting pieces here, and some startup movement in the space. It's been a long time coming, the reuse of a lot of these industrial sources. It's not as much in the news as nuclear or oil and gas, but certainly there are a lot of exciting things happening there.

Bertrand Schmitt
I'm a bit more dubious here, but I think geothermal makes sense if it's available at a reasonable price. I don't think hydrogen technology has proven its value. Concerning carbon capture, I'm not sure how much it's really going to provide in terms of energy needs, but why not?

Nuno Gonçalves Pedro
It feels niche. Again, from the outside, we're not energy experts, but certainly there are movements in the space. We'll see what happens. One area where there's definitely a lot of movement is the grid and storage. On the one hand, transmission needs to be built out and needs to be better. We've had blackouts in the US. We've had blackouts almost all around the world, Portugal as well, for a significant amount of time. The ability to work on transmission lines, transformers, and substations, and the modernization of some of this infrastructure, is pretty critical. At the other end, there's the edge, and on the edge you have the ability to store. We should have better mechanisms to store energy that are less leaky. Obviously, there's a lot of movement around that, some of it driven purely commercially, like Tesla with their storage products, etc.
Some of it is really driven at scale by energy players with an interest in, for example, some of the storage happening closer to consumption as well. There are a lot of exciting things happening in that space, and it is a transformative space. In some ways, the bottleneck of energy is also transmission, and then ultimately access to energy by homes, businesses, industries, and so on.

Bertrand Schmitt
I would say some of the blackouts are truly man-made. Take California, for instance. That's the logical conclusion of the regulatory system in place there. On one side, you cap the price that the energy supplier, the utility company, can sell at. On the other side, you force them to decommission their most efficient and least expensive energy sources. That means you cap revenues while making costs increase. What is the result? The result is you cannot invest anymore to support the grid and transmission. That's 100% obvious. That's what happened, at least in many places. The solution is to stop crazy regulations that make no economic sense whatsoever. Then, strangely enough, you can invest again in transmission, in maintenance, and all of this. Maybe another piece, staying with California: if you authorize building construction in areas where fires are likely, that's also very costly to support from a utility perspective, because you are creating more risk. You are forced by the state to connect these new constructions to the grid. You have more maintenance. If the grid fails, it can start a fire, and if it starts a fire, you have to pay billions in fees. I just want to highlight that some of this is not a technological issue, nor per se an investment issue, but simply the result of very bad regulations. I hope that some will learn, and some changes will be made, so that utilities can do their job better.
Nuno Gonçalves Pedro
Last but not least on the energy side: energy is becoming more and more digitally defined, in some ways analogous to networks becoming more and more software-defined. At the edge you have things like smart meters, and there's a lot you can do around the key elements of the business model, like dynamic pricing. Demand response is one of the areas I invested in; I invested in a company called Omconnect that has now merged with what used to be Google Nest, deploying the ability to do demand response and pass it on to consumers, so that consumers can reduce their consumption at the times when it is least price-effective, least green, or least good for the energy companies to produce energy. Other interesting things are happening too. Obviously, we have a lot more electric vehicles, and these are also elements of storage. They don't look like storage, but once you charge the car, it holds electricity. Once it's charged, what do you do with it? Could you do something else with it, like the reverse-charging piece we also see today in mobile devices and other edge devices, so to speak? That also changes the architecture of what we're seeing around the space. With AI, a lot of elements of the value chain change: the ability to do forecasting, the ability to have, for example, virtual power plants built on distributed storage, and so on. Interesting times. I'm not sure all utilities and energy providers around the world are innovating at the same pace and in the same way. But just looking at the industry and talking to a lot of players, CEOs of some of these companies, people leading innovation at some of these companies, there's definitely a lot more happening in the last few years than maybe over the last few decades. Very exciting times.
Bertrand Schmitt
I think there are two interesting points in what you say. Talking about EVs: a Cybertruck, for instance, is able to send electricity back to your home, if your home is able to receive electricity from that source; usually you have to make some changes to the meter system and to your panel. That's one great way to potentially use your car battery. Another piece of the puzzle is that, strangely enough, there has been a big push to EVs, but at the same time there has not been a push to provide more electricity. If you replace cars that use gasoline with electric vehicles that use electricity, you need to deliver more electricity. It doesn't require a PhD to get that. But, strangely enough, nothing was done.

Nuno Gonçalves Pedro
Apparently, it does.

Bertrand Schmitt
I remember a study in France saying that if people were all to switch to EVs, we would need 10 more nuclear reactors just along the route from Paris to Nice and the Côte d'Azur, the French Riviera, in order to provide electricity to the cars going there during the summer vacation. But guess what? No nuclear plant is being built along the way. Good luck charging your vehicles. I think that's another limit on the grid: more electric vehicles that require charging while the related infrastructure has not been upgraded to support them. Actually, quite the opposite: in many cases we had nuclear reactors closing down, and other facilities closing down. Obviously, the end result is an increase in the price of electricity, at least in the states and countries that have not sorted that out fully.

Manufacturing: the return of "atoms + bits"

Nuno Gonçalves Pedro
Moving to manufacturing and what's happening around manufacturing technology: there's maybe a case to be made that manufacturing is getting replatformed, right? It's getting redefined.
Some of it is very obvious and has been ongoing for a couple of decades: the advent of more and more robot-augmented factories, or fully roboticized factories with very little human presence. There are elements of that. There's the element of software definition on top of it, like simulation. A lot of automation is going on, and a lot of AI has been applied to some lines for vision and safety. We have an investment in a company called Sauter Analytics that is very focused on that from the perspective of employees, where there are still humans in the loop, so to speak, and on the ability to figure out when people are at risk, and other elements of what's occurring there. But there's more than that; there's a bit of a renaissance in and of itself. If we go back a couple of decades, factories and manufacturing were very much defined by the initial setup: it was difficult to innovate, difficult to shift the line, difficult to change how things were done on the line. With the advent of new factories that have less legacy and more flexible systems, not only in software but also in hardware and robotics, we can, for example, shift lines much more easily to different functions, which will hopefully, over time, not only dramatically reduce the cost of production but also dramatically increase the yield and the production itself. A lot of cool stuff is happening in that space.

Bertrand Schmitt
It's exciting to see that. One thing the current administration in the US has been betting on is not just hoping for a construction renaissance, especially on the factory side, a build-up of factories; their mindset was several things. One, push more companies to build locally, because it would be cheaper.
Two, increase the output and supply of energy so that running factories here in the US would be cheaper than anywhere else; maybe not cheaper than China, but certainly cheaper than Europe. And three, there's the belief that thanks to AI we will be able to have more efficient factories. There is always the question of whether Americans still want to keep making clothes, for instance, in factories. That used to be the case maybe 50 years ago, but this moved to China, to Bangladesh, to different places, and that's not the goal. But it can make sense that, thanks to robots and AI, we can have more automated factories, and these factories could be run more efficiently and as a result be price-competitive even if run in the US. When you think about it, that has been the South Korean playbook: more automated factories, robotics, all of this, because that was the only way to compete against China, which has, or used to have, a near-infinite supply of cheaper labour. I think all of this combined can make a lot of sense. In a way, it's probably creating a perfect storm. Maybe another piece of the puzzle this administration has been working on pretty hard is simplifying the permitting process, because a big chunk of the problem is that if your permitting is very complex and very expensive, what takes two years to build becomes four years, five years, 10 years. The investment math is not the same in that situation. I think that's a very important part of the puzzle: using this opportunity to reduce the regulatory state and make sure that things are more efficient. It also leaves things less at risk of bribery and fraud, because with all these regulations there might be ways around them. I think it's quite critical to be careful about this. Maybe the last piece of the puzzle is the way accounting works.
There are new rules in the US for 2026 under which you can fully depreciate your CapEx much faster than before. That's a big win for manufacturing in the US: suddenly you can depreciate some of your CapEx investment in manufacturing much faster.

Nuno Gonçalves Pedro
Just going back to a point you made and then moving it forward: even China, now probably the country in the world with the highest rate of innovation and uptake of industrial robots, is doing this because of demographic issues, a little bit like what led Japan in the first place to be one of the really big innovators around robots in general. Demographics: an aging population, fewer and fewer children. How are you going to replace all these people? Moving on to the big winners: who becomes a big winner in a space where manufacturing is fundamentally changing? Obviously, there's the big four of robotics, which is ABB, FANUC, KUKA, and Yaskawa. Epson, I think, is now in there too, although it's not considered one of the big four, plus Kawasaki, Denso, Universal Robots. There are really big industrial robotics companies in the space from different origins: FANUC, Yaskawa, and Epson from Japan, KUKA from Germany, ABB from Switzerland and Sweden. There are a lot of emerging companies from China now, and what's happening in that space is quite interesting. On the other hand, other winners will include the integrators that build the rest of the infrastructure that goes into manufacturing, the Siemens of the world, the Schneiders, the Rockwells, which lead fundamental industrial automation. Some big winners there whose names are well known, so probably not a huge amount of surprises. But there are movements. As I said, we're still going to see the big Chinese players emerging in the world, and there are startups innovating around a lot of the edges that are significant in this space.
We'll see whether this space just continues to be dominated by the big four in robotics, a couple of others, and the big integrators, or not.

Bertrand Schmitt
I think you are right to bring up China, because China has been moving very fast in robotics. Some Chinese companies are world-class in their use of robotics. You have this strange mix of older industries, typically state-owned, where robotics might not be put to use as much, versus private companies, typically tech companies, in some cases reconverting into hardware, that have gone all in on robotics; their demonstrations are an example of what's happening in China. Definitely, the Chinese are not resting. Everyone smart enough is playing that game: the Americans, the Chinese, the Japanese, the South Koreans.

Wrap: what it means for startups, incumbents, and investors

Nuno Gonçalves Pedro
Exciting things in manufacturing. Maybe to bring it all together: what does it mean for all the big players out there? If we talk about startups, we didn't mention a ton of startups today, right? Maybe incumbents win across the board. But on a more serious note, we did mention a few. In nuclear energy, for example, there are a lot of startups, some of them incredibly well funded at this moment in time. There might be some big disruptions that come out of startups in that space. On the chipset side, we talked about the big gorillas, the NVIDIAs, AMDs, Intels of the world. But we didn't quite talk about the fact that there's a lot of innovation happening on the edges, with new players going after very large niches, be it in networking and switching, be it in compute and other areas that will need different, more specialized solutions, potentially in terms of compute or in terms of semiconductor deployments.
I think there are still opportunities there, maybe not winner-takes-all, but certainly around a lot of very significant niches that might grow very fast. In manufacturing, we mentioned the same: some of the incumbents seem to be in the driving seat. We'll see whether some startups come in and take some of the momentum there; probably less likely. There are spaces where the value chains are very tightly built around the OEMs and then the suppliers, classically the tier-one suppliers across value chains. Maybe there is some startup investment play; we have certainly played in a couple of these spaces, and I mentioned some of them today. But this is maybe where the incumbents have it all to lose. It's more for them to lose than for the startups to win, just because of the scale of what needs to be done and deployed.

Bertrand Schmitt
That's an interesting point. I think some players in energy production, for instance, are moving very fast and behaving a lot like startups. Usually, it's the independent energy suppliers, who are not held back by too much regulation, that move faster. Utility companies, as we just discussed, have more constraints. I would also say that if you take the semiconductor space, there has been quite a lot of startup activity, way more than usual, and there have been some incredible successes. Just a few weeks ago, Rock got more or less acquired. Now, you have to play games; it's not an outright acquisition, but $20 billion for an IP licensing agreement is close to an acquisition. That's an incredible success for a company started maybe 10 years ago. You have Cerebras, one of the competitors, valued, I believe, in a similar range. So there is definitely activity. It's definitely a different game compared to your software startup in terms of investment, but as we have seen with AI in general, the need for investment is larger these days.
Yes, it might be the traditional players, if they can move fast enough; to be frank, when some of them have decades of being run as slow-moving companies, it's hard to change things. At the same time, it looks like VCs are getting bigger and Wall Street is getting more ready to finance some of these companies. I think there will be opportunities for startups, but definitely for different types of startups in terms of profile.

Nuno Gonçalves Pedro
Exactly. From an investor standpoint, on the VC side at least, our core belief is that it's more niche. It's more around big niches that need to be fundamentally disrupted, or solutions that require fundamental interoperability and integration where the incumbents have no motivation to do it. Things that are a bit more about packaging on the semiconductor side, or other elements of actual interoperability, even at the software layer that feeds into infrastructure. If you're a growth investor or a private equity investor, there are other plays available to you. A lot of these projects need to be funded and scaled. Now we're seeing projects being funded even for very large tech companies; we mentioned it in one of the previous episodes. When Meta, for example, goes to the market to get funding for data centers, there are projects to be funded there, because of the sheer quantum and scale of some of these projects, either because of the financial interest of the tech companies specifically or for other reasons, but they need to be funded by the market. So there are other plays right now, certainly if you're a larger private equity or growth investor and you want to come into the market and do projects. Even public-private financing is now available for a lot of things. Definitely, a lot of things are emerging that require a lot of funding, even for large-scale projects.
That means the realization of some of these projects is hopefully more of a given than in other circumstances, because there's actual commercial capital and private capital behind them to fuel them, not just industrial policy and money from governments.

Bertrand Schmitt
There was this quite incredible stat. I guess everyone heard about the incredible GDP growth in Q3 in the US, at 4.4%. Apparently, half of that growth, around 2.2 percentage points, has been coming from AI and related infrastructure investment. That's pretty massive: half of your GDP growth coming from something that was not there three years ago, or was there, but not at this intensity of investment. Those are the numbers we are talking about. I'm hearing there is a good chance that in 2026 we'll be talking about 5 or even potentially 6 percent GDP growth, with, again, half of it potentially coming from AI and all the related infrastructure growth that comes with it. As a conclusion for this episode on infrastructure: as we just said, it's not just AI, it's the whole stack, and it's manufacturing in general as well. Definitely in the US and in China, there is a lot going on. As we have seen, computing needs connectivity and networks, which need power, energy, and the grid, and all of this needs production capacity and manufacturing. Manufacturing can benefit from AI as well, so the loop fully closes on itself. Infrastructure is the next big thing. It's an opportunity, probably more for incumbents, but certainly, as usual with such big growth, an opportunity for startups as well. Thank you, Nuno.

Nuno Gonçalves Pedro
Thank you, Bertrand.
GTA at 100 Bucks? It Could Happen, Here's How. The best gaming podcast, #548. GTA 6 may be 100 bucks, but how, why, and what would that do to the market? Let's discuss that, plus AMD's new tech for consoles. We break down the biggest industry stories of the week, price changes, restructures, cancellations, and layoffs, plus what it all means for players, devs, and the games in your backlog. No fluff, just facts and context you can use. Join this channel to get access to perks: https://www.youtube.com/channel/UC5zKbGokI0oI6SeZrHTfJjA/join Each Friday, ACG and some pals, Silver, Rej, Abssi, and Jonny from https://www.twitch.tv/jonnyplayslive, get together to discuss games, life, books, movies, and everything else. New home of the ACG Best Gaming Podcast. Follow me on Twitter for reviews and info @jeremypenter. JOIN the ACG Reddit: https://www.reddit.com/r/ACGVids/ https://www.patreon.com/AngryCentaurGaming
The Chinese online group JD.com is close to taking over Media Markt and Saturn, or rather their parent company Ceconomy, and with it Europe's largest consumer-electronics retail chain, with 1,030 stores. For the next three years nothing is supposed to change: there are to be no layoffs for operational reasons, the structures are to be preserved, and the Kellerhals family is to retain a blocking minority. How things proceed after that remains to be seen. From AMD there is now something really big: the Threadripper 9000 CPUs have launched and been tested! These are HEDT (High-End Desktop) parts with up to 64 cores and 128 threads, more than the average home user needs or can even use sensibly. But still really cool :D And no, even Battlefield 6 won't be able to saturate such a CPU, though it will surely be quite demanding. The developers at DICE, Motive, and Criterion would certainly be happy with such monster machines, but so that end users can still work on mods for "Portal," a dedicated editor is being provided. And here it gets interesting: it's the open-source engine Godot! Performance could be one factor, alongside the advantage that they don't have to develop their own editor or release parts of the proprietary Battlefield engine Frostbite. In the tiresome category "great stuff with AI": Atlassian (Confluence, Jira) wants to lay off 150 employees and rely on "AI" for support. Yay. Enjoy episode 267!
Speakers: Meep, Michael Kister, Mohammed Ali Dad. Audio production: Michael Kister. Video production: Michael Kister. Cover image: Meep. Image sources: AMD/Tom's Hardware. Recording date: 01.08.2025. Visit us on Discord https://discord.gg/SneNarVCBM, on Bluesky https://bsky.app/profile/technikquatsch.de, on TikTok https://www.tiktok.com/@technikquatsch, on YouTube https://www.youtube.com/@technikquatsch, on Instagram https://www.instagram.com/technikquatsch, and on Twitch https://www.twitch.tv/technikquatsch. RSS feed: https://technikquatsch.de/feed/podcast/ Spotify: https://open.spotify.com/show/62ZVb7ZvmdtXqqNmnZLF5u Apple Podcasts: https://podcasts.apple.com/de/podcast/technikquatsch/id1510030975 00:00:00 Welcome to Technikquatsch episode 267! 00:07:26 Ceconomy (Media Markt, Saturn) is being taken over by the Chinese internet group JD.com https://www.heise.de/news/Media-Markt-und-Saturn-wird-mehrheitlich-nach-China-verkauft-10505016.html 00:30:20 AMD Ryzen Threadripper 9000 reviews: masses of cores, threads, and PCIe lanes https://www.computerbase.de/artikel/prozessoren/amd-ryzen-threadripper-9000-test.93164/ Gamers Nexus: AMD Threadripper 9980X 64-Core CPU Review & Benchmarks https://www.youtube.com/watch?v=IItu46EWaic 00:46:20 Battlefield 6 releases on October 10 and uses the open-source engine Godot as the editor for user-created mods etc. https://www.computerbase.de/news/gaming/battlefield-6-release-am-10-oktober-open-beta-ab-7-august.93738/ https://www.linkedin.com/posts/robin-yann-storm_godot-battlefield6-ugcPost-7356808739493924864-oFbs Battlefield: Battlefield 6 Maps, Modes & Portal https://www.youtube.com/watch?v=92CHDiFW0wA 00:56:43 Atlassian (Confluence, Jira) wants to lay off 150 employees and rely on AI for support https://www.golem.de/news/mit-video-atlassian-entlaesst-150-mitarbeiter-setzt-auf-ki-support-2508-198724.html 01:08:20 Mike is still playing Lies of P and, after Tomb Raider (2013), has started Rise of the Tomb Raider again. Mo is playing Pro Wrestler Story and Dungeon Village 2 by Kairosoft; Meep wants to play Green Hell. 01:15:39 Jimmy Carr on Hulk Hogan; Undertaker, John Cena, and Arnold Schwarzenegger 01:22:18 Subscribe, rate, recommend! Thank you very much!
Germany is sweating, and so is Fabian, while Jan watches the spectacle from his cool basement. Nevertheless, it is Fabian who gets to explain to Jan what AMD, together with Coburg University of Applied Sciences, has presented on the procedural rendering of trees, and why it saves a lot of VRAM, though perhaps not a straight 35 GB. Then it's on to the rumors about an RTX 5000 Super refresh, which, while early, already seems very credible apart from any pricing details. Both can hardly believe that Intel 18A is in trouble again and that external customers are apparently steering clear of it. At least Intel 14A will fix it!... The podcast ends, on this memorably hot day, with Heise's takeover of Mindfactory and the power-bank Sunday question. Enjoy listening!
In 2014, when Lisa Su took over as CEO of Advanced Micro Devices, AMD was on the verge of bankruptcy. Su bet hard on hardware and not only pulled the semiconductor company back from the brink but also led it to surpass its historical rival, Intel, in market cap. Since the launch of ChatGPT made high-powered chips like AMD's "sexy" again, demand for chips has intensified exponentially, but so has the public spotlight on the industry, including from the federal government. In a live conversation at the Johns Hopkins University Bloomberg Center, as part of its inaugural Discovery Series, Kara talks to Su about her strategy in the face of the Trump administration's tariff and export control threats, how to safeguard the US in the global AI race, and what she says when male tech leaders brag about the size of their GPUs. Listen to more from On with Kara Swisher here. Learn more about your ad choices. Visit podcastchoices.com/adchoices
AMD's strong Ryzen 9000 CPUs run on AM5 mainboards, which are available in a wide range. We explain the differences in episode 2025/10 of the Bit-Rauschen podcast.
Here it is, the showdown: AMD Radeon RX 9070 (XT) with RDNA 4 and FSR 4 versus Nvidia GeForce RTX 5070 (Ti) with DLSS 4. Fabian and Jan analyze how good AMD's new GPU architecture and new AI upscaling have really turned out, and how AMD's two 70-class cards fare against Nvidia's two 70-class cards in the benchmark course and beyond.
This week, we're talking about live-service games, AMD's new GPUs, Monster Hunter Wilds, and SO MUCH MORE!!!!
On this episode of Hands-On Tech, Mikah Sargent helps Jerry, who is experiencing issues getting iTunes to recognize their iPhone 16 on their Windows PC. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Want to support us and get access to our exclusive episodes in Goty & Blandat? Become a Patreon! Patreon.com/gotypodden Join us on Discord! (05:34) Analysis & Discussion: Xbox Developer Direct, EA fumbles, AMD's new graphics cards, and more. (40:57) Games: Yakuza 0, Ender Magnolia: Bloom in the Mist, Eternal Strands, Ninja Gaiden 2 Black, Resident Evil 2, Death Stranding, Persona 5 Royal, Heroes of Hammerwatch 2, Sniper Elite: Resistance (01:43:26) The Radar: What's coming next week in the gaming world? Feedback, tips, or questions can be sent to gotypodden@gmail.com, via Discord, or on Instagram / Twitter @gotypodden. Thanks to Emma Idberg for our lovely artwork! GOTY merch in our merch shop! If you'd like to hear or see more of us, our other podcasts and our YouTube channel are in our link tree!
RobChrisRob linked up from their various pretend locations in low earth orbit to talk about matters of great importance, including the accidental crumpling of a Bezos rocket; the Beastie Boys playing their gold record for Paul's Boutique and discovering that it was in fact... NOT Paul's Boutique; a lady who mailed an AirTag to herself to catch a mail thief; the trailer for the remastered Goat Simulator being fire; NASA finally deciding to send the Starliner home uncrewed, stranding the astronauts for another 6 months; RFK driving 5 hours to cut off a whale's head with a chainsaw (wait, what?); the disability advocate who never existed; 3D printing a Benchy in under 2 minutes; the HDMI Forum blocking AMD's 120 Hz 4K HDMI Linux driver because we can't have nice things; an explanation for the famous 'Wow!' signal having been found; and Xitter warning users that NPR might be unsafe. Join our discord to talk along or the Subreddit where you will find all the links https://discord.gg/YZMTgpyhB https://www.reddit.com/r/TacoZone/
AMD's new Zen 5 CPU, the Ryzen AI 300, counters the new Arm chips for Windows notebooks. Whether that succeeds is the topic of episode 2024/17 of the Bit-Rauschen podcast.
On this week's Windows Central Podcast, Zac is joined by Windows Central tech editor Ben Wilson to discuss the week's biggest news. We talk Intel's 13th/14th Gen issues, Intel's layoffs, and Intel being sued. We also discuss AMD's new Ryzen chips and how they compare to Intel and Qualcomm, plus the rumor that Microsoft is working on a new VR headset!
The first reviews of two of AMD's new Ryzen 9000 processors are online. PC enthusiasts shouldn't expect a gigantic leap in performance, but AMD excels in another area instead.
AMD's new desktop processors were originally supposed to go on sale at the end of July. Now, however, the company has decided to postpone the launch at short notice. The reason, though, is understandable.
The last week of February was eventful, and Fabian and Jan have plenty to discuss in CB-Funk episode 58: the review of Intel's "new" Core i5-14500 and i5-14400F, hotly debated problems with Intel's K-series CPUs, AMD's second market launch of the RX 7900 GRE, GPU price-performance charts with new FPS performance ratings... and then even the Sunday question Fabian dreads, this time on "Star Citizen". Enjoy listening!
► Check out today's hottest tech deals here: https://www.ufd.deals/ https://howl.me/clrICKZP8Qx https://geni.us/LSALpB https://howl.me/clrIGBMQSM 0:00 - Intro 00:18 - S24 Launched: https://howl.me/clruqPh8yxs https://tinyurl.com/ylc362lg https://tinyurl.com/ypydaf6b https://tinyurl.com/yt35l9ew https://tinyurl.com/ywxmg254 https://tinyurl.com/yse269mw https://tinyurl.com/ylnl8sqz 04:33 - Sponsor 06:59 - New GPU Vulnerability: https://tinyurl.com/yug5orpz https://tinyurl.com/ykuenzy7 https://tinyurl.com/yr7lrjxc 08:21 - UFD Deals: https://www.ufd.deals/ https://howl.me/clrICKZP8Qx https://geni.us/LSALpB https://howl.me/clrIGBMQSMq 09:24 - AMD's Response to SUPER Cards: https://tinyurl.com/ylcfyym9 https://tinyurl.com/yt3hmnul https://tinyurl.com/yvtprl57 11:34 - Comment Response ► Follow me on Twitch - http://www.twitch.tv/ufdisciple ► Join Our Discord: https://discord.gg/GduJmEM ► Support Us on Floatplane: https://www.floatplane.com/channel/ufdtech ► Support Us on Patreon: https://www.patreon.com/UFDTech ► Twitter - http://www.twitter.com/UFDTech ► Facebook - http://www.facebook.com/ufdtech ► Instagram - http://www.instagram.com/ufd_tech ► Reddit - https://www.reddit.com/r/UFDTech/ Presenter: Brett Sticklemonster Videographer: Brett Sticklemonster Editor: Rikus Strauss Thumbnail Designer: Reece Hill
Google impresses with a Gemini AI video (and cheated a little in the process). We talk about Celonis's business model and valuation. Is regulation in Europe a disadvantage for the tech sector there? Google's Search Partner Network has brand-safety problems. There are earnings from Asana, Rent The Runway, Braze, C3AI, Sentinel One, DocuSign, and MongoDB. Current advertising partners of the Doppelgänger Tech Talk podcast and our sheet. Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Milano Vice Series A (00:02:50) Gemini (00:16:25) EU Tech Regulation (00:21:50) Celonis (00:33:00) Search Partner Network (01:10:20) Rent The Runway Earnings (01:12:30) Braze Earnings (01:15:00) Asana Earnings (01:16:00) C3AI Earnings (01:17:45) Sentinel One Earnings (01:19:10) DocuSign Earnings (01:20:20) MongoDB Earnings (01:21:50) AMD's new chip Shownotes: Search Partner Network: LinkedIn, Adweek Gemini: Google
Wolfgang has taken a look at FSR 3 with FMF in Forspoken and Immortals of Aveum, and the verdict is clear: AMD's artificial in-between frames, "Fluid Motion Frames", apparently have potential, but given the many construction sites (problems, bugs?) it is currently hardly possible to find it. So it's detention time for AMD's driver team. Fabian and Jan discuss the details in the podcast. Jan, meanwhile, is doing detention of his own, on the topic of Intel Arc: he started with a well-intentioned test course of newer and somewhat older benchmark results, but with a great many GPUs, and at the end of the Arc A580 review the realization hit him: this isn't going to work, and Wolfgang had already seen it coming. So, back to square one! Certainly not out of politeness, Corsair has also reported for detention: in this case over the firmware of the new "budget keyboard" K70 Core, which before the embargo lifted was prone to double inputs when typing quickly; a first firmware update brought improvement, but no cure yet. In stark contrast, Sony delivered this week, with the announcement of a more compact PS5 that is nevertheless not called "Slim". Fabian and Jan also discuss what's new about it. Episode #38 closes with a reader review by ComputerBase reader "Pizza!", further answers to listener questions, and the latest Sunday question. Enjoy listening!
We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support. We are facing a massive GPU crunch. As both startups and VCs hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it. We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects: * MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s! * Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product. * MLC LLM: a framework that allows any language models to be deployed natively on different hardware and software stacks. The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to their NVIDIA counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs some of the top NVIDIA consumer cards. If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM. 
While speed performance isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware. We also enjoyed getting a peek into TQ's process, which involves a lot of sketching: With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue to get models in the hands of more people, and innovative software solutions to hardware problems! Show Notes * TQ's Projects: * XGBoost * Apache TVM * MXNet * MLC * OctoML * CMU Catalyst * ONNX * GGML * Mojo * WebLLM * RWKV * HiPPO * Tri Dao's Episode * George Hotz Episode People: * Carlos Guestrin * Albert Gu Timestamps * [00:00:00] Intros * [00:03:41] The creation of XGBoost and its surprising popularity * [00:06:01] Comparing tree-based models vs deep learning * [00:10:33] Overview of TVM and how it works with ONNX * [00:17:18] MLC deep dive * [00:28:10] Using int4 quantization for inference of language models * [00:30:32] Comparison of MLC to other model optimization projects * [00:35:02] Running large language models in the browser with WebLLM * [00:37:47] Integrating browser models into applications * [00:41:15] OctoAI and self-optimizing compute * [00:45:45] Lightning Round Transcript Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is assistant professor in ML computer science at CMU, Carnegie Mellon University, also helping to run Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42]Tianqi: I'm also, you know, very enthusiastic about open source. So I'm also a VP and PMC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53]Swyx: Yeah. 
So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08]Tianqi: Let me say, yeah, so normally when I do, I really love coding, even though like I'm trying to run all those things. So one thing that I keep a habit on is I try to do sketchbooks. I have a book, like real sketchbooks to draw down the design diagrams and the sketchbooks I keep sketching over the years, and now I have like three or four of them. And it's kind of a usually a fun experience of thinking the design through and also seeing how open source project evolves and also looking back at the sketches that we had in the past to say, you know, all these ideas really turn into code nowadays. [00:01:43]Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, he'll be a very accomplished engineer. Like you built like three of these. What's that process like for you? Like it's the sketchbook, like the start, and then you think about the code or like. [00:01:59]Swyx: Yeah. [00:02:00]Tianqi: So, so usually I start sketching on high level architectures and also in a project that works for over years, we also start to think about, you know, new directions, like of course generative AI language model comes in, how it's going to evolve. So normally I would say it takes like one book a year, roughly at that rate. It's usually fun to, I find it's much easier to sketch things out and then gives a more like a high level architectural guide for some of the future items. Yeah. [00:02:28]Swyx: Have you ever published this sketchbooks? Cause I think people would be very interested on, at least on a historical basis. Like this is the time where XGBoost was born, you know? Yeah, not really. [00:02:37]Tianqi: I started sketching like after XGBoost. 
So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48]Swyx: Yeah, we'll try to publish them and publish something in the journals. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57]Alessio: Yeah. And yeah, talking about XGBoost, so a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using them in like machine learning competitions. And I think there's like a whole Wikipedia page of like all state-of-the-art models. They use XGBoost and like, it's a really long list. When you were working on it, so we just had Tri Dao, who's the creator of FlashAttention on the podcast. And I asked him this question, it's like, when you were building FlashAttention, did you know that like almost any Transformer-based model will use it? And so I asked the same question to you when you were coming up with XGBoost, like, could you predict it would be so popular or like, what was the creation process? And when you published it, what did you expect? We have no idea. [00:03:41]Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning just came out. Like that was the time where AlexNet just came out. And one of the ambitious mission that myself and my advisor, Carlos Guestrin, then is we want to think about, you know, try to test the hypothesis. Can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually like one of the key characteristics of deep learning is that it's taking a lot [00:04:22]Swyx: of data, right? 
[00:04:23]Tianqi: So we will be able to get the same amount of performance. That's a hypothesis we're setting out to test. Of course, if you look at now, right, that's a wrong hypothesis, but as a byproduct, what we find out is that, you know, most of the gradient boosting library out there is not efficient enough for us to test that hypothesis. So I happen to have quite a bit of experience in the past of building gradient boosting trees and their variants. So XGBoost was kind of like a byproduct of that hypothesis testing. At that time, I'm also competing a bit in data science challenges, like I worked on KDDCup and then Kaggle kind of become bigger, right? So I kind of think maybe it's becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That tends to be like a very good decision, right, to be effective. Usually when I build it, we feel like maybe a command line interface is okay. And now we have a Python binding, we have R bindings. And then it realized, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on to building distributed support to make sure it works on any platform and so on. And even at that time point, when I talked to Carlos, my advisor, later, he said he never anticipated that we'll get to that level of success. And actually, why I pushed for gradient boosting trees, interestingly, at that time, he also disagreed. He thinks that maybe we should go for kernel machines then. And it turns out, you know, actually, we are both wrong in some sense, and Deep Neural Network was the king of the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01]Swyx: Interesting. [00:06:02]Alessio: I'm always curious when it comes to these improvements, like, what's the design process in terms of like coming up with it? 
And how much of it is a collaborative with like other people that you're working with versus like trying to be, you know, obviously, in academia, it's like very paper-driven kind of research driven. [00:06:19]Tianqi: I would say the XGBoost improvement at that time point was more on like, you know, I'm trying to figure out, right. But it's combining lessons. Before that, I did work on some of the other libraries on matrix factorization. That was like my first open source experience. Nobody knew about it, because you'll find, likely, if you go and try to search for the package SVDFeature, you'll find some SVN repo somewhere. But it's actually being used for some of the recommender system packages. So I'm trying to apply some of the previous lessons there and trying to combine them. The later projects like MXNet and then TVM is much, much more collaborative in a sense that... But, of course, XGBoost has become bigger, right? So when we started that project myself, and then we have, it's really amazing to see people come in. Michael, who was a lawyer and now works in the AI space as well, contributing visualizations. Now we have people from our community contributing different things. So XGBoost even today, right, it's a community of committers driving the project. So it's definitely something collaborative and moving forward on getting some of the things continuously improved for our community. [00:07:37]Alessio: Let's talk a bit about TVM too, because we got a lot of things to run through in this episode. [00:07:42]Swyx: I would say that at some point, I'd love to talk about this comparison between XGBoost or tree-based type AI or machine learning compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah. 
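The core loop of gradient boosting, which XGBoost implements and heavily optimizes, is simple enough to sketch in plain Python: repeatedly fit a small tree to the residuals of the current ensemble and add it with a shrinkage factor. The following is a toy stand-in for illustration only (depth-1 stumps, squared error, made-up data), not XGBoost's actual implementation:

```python
# Toy gradient boosting for squared error: each round fits a depth-1
# "stump" to the current residuals and adds it, scaled by a learning rate.
# Illustrative sketch only; XGBoost adds regularization, second-order
# gradients, sparsity handling, and far smarter tree growing.

def fit_stump(xs, residuals):
    """Return the depth-1 predictor minimizing squared error on residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left)
        err += sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, rounds=500, lr=0.3):
    """Fit an ensemble of stumps by gradient boosting on squared error."""
    stumps, preds = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 1.0, 1.0, 3.0, 3.0]  # a step-shaped target
model = boost(xs, ys)
```

Each round attacks whatever the current ensemble still gets wrong, which is why training error shrinks geometrically and why the shrinkage factor trades extra rounds for better generalization.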
[00:08:04]Tianqi: Actually, what I said, when we test the hypothesis, the hypothesis is kind of, I would say it's partially wrong, because the hypothesis we want to test now is, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17]Swyx: now today, right? [00:08:18]Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being able to be agnostic to scale of input and be able to automatically compose features together. And I know there are attempts on building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we're building TVM, we build cost models for the programs, and actually we are using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really easy to get it to work out of the box. And also, you will be able to get a bit of interpretability and control monotonicity [00:09:18]Swyx: and so on. [00:09:19]Tianqi: So yes, it's still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34]Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41]Tianqi: I think there are projects that try to bring a transformer-type model for tabular data. I don't remember specifics of them, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits. 
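The "agnostic to scale of input" property TQ mentions is worth making concrete: a tree split only asks whether a feature is above or below a threshold, so any strictly monotone rescaling of the inputs (log, standardization, unit changes) produces the same partition and the same predictions. A toy check with a single regression stump on made-up data:

```python
import math

# A depth-1 regression stump. Splits compare only the *order* of the
# feature values, so a strictly monotone transform of the inputs yields
# the same partition of the training points and identical predictions.

def stump_predictions(xs, ys):
    """Fit the best single-split stump and return its predictions on xs."""
    best = None
    for t in sorted(xs)[:-1]:  # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - lm) ** 2 for y in left)
        err += sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return [lm if x <= t else rm for x in xs]

xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [0.0, 0.0, 1.0, 1.0, 1.0]
raw = stump_predictions(xs, ys)
logged = stump_predictions([math.log(x) for x in xs], ys)  # monotone rescale
assert raw == logged  # same splits chosen, same predictions
```

This is also why tree models need no feature normalization, unlike neural networks, whose gradients are sensitive to input scale.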
So I think maybe eventually it's not even a replacement, it will be just an ensemble of models that you can call. Perfect. [00:10:07]Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about, so this came out about at the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model, then goes to ONNX, then goes to the TVM. But I think a lot of people don't understand the nuances. I can get a bit of a backstory on that. [00:10:33]Tianqi: So actually, that's kind of an ancient history. Before XGBoost, I worked on deep learning for two years or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machine for ImageNet classification. That is the thing I'm working on. And that was before AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. It took me about six months to get one model working. And eventually, that model is not so good, and we should have picked a better model. But that was like an ancient history that really got me into this deep learning field. And of course, eventually, we find it didn't work out. So in my master's, I ended up working on recommender system, which got me a paper, and I applied and got a PhD. But I always want to come back to work on the deep learning field. So after XGBoost, I think I started to work with some folks on this particular MXNet. At that time, it was like the frameworks of Caffe, Theano, PyTorch haven't yet come out. And we're really working hard to optimize for performance on GPUs. At that time, I found it's really hard, even for NVIDIA GPU. It took me six months. 
And then it's amazing to see on different hardwares how hard it is to go and optimize code for the platforms that are interesting. So that gets me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks. So that's the motivation of starting working on TVM. There is really too much machine learning engineering needed to support deep learning models on the platforms that we're interested in. I think it started a bit earlier than ONNX, but once it got announced, I think it's in a similar time period at that time. So overall, how it works is that TVM, you will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also represent a loop-level program ingest from your machine learning models. Usually, you have model formats ONNX, or in PyTorch, they have FX Tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations like fusing operators together, doing smart memory planning, and more importantly, generate low-level code. So that works for NVIDIA and also is portable to other GPU backends, even non-GPU backends [00:13:36]Swyx: out there. [00:13:37]Tianqi: So that's a project that actually has been my primary focus over the past few years. And it's great to see how it started from where I think we are the very early initiator of machine learning compilation. I remember there was a visit one day, one of the students asked me, are you still working on deep learning frameworks? I tell them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you say Torch Compile and other things. 
I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17]Alessio: I think the other thing that I noticed is, it's kind of like a big jump in terms of area of focus to go from XGBoost to TVM, it's kind of like a different part of the stack. Why did you decide to do that? And I think the other thing about compiling to different GPUs and eventually CPUs too, did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA and that, and how much of that went into it? [00:14:50]Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code, I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But now, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude to get me started on this. And also, I think when we look at different researchers, myself is more like a problem solver type. So I like to look at a problem and say, okay, what kind of tools we need to solve that problem? So regardless, it could be building better models. For example, while we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now. Like if you want to be able to solve machine learning problems, it's no longer at the algorithm layer, right? You kind of need to solve it from the algorithm, data, and systems angles. And this entire field of machine learning system, I think it's kind of emerging. And there's now a conference around it. And it's really good to see a lot more people are starting to look into this. [00:16:10]Swyx: Yeah. 
Are you talking about ICML or something else? [00:16:13]Tianqi: So machine learning and systems, right? So not only machine learning, but machine learning and system. So there's a conference called MLSys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people are talking about what are the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've just had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for an academic. [00:16:48]Tianqi: If I hold an academic job, I need to do services for the community. Okay, great. [00:16:53]Swyx: Your most recent venture in MLSys is going to the phone with MLC LLM. You announced this in April. I have it on my phone. It's great. I'm running Llama 2, Vicuna. I don't know what other models that you offer. But maybe just kind of describe your journey into MLC. And I don't know how this coincides with your work at CMU. Is that some kind of outgrowth? [00:17:18]Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's kind of related to what we built in TVM. So when we built TVM was five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works, the first one that works. But then we captured a lot of lessons there. So then we are building a second iteration called TVM Unity. That allows us to be able to allow ML engineers to be able to quickly capture the new model and how we demand building optimizations for them. And MLC LLM is kind of like an MLC vertical. It's more like a vertically driven organization where we go and build tutorials and go and build projects, like LLM solutions. 
So that to really show like, okay, you can take machine learning compilation technology and apply it and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run on Apple M2 Macs, the 70-billion models. Actually, on a single batch inference, more recently on CUDA, we get, I think, the best performance you can get out there already on the 4-bit inference. Actually, as I alluded earlier before the podcast, we just had a result on AMD. And on a single batch, actually, we can get the latest AMD GPU. This is a consumer card. It can get to about 80% of the 4090, NVIDIA's best consumer card out there. So it's not yet on par, but thinking about how diversity and what you can enable and the previous things you can get on that card, it's really amazing that what you can do with this kind of technology. [00:19:10]Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside a TVM. I don't know. Was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]Tianqi: So the idea is that, of course, it comes back to program representation, right? So effectively, TVM has this program representation called TVM script that contains more like computational graph and operational representation. So yes, initially, we do need to take a bit of effort of bringing those models onto the program representation that TVM supports. Usually, there are a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes PyTorch model onto TVM. That part is still being robustified so that we can bring more models in. 
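The 4-bit inference mentioned here rests on a simple idea: store each weight as a small integer and keep one floating-point scale per group of weights, dequantizing on the fly inside the kernel. A minimal sketch of group-wise symmetric int4 quantization in plain Python; the group size and the symmetric [-7, 7] range are illustrative assumptions, not MLC LLM's actual packing format:

```python
# Group-wise symmetric int4 quantization: each group of weights shares one
# float scale, and each weight is rounded to an integer in [-7, 7].
# Illustrative sketch; real 4-bit kernels pack two values per byte and
# use hardware-friendly layouts.

def quantize_int4(weights, group_size=4):
    """Return a list of (scale, int_values) pairs, one per group."""
    groups = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = max(abs(w) for w in group) / 7 or 1.0  # avoid a zero scale
        qs = [max(-7, min(7, round(w / scale))) for w in group]
        groups.append((scale, qs))
    return groups

def dequantize_int4(groups):
    """Reconstruct approximate float weights from (scale, ints) groups."""
    out = []
    for scale, qs in groups:
        out.extend(q * scale for q in qs)
    return out

weights = [0.7, -0.3, 0.1, 0.2, 0.05, -0.05, 0.6, -0.6]
approx = dequantize_int4(quantize_int4(weights))
```

The cost is one float per group plus 4 bits per weight, roughly a 4x shrink versus fp16, which is what lets a large model fit into consumer GPU or phone memory at all.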
On language model tasks, actually what we do is we directly build some of the model constructors and try to directly map from Hugging Face models. The goal is if you have a Hugging Face configuration, we will be able to bring that in and apply optimization on them. So one fun thing about model compilation is that your optimization doesn't happen only at the source-language level, right? For example, if you're writing PyTorch code, you just go and try to use a better fused operator at a source code level. Torch Compile might help you do a bit of things in there. In most of the model compilations, it not only happens at the beginning stage, but we also apply generic transformations in between, also through a Python API. So you can tweak some of that. So that part of optimization helps a lot of uplifting in getting both performance and also portability on the environment. And another thing that we do have is what we call universal deployment. So if you get the ML program into this TVM script format, where there are functions that takes in tensor and output tensor, we will be able to have a way to compile it. So they will be able to load the function in any of the language runtimes that TVM supports. So if you load it in JavaScript, that's a JavaScript function that takes in tensors and outputs tensors. The same if you're loading in Python, of course, and C++ and Java. So the goal there is really bring the ML model to the language that people care about and be able to run it on a platform they like. [00:21:37]Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspirations from a lot of early innovations in the field. 
For example, TVM initially took a lot of inspiration from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on machine-learning-related compilation. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. If you look at papers in machine learning venues, the MLSys conference, of course, and also systems venues, every year there will be papers around machine learning compilation. And at the compiler conference called CGO, there's a C4ML workshop that also tries to focus on this area. So definitely it's already starting to gain traction and become a field. I wouldn't claim that I invented this field, but definitely I helped to work with a lot of folks there. And I try to bring a perspective, of course, trying to learn a lot from compiler optimizations, as well as trying to bring knowledge from machine learning and systems together. [00:23:07]Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernels? If your target is, say, a CUDA runtime, TVM helps you get there, but that lower level you don't take care of, right? [00:23:34]Swyx: There are two parts in here, right? [00:23:35]Tianqi: So first of all, there is the lower-level runtime, like the CUDA runtime. And for NVIDIA, a lot of the moat came from their libraries, like CUTLASS, cuDNN, right? Those library optimizations. And for specialized workloads, you can actually specialize them. Because in a lot of cases you'll find that if you go and do benchmarks, it's very interesting.
Like two years ago, if you tried to benchmark ResNet, for example, usually the NVIDIA library gives you the best performance. [00:24:06]Tianqi: It's really hard to beat them. But as soon as you start to change the model to something, maybe a bit of a variation of ResNet, not for the traditional ImageNet task but for, say, latent detection and so on, there will be some room for optimization, because people sometimes overfit to benchmarks. People go and optimize for the benchmark, right? So they overfit to benchmarks. So that's the largest barrier: the low-level kernel libraries. In that sense, the goal of TVM is to have a generic layer that can, of course, leverage libraries when available, but also automatically generate libraries when possible. [00:24:46]Tianqi: So in that sense, we are not restricted by the libraries that they have to offer. That's why we will be able to run on Apple M2 or WebGPU, where there's no library available, because we are automatically generating libraries. That makes it easier to support less well-supported hardware, right? WebGPU is one example. From a runtime perspective, AMD, I think before, their Vulkan driver was not very well supported. Recently, they are getting good. But even before that, we were able to support AMD through this GPU graphics backend called Vulkan, which is not as performant, but it gives you decent portability across that hardware. [00:25:29]Alessio: And I know we've got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimizations that you're doing. So there are kind of four core things, right? Kernel fusion, which we talked a bit about in the FlashAttention episode and the tinygrad one, memory planning, and loop optimization. I think those are pretty, you know, self-explanatory.
I think the ones that people have the most questions about. Can you quickly explain those? [00:25:54]Tianqi: So there are a few different things, right? Kernel fusion means that if you have an operator like a convolution, or in the case of a transformer an MLP, you have other operators that follow it, right? You don't want to launch two GPU kernels; you want to be able to put them together in a smart way. And memory planning is more about, you know, hey, if you run Python code, every time you create a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize that for you, so there is a smart memory allocator behind the scenes. But in a lot of cases, it's much better to statically allocate and plan everything ahead of time, and that's where a compiler can come in. For language models, it's much harder because of dynamic shapes. So you need what we call symbolic shape tracing. We have a symbolic variable that tells you the shape of the first tensor is n by 12, and the shape of the third tensor is also n by 12, or maybe it's n times 2 by 12. Although you don't know what n is, you will be able to know that relation and use it to reason about fusion and other decisions. Besides this, I think loop transformation is quite important, and it's actually non-trivial. If you simply write code and you want to get performance, it's very hard. For example, if you write a matrix multiply, the simplest thing you can do is the triple loop: for i, j, k: c[i][j] += a[i][k] * b[k][j]. But that code is 100 times slower than the best available code that you can get.
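The naive triple-loop matrix multiply described above, written out as a runnable baseline. This is the slow starting point; the loop transformations a compiler applies (tiling, shared memory, tensor cores) all start from this form:

```python
# Naive matmul, exactly the triple loop from the conversation:
# c[i][j] += a[i][k] * b[k][j]. Roughly 100x slower than tuned kernels.
def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i][j] += a[i][p] * b[p][j]
    return c
```

Loop tiling, vectorization, and memory-hierarchy placement rewrite these three loops without changing what they compute, which is why a compiler can automate the transformation.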
So we do a lot of transformations, like taking the original code and putting things into shared memory, making use of tensor cores, making use of memory copies, and all of this. Actually, we also realized that we cannot do all of them ourselves. So we also make the ML compilation framework a Python package, so that people will be able to continuously improve that part of the engineering in a more transparent way. We find that's very useful for us to get good performance very quickly on some of the new models. Like when Llama 2 came out, we were able to go and look at where the bottleneck is, and go and optimize those. [00:28:10]Alessio: And then the fourth one being weight quantization. So everybody wants to know about that. And just to give people an idea of the memory saving: if you're doing FP32, it's four bytes per parameter. Int8 is one byte per parameter. So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]Tianqi: Right now, a lot of people mostly use int4 for language models. So that really shrinks things down a lot. And more recently, we started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is allow developers to customize the quantization they want, but we still bring the optimized code for them. So we are working on this item called bring-your-own-quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think it's an open field that's being explored. Can you bring in more sparsity? Can you quantize activations as much as possible, and so on? And it's going to be something that's going to be relevant for quite a while.
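A quick sketch of the memory arithmetic behind those bytes-per-parameter numbers (the 7B parameter count is illustrative, not a figure from the conversation):

```python
# Weight memory footprint: bytes = parameters * bits / 8.
# FP32 is 4 bytes/parameter, int8 is 1 byte, int4 is half a byte.
def weight_bytes(num_params, bits):
    return num_params * bits // 8

params = 7_000_000_000            # an illustrative 7B-parameter model
fp32_gb = weight_bytes(params, 32) / 1e9   # 28.0 GB of weights
int8_gb = weight_bytes(params, 8) / 1e9    # 7.0 GB
int4_gb = weight_bytes(params, 4) / 1e9    # 3.5 GB
```

This is why int4 makes large models fit on consumer devices: the weights alone shrink 8x versus FP32, before even counting activation and KV-cache memory.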
[00:29:27]Swyx: You mentioned something I wanted to double back on, which is that most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML-type people, or are the researchers who are training the models also using int4? [00:29:40]Tianqi: Sorry, I'm mainly talking about inference, not training, right? When you're doing training, of course, int4 is harder; maybe you could do some form of mixed precision there. For inference, I think int4 is kind of like, in a lot of cases, you will be able to get away with int4. And that does bring a lot of savings in terms of memory overhead, and so on. [00:30:09]Alessio: Yeah, that's great. Let's talk a bit about maybe GGML, and then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model-level re-implementation and improvements. Mojo is a language, a superset of Python. You're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]Tianqi: So I think in this case, it's great to see the ecosystem become so rich, with so many different approaches. In our case, GGML is more like you're implementing something from scratch in C, right? That gives you the ability to go and customize for each particular hardware backend. But then you will need to write your own CUDA kernels, and write separately for AMD, and so on. So the engineering effort is a bit broader in that sense. Mojo, I have not looked at in specific detail yet. I think it's fair to say it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it sits in an interesting place there. In the case of MLC, we do not want to have an opinion on how, where, or in which language people want to develop and deploy, and so on. And we also realize that there are actually two phases.
You want to be able to develop and optimize your model. By optimization, I mean really bringing in the best CUDA kernels and doing some of the machine learning engineering in there. And then there's a phase where you want to deploy it as part of an app. So if you look at the space, you'll find that GGML is more like: I'm going to develop and optimize in the C language, and in most of the low-level languages they have. And Mojo is: you develop and optimize in Mojo, and you deploy in Mojo. In fact, that's the philosophy they want to push for. In the MLC case, we find that if you want to develop models, the machine learning community likes Python, so Python is the language you should focus on. So in the case of MLC, we really want to enable you not only to define your model in Python, which is very common, but also to do the ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things, in Python, so that it's customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're maybe an embedded systems person, maybe you'd prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want this vision of: you build a generic optimization in Python, then you deploy it universally onto the environments that people like. [00:32:54]Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure that we cover is that I think you are one of this emerging set of academics who also very much focus on your artifacts of delivery. Of course. Something we've talked about before, being very focused on the GitHub. And obviously you treated XGBoost like a product, you know? And now you're publishing an iPhone app. Okay. Yeah. Yeah.
What is your thinking about academics getting involved in shipping products? [00:33:24]Tianqi: I think there are different ways of making impact, right? Definitely, you know, there are academics who are writing papers and building insights for people, so that people can build products on top of them. In my case, I think in the particular field I'm working on, machine learning systems, we really need to be able to get it into the hands of people, so that we really see the problem, right? And we show that we can solve the problem. And it's a different way of making impact. And there are academics doing similar things. Like, you know, if you look at some of the people from Berkeley, right? Every few years, they will come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impact. And I feel like being able to do open source and work with the open source community is really rewarding, because we have a real problem to work on when we build our research. Those research efforts come together, and people will be able to make use of them. And we also start to see interesting research challenges that we wouldn't otherwise see, right, if you're just trying to do a prototype and so on. So I feel like it's one interesting way of making impact, making contributions. [00:34:40]Swyx: Yeah, you definitely have a lot of impact there. And having experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation, human compilation, effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]Tianqi: So I think there are a few elements that need to come in, right? First of all, you know, we do need a MacBook, the latest one, like the M2 Max, because you need the memory to be big enough to cover that.
So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM. So the M2 Max, the upper version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same: whether it's running on iPhone, on server cloud GPUs, on AMD, or on a MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customized iteration for particular ones. And then it runs in the browser runtime, this package, WebLLM. So what we do is we take that original model and compile it to what we call WebGPU, and then WebLLM will pick it up. WebGPU is this latest GPU technology that major browsers are shipping right now. So you can get it in Chrome already. It allows you to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially we asked the question: can you run the 70 billion model on a MacBook? That was the question we were asking. So first, actually, Jin Lu, the engineer pushing this, got the 70 billion model running on a MacBook. We had a CLI version. So in MLC, that runs through a Metal accelerator; effectively, you use the Metal programming language to get the GPU acceleration. So we found, okay, it works for the MacBook. Then we asked: we have a WebGPU backend, why not try it there? So we just tried it out. And it's really amazing to see everything up and running. And actually, it runs smoothly in that case. So I do think there are some interesting use cases already in this, because everybody has a browser. You don't need to install anything. I think it doesn't make sense yet to really run a 70 billion model in a browser, because you need to be able to download the weights and so on. But I think we're getting there.
Effectively, the most powerful models, you will be able to run on a consumer device. It's really amazing. And also, in a lot of cases, there might be use cases. For example, if I'm going to build a chatbot that I talk to and it answers questions, maybe some of the components, like the voice-to-text, could run on the client side. So there are a lot of possibilities for having something hybrid that contains an edge component and something that runs on a server. [00:37:47]Alessio: Do these browser models have a way for applications to hook into them? So if I'm building an app, I can use OpenAI or I can use the local model. Of course. [00:37:56]Tianqi: Right now, actually, we are building... So there's an NPM package called WebLLM, right? So that if you want to embed it into your web app, you will be able to directly depend on WebLLM and use it. We also have a REST API that's OpenAI-compatible. That REST API, I think, right now, is actually running on a native backend, so that with a CUDA server it's faster to run on the native backend. But we also have a WebGPU version of it that you can go and run. So yeah, we do want to have easier integrations with existing applications. And the OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]Swyx: I actually did not know there's an NPM package; that makes it very, very easy to try out and use. One thing I'm unclear about is the chronology. Because as far as I know, Chrome shipped WebGPU at the same time that you shipped WebLLM. Okay, yeah. So did you have some kind of secret chat with Chrome? [00:38:57]Tianqi: The good news is that Chrome is doing a very good job of having early releases. So although the official shipment of Chrome WebGPU was at the same time as WebLLM, you could actually try out WebGPU technology in Chrome earlier. There is an unstable version called Canary.
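Because the REST API is described as OpenAI-compatible, a client only needs to build the standard chat-completions request body. A minimal sketch, where the endpoint URL and model name are assumptions for illustration, not values from the conversation:

```python
import json

# Hypothetical local endpoint; an OpenAI-compatible server accepts the
# standard chat-completions request shape regardless of which model backs it.
ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed address

def build_chat_request(prompt, model="local-llm"):
    # The same JSON body an OpenAI-style client would send.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = json.dumps(build_chat_request("Hello from the client!"))
# POST `body` to ENDPOINT with Content-Type: application/json to get a reply.
```

Because the wire format matches, existing OpenAI client code can be pointed at the local server by swapping the base URL.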
I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models. It was running on less interesting, well, still quite interesting models. And then this year, we really started to see it mature and the performance keep up. So we have a more serious push of bringing the language-model-compatible runtime onto WebGPU. [00:39:45]Swyx: I think you'd agree that the hardest part is the model download. Have there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it in a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to re-download the model again. So there is already something there. But of course, you have to download the model at least once to be able to use it. [00:40:19]Swyx: Okay. One more thing just in general before we're about to zoom out to OctoAI. The last question is: you're not the only project working on, I guess, local models. That's right. Alternative models. There's GPT4All, there's Ollama, which just recently came out, and there's a bunch of these. What would be your advice to them on what's a valuable problem to work on? And what are just thin wrappers around GGML? Like, what are the interesting problems in this space, basically? [00:40:45]Tianqi: I think making the APIs better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things.
So we're also looking forward to collaborating with all those ecosystems and working on support to bring in models more universally, and to keep up the best performance, when possible, in a more push-button way. [00:41:15]Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service that basically focuses on optimizing model runtimes and acceleration and compilation. What has been the evolution there? Octo started as kind of a traditional MLOps tool, where people were building their own models and you helped them on that side. And now it seems like most of the market is shifting to starting from pre-trained generative models. Yeah, what has that experience been like for you, how have you seen the market evolve, and how did you decide to release OctoAI? [00:41:52]Tianqi: One thing that we found out is that, on one hand, it's really easy to go and get something up and running, right? But when you start to consider all the possible availability and scalability issues, and even integration issues, things become kind of interesting and complicated. So we really want to help people make that part easy, right? And now, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help elevate, building on top of the technology we've built to enable things like portability across hardware. And you will be able to not worry about the specific details, right? Just focus on getting the model out. We'll try to work on the infrastructure and other things that help on the other end. [00:42:45]Alessio: And when it comes to getting optimization at the runtime level: we run an early adopters community, and I see that most enterprises' issue is how to actually run these models. Do you see that as one of the big bottlenecks now?
I think a few years ago it was like, well, we don't have a lot of machine learning talent, we cannot develop our own models. Versus now it's like, there are these great models you can use, but I don't know how to run them efficiently. [00:43:12]Tianqi: That depends on how you define running, right? On one hand, it's easy: you download MLC, you run it on a laptop. But then there are also different decisions, right? What if you are trying to serve larger user requests? What if those requests change? What if the availability of hardware changes? Right now it's really hard to get the latest NVIDIA hardware, unfortunately, because everybody's trying to build things using the hardware that's out there. So I think when the definition of run changes, there are a lot more questions around things. And also, in a lot of cases, it's not only about running models, it's also about being able to solve problems around them. How do you manage your model locations, and how do you make sure that you get your model close to your execution environment more efficiently? So definitely a lot of engineering challenges out there that we hope to alleviate, yeah. And also, if you think about the future, definitely I feel like, given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include mechanisms for cutting down costs, bringing something to the edge and the cloud in a more natural way. So I feel like this is still a very early stage of where we are, but it's already good to see a lot of interesting progress. [00:44:35]Alessio: Yeah, that's awesome. I would love, I don't know how much we're going to go in depth into it, but what does it take to actually abstract all of this from the end user? You know, they don't need to know what GPUs you run, what cloud you're running them on. You take all of that away.
What was that like as an engineering challenge? [00:44:51]Tianqi: So I think there are engineering challenges there. First of all, you need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, perhaps not too surprisingly, that most of the latest libraries work well on the latest GPU. But there are other GPUs out there in the cloud as well. So certainly, being able to have the know-how and do model optimization is one thing, right? There's also the infrastructure for scaling things up and locating models. And in a lot of cases, we do find that for typical models, it also requires vertical iteration. So it's not about, you know, building a silver bullet and that silver bullet solving all the problems. It's more about, you know, we're building a product, we work with the users, and we find interesting opportunities at a certain point. Then our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]Swyx: Awesome. [00:45:46]Alessio: We can jump into the lightning round, unless, I don't know, Sean, if you have more questions, or TQ, if you have more stuff you wanted to talk about that we didn't get a chance to touch on. [00:45:54]Alessio: Yeah, we have talked a lot. [00:45:55]Swyx: So, yeah. We always like to ask, you know, do you have a commentary on other parts of AI and ML that are interesting to you? [00:46:03]Tianqi: So right now, I think one thing that we are really pushing hard for is this question of how far we can bring open source, right? I'm kind of a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that, you know, you just have a few big players, you just talk to those bigger language models, and they can do everything, right?
On the other hand, one of the things that we in academia are really excited about and pushing for, and that's one reason why I'm pushing for MLC, is: can we build something where you have different models? You have personal models that know the best movies you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And have a wide ecosystem of AI agents that helps each person, while still being able to do things like personalization. Some of them can run locally, some of them, of course, run on a cloud, and how do they interact with each other? So I think that is a very exciting time, where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]Swyx: One more thing, which is something I'm also pursuing, and this kind of goes back into predictions, but also back into your history: do you have any idea, or are you looking out for anything, post-transformers as far as architecture is concerned? [00:47:32]Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are state space models, where, like, some of our colleagues, like Albert Gu, worked on the HiPPO models, right? And then there is an open source model called RWKV. It's a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see it as one of the models. [00:48:00]Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing. It's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]Tianqi: I do love to see this model space continue growing, right? And I feel like, in a lot of cases, it's just that the attention mechanism is getting changed in some sense.
So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we are trying to push on is productivity for machine learning engineering, so that as soon as some of these models come out, we will be able to, you know, empower them on those environments that are out there. [00:48:43]Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about that stuff. Okay. Lightning round, as always fun. I'll take the first one. Acceleration. What has already happened in AI that you thought would take much longer? [00:48:59]Tianqi: The emergence of conversational chatbot ability is something that kind of surprised me before it came out. This is one piece that I originally thought would take much longer, but yeah, [00:49:11]Swyx: it happened. And it's funny, because the original ELIZA chatbot was something that goes all the way back in time, right? And then it just suddenly came back again. Yeah. [00:49:21]Tianqi: It's always interesting to think about, but with a kind of different technology [00:49:25]Swyx: in some sense. [00:49:25]Alessio: What about the most interesting unsolved question in AI? [00:49:31]Swyx: That's a hard one, right? [00:49:32]Tianqi: So I can tell you what I'm excited about. I think I have always been excited about this idea of continuous learning and lifelong learning in some sense: how AI continues to evolve with the knowledge that's out there. It seems that we're getting much closer with all those recent technologies. So being able to develop systems that support this, and to think about how AI continues to evolve, is something that I'm really excited about. [00:50:01]Swyx: So specifically, just to double-click on this, are you talking about continuous training?
Like, actual training. [00:50:06]Tianqi: I feel like, you know, training, adaptation, it's all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context getting continuously curated and fed into models. So I think all these things are interesting and relevant here. [00:50:29]Swyx: Yeah. I think this is something that people are really asking about. You know, right now we have moved a lot into the sort of pre-training phase and off-the-shelf, you know, model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be for takeaways. What's basically one message that you want every listener, every person, to remember today? [00:50:54]Tianqi: I think it's getting more obvious now, but one of the things that I always want to mention in my talks is that, you know, when people think about AI applications, originally they think about algorithms a lot more, right? Algorithms and models are still very important. But usually when you build AI applications, it takes, you know, both the algorithm side, the system optimizations, and the data curation, right? So it takes a combination of so many facets to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]Tianqi: Thank you for having me. [00:51:47]Swyx: Yeah. [00:51:47]Alessio: Thanks for coming on, TQ. [00:51:49]Swyx: Have a good one. [00:51:49] Get full access to Latent Space at www.latent.space/subscribe
Ben Bajarin and Jay Goldberg discuss new competitive dynamics emerging in semis thanks to AI. They also highlight a few takeaways from AMD's recent AI and Datacenter Day, which they attended.
The EU Parliament votes for the AI Act, AMD's new AI chips, Bard learns to code, tensions between Microsoft and OpenAI, and Google SGE put to the test. https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/
Without question, Star Wars Jedi: Survivor has made a technical crash landing on PC: even high-end CPUs are currently unable to keep fast graphics cards busy, and depending on the scene, the FPS drops noticeably and the game stutters. Still, graphically the game has a lot to offer, as Jan found on the Asus ROG Strix Scar 17 with AMD's Dragon Range CPU, and he then discusses the review of the 16-core Ryzen 9 7945HX with Fabian. Also up for discussion again: the (far too) late end of the line for a disappointing 170-euro mouse from Roccat, and at the end there's another serving from the gaming graphics card rumor mill.
This week we talk about the renaissance of handheld gaming consoles, which appears to be in full swing: Steam Deck, Switch, and the new Asus Ally. We also cover the best mid-range phones of 2023 and AMD's worst CPU-naming decision. This and much more in this week's episode of TechBubbel. 00:00:00 – Welcome to TechBubbel 00:02:22 – Thanks to our patrons! 00:05:53 – What is the future of handheld gaming consoles? 00:28:26 – The best mid-range phone right now 00:43:58 – Nvidia turns 30 00:53:56 – Facepalm of the week 00:58:17 – Bubbel of the week 01:02:57 – Support us at Patreon.com/techbubbel! Executive producers: Oskar Eriksson Joa War Mathias Alexandersson Mattias Ctrl Enqvist Emil Råsmark Rikner Oscar Wahlberg Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Daniel Timm Mats Jidaker
In this week's episode, we're so excited to bring Lynn Geautraux to you! Lynn is also going to be a presenter at the symposium. He self-published his book, Comprehensive Cane Tip and AMD Modifications, Design and Refurbishing, in which he shares all his knowledge and his many years of experience tinkering with AMDs. We can learn from him here, and you can even hear from him in person during his upcoming symposium presentation. If this doesn't get you super excited already, come join us in the podcast and listen to what he has to share! Links: Allied Independence, Website | Instagram | Facebook Comprehensive Cane Tip and AMD Modifications, Design and Refurbishing, Book
CES 2023 is over and brought numerous new technical announcements with it. Editors Jan and Fabian discuss them and offer a look behind the scenes.
This week we talk about CES 2023 and the biggest news, including MagSafe coming to everyone, Nvidia's bad launch that we didn't think could get any worse, new processors from Intel and AMD, and new bright OLED TVs. This and much more in this week's episode of TechBubbel. 00:02:39 – Thanks to our patrons 00:04:45 – Intel brings 24 cores to laptops 00:15:32 – Nvidia is the worst 00:34:55 – AMD's gaming CPUs could win 2023 00:53:16 – The biggest OLED news ever 01:02:27 – Facepalm of the week 01:04:23 – Bubbel of the week Executive producer: Mattias Ctrl Enqvist Mathias Alexandersson Joa War Oskar Eriksson Emil Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Mats Jidaker Daniel Timm
A wild week. It started with the reviews and launch of AMD's Radeon RX 7900 XT(X): drivers not finished, performance numbers all over the place, power consumption far too high, hardly any stock at retailers. Trump's big announcement was... trading-card NFTs! And Elno. Elno, Elno, Elno. First he suspended numerous journalists, ostensibly for doxxing, then he banned posting links to other social media networks. He of course reversed that less than two hours (!) after our recording on Sunday evening, December 18, 2022, and then held a "vote of confidence" via Twitter poll on whether he should step down as CEO (57% voted "yes"). We didn't record an update on that; at some point, enough is enough. It could already be outdated again by the time this episode goes online. Enjoy episode 131! Speakers: Meep, Michael Kister Visit us on Discord https://discord.gg/SneNarVCBM on Twitter https://twitter.com/technikquatsch on Youtube https://www.youtube.com/channel/UCm7FRJku8ZzrZkmeY79j0WQ 00:00:49 Pre-Christmas planning and Avatar; Tasting History with Max Miller: The true story of the First Thanksgiving https://www.youtube.com/watch?v=ixTkzBuD-cw 00:08:20 Muskerade https://www.businessinsider.de/wirtschaft/twitter-hat-keine-chance-elon-musk-war-schon-immer-ein-blender-jetzt-fliegt-er-auf-a/ https://twitter.com/JanAlbrecht/status/1604589609303900160 https://www.nbcnews.com/tech/social-media/twitter-suspends-journalists-covering-elon-musk-company-rcna62032 https://www.zdf.de/nachrichten/digitales/twitter-sperre-journalisten-musk-julian-jaursch-100.html#xtor=CS5-62 https://twitter.com/az_rww/status/1604008769397981185 Lambrecht, Musk - und eine Adelige: Verleihung der Goldenen Vollpfosten | heute-show vom 16.12.2022 https://www.youtube.com/watch?v=h8AUBxaxHtQ#t=7m56s 00:23:24 Trump NFTs https://collecttrumpcards.com/ 00:27:18 Steam Deck and more have landed in Japan, further plans from Valve https://www.theverge.com/23499215/valve-steam-deck-interview-late-2022 00:30:26 Botched launch for the AMD Radeon RX 7900 XT(X) https://www.computerbase.de/2022-12/amd-radeon-rx-7900-xtx-xt-review-test/ Gamer's Nexus: AMD Radeon RX 7900 XTX Review & GPU Benchmarks https://www.youtube.com/watch?v=We71eXwKODw Digital Foundry: Radeon RX 7900 XTX/ RX 7900 XT vs RTX 4080 Review https://www.youtube.com/watch?v=8RN9J6cE08c Hardware Unboxed: RTX 4080 Killer? Radeon RX 7900 XTX Review & Benchmarks https://www.youtube.com/watch?v=4UFiG7CwpHk 00:41:20 A giant leap for fusion research, a small one for humanity https://bigthink.com/the-future/fusion-power-nif-hype-lose-energy/ https://www.spektrum.de/news/ist-in-der-fusionsforschung-ein-durchbruch-gelungen/2087187 00:59:15 Caution with science news, or: better sex with chocolate 01:10:45 Intel wants more money for the planned fab near Magdeburg https://www.wiwo.de/technologie/digitale-welt/halbleiterindustrie-intel-will-mehr-geld-baustart-von-werk-in-magdeburg-koennte-sich-verschieben/28874806.html 01:19:25 Satellite internet for data centers https://www.handelsblatt.com/technik/it-internet/satelliten-internet-rivale-fuer-starlink-europaeische-satelliten-fuer-neues-hochleistungsnetz-gestartet/28874130.html 01:28:56 Interim Elno update (yes, but not the latest) 01:34:30 M&M tired
The post Folge 131: Verkorkster Start von AMDs 7900XT(X), Trump-NFTs, Fusionsreaktoren dauern noch und Elno dreht am Rad appeared first on Technikquatsch.
We talk about, among other things, AMD's new cards, the ZA/UM mess, PSVR2, Metroid Prime 4, Jedi: Survivor, Gotham Knights, Case of the Golden Idol, Signalis, Animal Crossing: New Horizons, Bayonetta 3, and God of War: Ragnarök. Support us on Patreon! For 50 SEK a month you get access to the podcast uncut, right after it's recorded, an exclusive monthly bonus episode called KB+(PLUS), and on top of that access to all previous exclusive content as well as all material during the Christmas/New Year and summer breaks. For 100 SEK a month you get everything the 50 SEK patrons get, but you ALSO get to be part of the podcast through the THRILLING segment "Audio Log", where we play a short audio message from our 100 SEK patrons and respond to it, answer it, or pass judgment on it. At this level you also get first dibs on any codes & freebies we occasionally offer! www.patreon.com/kontrollbehov Buy our merch at Podstore.se! https://www.podstore.se/podstore/kontrollbehov/ Visit our YouTube channel and subscribe: https://www.youtube.com/channel/UClQ2sTbiCcR0dqNFHwcTB0g Join the group Kontrollbehov - Eftersnack on Facebook: https://www.facebook.com/groups/1104625369694949/?ref=bookmarks We're also on Discord, of course! https://discord.gg/848F6TWXDY Get in touch: kontrollbehovpodcast@gmail.com
Today we talk about the insane prices of the new GPU lineup from NVIDIA, AMD's hot CPUs, Intel ARC, Google Stadia, and Portal RTX. Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
Today we talk about AMD's announcement on the longevity of AM5, Intel revealing A770 performance, and Ethereum PoW. Gaming news covers Overwatch ditching loot boxes, no split-screen Halo, and Valve games in development. https://www.youtube.com/watch?v=p0yjlORHYYg Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
It's Episode 70 of the Zenspath 4 Button Podcast! Join Jeremy (@Zenspath), Rachel (@Out_Racheous), Kelly (@MississippiSci) & Joie (@fromforestgreen) as we discuss the recent Nintendo Direct Mini, the Xenoblade 3 CE debacle, AMD's new game dev tools, John Williams finishes up, Kojima does good, Kotick does bad, & more!
This week we talk about the best and hottest from the world's biggest computer trade show. Just how hot will AMD Zen 4 run, why is Corsair making a laptop, and which announcements were missing from this year's show? This and much more in this week's episode of TechBubbel. 00:04:04 – AMD's 5.5 GHz flagship 00:28:04 – Corsair's ugly laptop 00:32:08 – A 500 Hz gaming monitor 00:37:57 – Nvidia, Qualcomm, and Intel 00:46:50 – We sum up Computex 2022 00:48:52 – Facepalm of the week 00:51:42 – Bubbel of the week Executive producer: Mattias Ctrl Enqvist Joa War Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Oskar Eriksson Mats Jidaker Daniel Timm Mathias Alexandersson Emil Råsmark Rikner
In the regular show we talk about AMD's announcements at Computex, mainly the much-anticipated 7000 series CPUs, which will debut in the fall with the new AM5 motherboards. Also: AMD driver updates doing great for DX11, Steam Deck support on the rise, Star Citizen gouging, and Duke Nukem Forever. Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
Daniel Buitrago, Brandon Fifield & Jack Lau pin the throttle with Nick Olzenak of AMDS & PWS Books Beard combs & Beardos, Wild Game Meat party, Drop Anchor get boarded, Sea Trials with the Commish, Future boat design, New Zealand's marine boat innovation, Coastal Craft to Kingfisher, Tags on Friday, buying a Cali boat, I need that boat for biz, History of AMDS and commercial drive, Snow check spring fever, SkiDoo time, sleds for kids, muff pots and cooker cans, COOP sled option, What's up with Arctic Cat, everyone needs a tractor, treasure hunters, carrying on a legacy, not getting blown off the hook, winter boating and prep, sea lions hunting B Ranger Danger, Tastes so good smallest deer story, sketchy winter boating story www.alaskawildproject.com https://www.youtube.com/channel/UCbYEEV6swi2yZWWuFop73LQ https://www.instagram.com/alaskawildproject/
While the whole world is talking about the chip shortage, Intel is laying out extremely ambitious plans for no fewer than four new processor generations for desktop PCs and notebooks through 2025. Despite years of delays in the transition to its 10 nm process, the US company is optimistic that, starting next year, it can bring a new process generation into mass production every year. With this, Intel also aims to push back against its steadily gaining competitor AMD, which recently overtook Intel in market capitalization despite generating only a quarter of its revenue. At the same time, Apple is set to complete its departure from Intel processors this year. What lies ahead for the CPU market in light of all this is the topic of a new #heiseshow. What has Intel announced for the coming three years? Where does the company stand today, and how realistic is the ambitious CPU roadmap through 2025? What is AMD planning, and how serious is the competition for Intel really? How does AMD intend to keep closing the still-sizable gap? How do the technical approaches of AMD and Intel differ? Process node sizes are only one aspect. What else do the two companies do differently, and what explains the contrasting pictures they present? And what about the rest of the competition: what does Apple's switch to its own CPU technology mean? Martin Holland (@fingolas) discusses these and many other questions, including some from the audience, with c't editor Carsten Spille (@carstenspille) and Mark Mantel of heise online in a new episode of the #heiseshow. === Advertisement / sponsor notice === Grab the exclusive deal + gift for nordVPN's birthday: nordvpn.com/heiseshow Now with the risk-free 30-day money-back guarantee. === End of advertisement / sponsor notice ===
In this episode, Aubrey shares her passion for dance and its therapeutic effect for those who experience mental health issues. She also shares how she fits dance into her marriage. Enjoy!
The Big Bells contest results: we announce the prize winners. Radio Delta 171 is having second thoughts about longwave. NASA has been flying a Boeing plane over the VOA Greenville transmitter site to understand electromagnetic interference. DJ Wolfman Jack wants recordings of his early shows from Mexico. Blue Danube Radio is cutting back; Wolf Harranth reports on its origins as the Blue Danube Network. On station labelling systems, we revisit the IDLogic idea from Pierre Schwab and a competing system called AMDS developed by Deutsche Welle. Also, do you remember when stations were thinking of adopting single sideband (SSB) in order to save bandwidth on the shortwave dial? A few years later it was dead. Mike Bird delivers the propagation forecast this week. Lovely signoff jingle from Jim Cutler. (Diana Janssen's partner is a lawyer.)
What kind of Adaptive Mobility Device (AMD) does my student need? How will I know when they are ready for a different type, or to transition to a long cane? In this episode, we break down what AMDs are, the types of AMDs, and how to know which type is right for your students. Find the rest of the show notes on our Allied Independence blog page.