POPULARITY
The stories that kept us talking all year, and are only getting hotter! Plus the big flops we're still sore about. Special Guest: Kenji Berthold.
Alan Wake 2 is out, and people are thrilled! Provided they can play it at all. On PC, it requires a graphics card from the Turing generation (RTX 2000) or RDNA 2 (Radeon RX 6000), and only secondarily because of performance: the card has to support DirectX 12 Ultimate (Feature Level 12_2), or more precisely, mesh shaders. Qualcomm has not only unveiled its latest flagship SoC, the Snapdragon 8 Gen 3, but with the Snapdragon Elite X it also wants to launch a direct attack on Apple's M2, and on laptop chips from Intel and AMD. This time, Windows-on-ARM is really supposed to work out! We will see whether actual devices follow. Mike took a look at the demo/beta of a small RTS called "Space Tales": unfortunately, he finds it pretty shoddy. Unlike "The Bloodline", no great potential shines through here. Really not good. Meep, meanwhile, has (once again) picked up Duolingo to prepare for several stays in Italy. Have fun with episode 176!

Speakers: Meep, Michael Kister
Production: Michael Kister
Cover art: Mohammed Ali Dad
Image sources: Remedy Entertainment/Epic Games/Qualcomm/Duolingo
Recording date: 27.10.2023

Visit us
on Discord: https://discord.gg/SneNarVCBM
on Twitter: https://twitter.com/technikquatsch
on Bluesky: https://bsky.app/profile/technikquatsch.bsky.social
on Youtube: https://www.youtube.com/@technikquatsch

00:00:00 Sleepy cat at Meep's place
00:05:28 The PS5 Slim's disc drive needs an internet connection to a server for its first pairing
https://www.videogameschronicle.com/news/the-new-ps5s-optional-disc-drive-requires-an-internet-connection-to-connect/
00:21:08 Meep is looking forward to playing Super Mario Wonder (and maybe Alan Wake 2 eventually)
00:25:25 Alan Wake 2 requires a GPU with DirectX 12 Feature Level 12_2 (Turing/RTX 2000 and Navi 2/Radeon RX 6000 or newer)
https://www.computerbase.de/2023-10/alan-wake-2-benchmark-test/
https://www.computerbase.de/2023-10/alan-wake-2-gtx-1000-und-rx-5000-sind-aussen-vor-spieler-ausser-sich/
https://www.pcgameshardware.de/Alan-Wake-2-Spiel-17706/Specials/Release-Test-Benchmarks-1432197/
Alan Wake 2 PC - Rasterisation Optimised Settings Breakdown - Is It Really THAT Demanding? https://www.youtube.com/watch?v=QrXoDon6fXs
https://microsoft.github.io/DirectX-Specs/d3d/MeshShader.html#motivation-for-adding-mesh-shader
Reinventing the Geometry Pipeline: Mesh Shaders in DirectX 12 | Shawn Hargreaves | DirectX Dev Day https://www.youtube.com/watch?v=CFXKTXtil34
00:48:25 Qualcomm takes on the Apple M2 and Intel/AMD laptop CPUs with the Snapdragon Elite X; Nvidia and AMD reportedly plan Windows-on-ARM chips for 2025
https://www.computerbase.de/2023-10/snapdragon-8-gen-3-im-benchmark-qualcomm-veroeffentlicht-starke-eigene-ergebnisse/
https://www.computerbase.de/2023-10/snapdragon-8-gen-3-schneller-effizienter-und-mit-generative-ai-noch-schlauer/
https://www.reuters.com/technology/nvidia-make-arm-based-pc-chips-major-new-challenge-intel-2023-10-23/
01:10:31 Rant about Microsoft
01:15:45 Space Tales preview https://store.steampowered.com/app/2457960/Space_Tales/
01:23:07 Meep plays Duolingo Italian https://www.duolingo.com
We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support.

We are facing a massive GPU crunch. As both startups and VCs hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it.

We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects:

* MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s!
* Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product.
* MLC LLM: a framework that allows any language model to be deployed natively on different hardware and software stacks.

The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to NVIDIA's counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs. some of the top NVIDIA consumer cards.

If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM. While the speed isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware.

We also enjoyed getting a peek into TQ's process, which involves a lot of sketching.

With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue in getting models into the hands of more people, and to see innovative software solutions to hardware problems!

Show Notes

* TQ's Projects:
* XGBoost
* Apache TVM
* MXNet
* MLC
* OctoML
* CMU Catalyst
* ONNX
* GGML
* Mojo
* WebLLM
* RWKV
* HiPPO
* Tri Dao's Episode
* George Hotz Episode

People:

* Carlos Guestrin
* Albert Gu

Timestamps

* [00:00:00] Intros
* [00:03:41] The creation of XGBoost and its surprising popularity
* [00:06:01] Comparing tree-based models vs deep learning
* [00:10:33] Overview of TVM and how it works with ONNX
* [00:17:18] MLC deep dive
* [00:28:10] Using int4 quantization for inference of language models
* [00:30:32] Comparison of MLC to other model optimization projects
* [00:35:02] Running large language models in the browser with WebLLM
* [00:37:47] Integrating browser models into applications
* [00:41:15] OctoAI and self-optimizing compute
* [00:45:45] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]

Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is an assistant professor in machine learning and computer science at CMU, Carnegie Mellon University, also helping to run the Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42]

Tianqi: I'm also, you know, very enthusiastic about open source.
So I'm also a VP and PMC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53]

Swyx: Yeah. So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08]

Tianqi: Let me say, yeah, so normally when I do, I really love coding, even though like I'm trying to run all those things. So one thing that I keep as a habit is I keep sketchbooks. I have, like, real sketchbooks to draw down the design diagrams, and I've kept sketching over the years, and now I have like three or four of them. And it's usually kind of a fun experience of thinking the design through, and also seeing how an open source project evolves, and also looking back at the sketches that we had in the past to say, you know, all these ideas really turned into code nowadays. [00:01:43]

Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, they'd be a very accomplished engineer. Like, you built like three of these. What's that process like for you? Like, is it the sketchbook, like, at the start, and then you think about the code, or? [00:01:59]

Swyx: Yeah. [00:02:00]

Tianqi: So usually I start sketching on high-level architectures, and also, in a project that runs over years, we also start to think about, you know, new directions, like, of course, generative AI language models come in, how it's going to evolve. So normally I would say it takes like one book a year, roughly at that rate. It's usually fun to... I find it's much easier to sketch things out, and then it gives a more high-level architectural guide for some of the future items. Yeah. [00:02:28]

Swyx: Have you ever published these sketchbooks? 'Cause I think people would be very interested, at least on a historical basis. Like, this is the time when XGBoost was born, you know? Yeah, not really. [00:02:37]

Tianqi: I started sketching after XGBoost. So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48]

Swyx: Yeah, we'll try to publish them and publish something in the journals. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57]

Alessio: Yeah. And yeah, talking about XGBoost, a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using it in machine learning competitions. And I think there's like a whole Wikipedia page of state-of-the-art models that use XGBoost, and it's a really long list. When you were working on it... So we just had Tri Dao, who's the creator of FlashAttention, on the podcast. And I asked him this question: when you were building FlashAttention, did you know that almost every transformer-based model would use it? And so I ask the same question to you: when you were coming up with XGBoost, could you predict it would be so popular, or what was the creation process? And when you published it, what did you expect? We had no idea. [00:03:41]

Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning had just come out. Like, that was the time when AlexNet just came out.
And one of the ambitious missions that myself and my advisor, Carlos Guestrin, had then is we wanted to, you know, try to test a hypothesis: can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually one of the key characteristics of deep learning is that it's taking a lot [00:04:22]

Swyx: of data, right? [00:04:23]

Tianqi: So will we be able to get the same amount of performance? That's the hypothesis we were setting out to test. Of course, if you look at it now, right, that's a wrong hypothesis, but as a byproduct, what we found out is that, you know, most of the gradient boosting libraries out there were not efficient enough for us to test that hypothesis. So I happened to have quite a bit of experience in the past of building gradient boosting trees and their variants. So XGBoost was kind of like a byproduct of that hypothesis testing. At that time, I was also competing a bit in data science challenges, like I worked on KDDCup and then Kaggle kind of became bigger, right? So I kind of thought maybe it's becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That turned out to be a very good decision. When I first built it, we felt like maybe a command line interface was okay. And then we had a Python binding, we had R bindings, and then we realized, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on building distributed support to make sure it works on any platform and so on. And even at that point, when I talked to Carlos, my advisor, later, he said he never anticipated that we would get to that level of success. And actually, why I pushed for gradient boosting trees, interestingly, at that time, he also disagreed. He thought that maybe we should go for kernel machines instead. And it turns out, you know, actually, we were both wrong in some sense, and deep neural networks were the king of the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01]

Swyx: Interesting. [00:06:02]

Alessio: I'm always curious when it comes to these improvements, like, what's the design process in terms of coming up with it? And how much of it is collaborative with other people that you're working with, versus, you know, obviously, in academia, it's very paper-driven, research-driven. [00:06:19]

Tianqi: I would say the XGBoost improvements at that time were more like, you know, me trying to figure things out, right. But it's combining lessons. Before that, I did work on some other libraries for matrix factorization. That was like my first open source experience. Nobody knew about it, because you'll find, likely, if you go and try to search for the package SVDFeature, you'll find some SVN repo somewhere. But it was actually being used for some of the recommender system packages. So I was trying to apply some of the previous lessons there and trying to combine them. The later projects like MXNet and then TVM were much, much more collaborative, in a sense that... But, of course, XGBoost has become bigger, right? So when we started that project, it was just myself, and then it's really amazing to see people come in.
Michael, who was a lawyer and now works in the AI space as well, contributed visualizations. Now we have people from our community contributing different things. So XGBoost, even today, right, it's a community of committers driving the project. So it's definitely something collaborative and moving forward on getting some of the things continuously improved for our community. [00:07:37]

Alessio: Let's talk a bit about TVM too, because we have a lot of things to run through in this episode. [00:07:42]

Swyx: I would say that at some point, I'd love to talk about this comparison between XGBoost, or tree-based type AI or machine learning, compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah. [00:08:04]

Tianqi: Actually, what I said, when we test the hypothesis, the hypothesis is kind of, I would say it's partially wrong, because the hypothesis we want to test now is, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17]

Swyx: now today, right? [00:08:18]

Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being able to be agnostic to the scale of the input and being able to automatically compose features together. And I know there are attempts at building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we're building TVM, we build cost models for the programs, and actually we are using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really easy to get them to work out of the box. And also, you will be able to get a bit of interpretability and control of monotonicity [00:09:18]

Swyx: and so on. [00:09:19]

Tianqi: So yes, it's still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34]

Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41]

Tianqi: I think there are projects that try to bring a transformer-type model to tabular data. I don't remember the specifics of them, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits. So I think maybe eventually it's not even a replacement, it will be just an ensemble of models that you can call. Perfect. [00:10:07]

Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about, so this came out at about the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model goes to ONNX, then goes to TVM. But I think a lot of people don't understand the nuances. Can we get a bit of a backstory on that? [00:10:33]
Tianqi: So actually, that's kind of ancient history. Before XGBoost, I worked on deep learning for two or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machines to ImageNet classification. That is the thing I was working on. And that was before the AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. It took me about six months to get one model working. And eventually, that model was not so good, and we should have picked a better model. But that was like ancient history that really got me into this deep learning field. And of course, eventually, we found it didn't work out. So in my master's, I ended up working on recommender systems, which got me a paper, and I applied and got a PhD. But I always wanted to come back to work on the deep learning field. So after XGBoost, I started to work with some folks on MXNet. At that time, frameworks like Caffe, Theano, PyTorch hadn't yet come out. And we were really working hard to optimize for performance on GPUs. At that time, I found it's really hard, even for an NVIDIA GPU. It took me six months. And then it's amazing to see, on different hardware, how hard it is to go and optimize code for the platforms that are interesting. So that got me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks. So that's the motivation for starting to work on TVM: so that there is really little machine learning engineering needed to support deep learning models on the platforms that we're interested in. I think it started a bit earlier than ONNX, but once ONNX got announced, I think it was in a similar time period. So overall, how it works is that TVM will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also represent loop-level programs ingested from your machine learning models. Usually, you have model formats like ONNX, or in PyTorch, they have the FX Tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations like fusing operators together, doing smart memory planning, and, more importantly, generating low-level code. So that works for NVIDIA and also is portable to other GPU backends, even non-GPU backends [00:13:36]

Swyx: out there. [00:13:37]

Tianqi: So that's a project that actually has been my primary focus over the past few years. And it's great to see how it started from where I think we were the very early initiators of machine learning compilation. I remember there was a visit one day, and one of the students asked me, are you still working on deep learning frameworks? I told them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you see Torch Compile and other things. I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17]

Alessio: I think the other thing that I noticed is, it's kind of like a big jump in terms of area of focus to go from XGBoost to TVM, it's kind of like a different part of the stack.
Why did you decide to do that? And I think the other thing about compiling to different GPUs and eventually CPUs too, did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA and that, and how much of that went into it? [00:14:50]

Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code, I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But now, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude that got me started on this. And also, I think when we look at different researchers, I myself am more like a problem-solver type. So I like to look at a problem and say, okay, what kind of tools do we need to solve that problem? So regardless, it could be building better models. For example, when we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now. Like if you want to be able to solve machine learning problems, it's no longer at the algorithm layer, right? You kind of need to solve it from both the algorithm-and-data and the systems angle. And this entire field of machine learning systems, I think it's kind of emerging. And there's now a conference around it. And it's really good to see a lot more people starting to look into this. [00:16:10]

Swyx: Yeah. Are you talking about ICML or something else? [00:16:13]

Tianqi: So machine learning and systems, right? So not only machine learning, but machine learning and systems. So there's a conference called MLSys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people are talking about what are the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]

Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've just had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for an academic. [00:16:48]

Tianqi: If I hold an academic job, I need to do services for the community. Okay, great. [00:16:53]

Swyx: Your most recent venture in MLSys is going to the phone with MLC LLM. You announced this in April. I have it on my phone. It's great. I'm running Llama 2, Vicuna. I don't know what other models you offer. But maybe just kind of describe your journey into MLC. And I don't know how this coincides with your work at CMU. Is that some kind of outgrowth? [00:17:18]

Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's kind of related to what we built in TVM. So when we built TVM, that was five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works, the first one that works. But then we captured a lot of lessons there. So then we are building a second iteration called TVM Unity. That allows ML engineers to quickly capture new models and build the optimizations they demand for them. And MLC LLM is kind of like an MLC vertical.
It's more like a vertically driven effort where we go and build tutorials and build projects, like LLM solutions, to really show that, okay, you can take machine learning compilation technology and apply it and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run the 70 billion models on Apple M2 Macs. Actually, on single-batch inference, more recently on CUDA, we get, I think, the best performance you can get out there already on 4-bit inference. Actually, as I alluded to earlier before the podcast, we just had a result on AMD. And on a single batch, actually, the latest AMD GPU, and this is a consumer card, can get to about 80% of the 4090, NVIDIA's best consumer card out there. So it's not yet on par, but thinking about the diversity and what you can enable, and what you previously could get on that card, it's really amazing what you can do with this kind of technology. [00:19:10]

Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside a TVM. I don't know. Was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]

Tianqi: So the idea is that, of course, it comes back to program representation, right? So effectively, TVM has this program representation called TVMScript that contains both a computational graph and an operational representation. So yes, initially, we do need to take a bit of effort to bring those models onto the program representation that TVM supports. Usually, there is a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes a PyTorch model onto TVM. That part is still being robustified so that we can bring more models in. On language model tasks, actually what we do is we directly build some of the model constructors and try to directly map from Hugging Face models. The goal is, if you have a Hugging Face configuration, we will be able to bring that in and apply optimizations on it. So one fun thing about model compilation is that your optimization doesn't happen only at the source language, right? For example, if you're writing PyTorch code, you just go and try to use a better fused operator at the source code level. Torch Compile might help you do a bit of things in there. In most model compilations, it not only happens at the beginning stage, but we also apply generic transformations in between, also through a Python API. So you can tweak some of that. So that part of optimization helps a lot with uplifting, getting both performance and also portability across environments. And another thing that we do have is what we call universal deployment. So if you get the ML program into this TVMScript format, where there are functions that take in tensors and output tensors, we will be able to have a way to compile it. So you will be able to load the function in any of the language runtimes that TVM supports. So you could load it in JavaScript, and that's a JavaScript function that takes in tensors and outputs tensors. The same if you're loading it in Python, of course, and C++ and Java. So the goal there is really to bring the ML model to the language that people care about and be able to run it on a platform they like. [00:21:37]
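To make that flow concrete, here is a minimal sketch of the classic TVM Relay pipeline TQ describes: ingest an ONNX model, compile it for a target, and run it from Python. The model file and input name are placeholders, and TVM's APIs have shifted across versions (especially with TVM Unity), so treat this as an illustration of the idea rather than canonical usage:

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Placeholder ONNX model and input name; any vision model works similarly.
onnx_model = onnx.load("resnet50.onnx")
shape_dict = {"data": (1, 3, 224, 224)}

# Import the model into Relay, TVM's graph-level IR.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a target. Swapping "llvm" (CPU) for "cuda", "metal", or
# "vulkan" retargets the same model: the "universal deployment" idea.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Load and run the compiled module.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```

The same compiled-artifact idea is what backs the JavaScript, C++, and Java runtimes he mentions.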
Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]

Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspiration from a lot of early innovations in the field. For example, with TVM initially, we took a lot of inspiration from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on machine-learning-related compilation. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. So if you look at papers in both machine learning venues, the MLSys conference of course, and also systems venues, every year there will be papers around machine learning compilation. And in the compiler conference called CGO, there's a C4ML workshop that is also kind of trying to focus on this area. So definitely it's already starting to gain traction and become a field. I wouldn't claim that I invented this field, but definitely I helped to work with a lot of folks there. And I try to bring a perspective, of course, trying to learn a lot from compiler optimizations as well as trying to bring knowledge in machine learning and systems together. [00:23:07]

Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernel, so to speak? So if your target is like a CUDA runtime, you still get better performance, no matter how TVM kind of helps you get there, but then that level you don't take care of, right? [00:23:34]

Swyx: There are two parts in here, right? [00:23:35]

Tianqi: So first of all, there is the lower-level runtime, like the CUDA runtime. And then actually for NVIDIA, a lot of the moat came from their libraries, like Cutlass, cuDNN, right? Those library optimizations. And also for specialized workloads, actually you can specialize them. Because in a lot of cases you'll find that if you go and do benchmarks, it's very interesting. Like two years ago, if you tried to benchmark ResNet, for example, usually the NVIDIA library [00:24:04]

Swyx: gives you the best performance. [00:24:06]

Tianqi: It's really hard to beat them. But as soon as you start to change the model to something, maybe a bit of a variation of ResNet, not for the traditional ImageNet detections, but for latent detection and so on, there will be some room for optimization, because people sometimes overfit to benchmarks. These are people who go and optimize things, right? So people overfit the benchmarks. So that's the largest barrier, like being able to get low-level kernel libraries, right? In that sense, the goal of TVM is actually that we try to have a generic layer to both, of course, leverage libraries when available, but also be able to automatically generate [00:24:45]

Swyx: libraries when possible. [00:24:46]

Tianqi: So in that sense, we are not restricted by the libraries that they have to offer. That's why we will be able to run on Apple M2 or WebGPU where there's no library available, because we are kind of automatically generating libraries. That makes it easier to support less well-supported hardware, right? For example, WebGPU is one example.
From a runtime perspective, AMD, I think, before, their Vulkan driver was not very well supported. Recently, they are getting good. But even before that, we were able to support AMD through this GPU graphics backend called Vulkan, which is not as performant, but it gives you a decent portability across those [00:25:29]

Swyx: hardware. [00:25:29]

Alessio: And I know we got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimizations that you're doing. So there's kind of four core things, right? Kernel fusion, which we talked a bit about in the FlashAttention episode and the tinygrad one, memory planning, and loop optimization. I think those are like pretty, you know, self-explanatory. I think the ones that people have the most questions about, can you quickly explain [00:25:53]

Swyx: those? [00:25:54]

Tianqi: So there are kind of different things, right? Kernel fusion means that, you know, if you have an operator like a convolution, or in the case of a transformer like an MLP, you have other operators that follow it, right? You don't want to launch two GPU kernels. You want to be able to put them together in a smart way, right? And as for memory planning, it's more about, you know, hey, if you run like Python code, every time you generate a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize for you, so there is a smart memory allocator behind the scenes. But actually, in a lot of cases, it's much better to statically allocate and plan everything ahead of time. And that's where a compiler can come in. First of all, actually, for language models, it's much harder because of dynamic shapes. So you need to be able to do what we call symbolic shape tracing. So we have like a symbolic variable that tells you the shape of the first tensor is n by 12. And the shape of the third tensor is also n by 12. Or maybe it's n times 2 by 12. Although you don't know what n is, right? But you will be able to know that relation and be able to use that to reason about fusion and other decisions. So besides this, I think loop transformation is quite important. And it's actually non-traditional. Originally, if you simply write code and you want to get performance, it's very hard. For example, you know, if you write a matrix multiply, the simplest thing you can do is: for i, j, k: C[i][j] += A[i][k] * B[k][j]. But that code is 100 times slower than the best available code that you can get. So we do a lot of transformation, like being able to take the original code, trying to put things into shared memory, making use of tensor cores, making use of memory copies, and all this. Actually, all these things, we also realize that, you know, we cannot do all of them. So we also make the ML compilation framework available as a Python package, so that people will be able to continuously improve that part of engineering in a more transparent way. We find that's very useful, actually, for us to be able to get good performance very quickly on some of the new models. Like when Llama 2 came out, we were able to go and look at the whole thing, see here's the bottleneck, and go and optimize those. [00:28:10]
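To make the loop-transformation point concrete, here is a toy Python sketch of the naive triple loop next to a blocked (tiled) variant. A compiler like TVM emits tiled low-level code that uses shared memory and tensor cores rather than Python loops, so this only illustrates the restructuring idea, not the actual generated code:

```python
import numpy as np

def matmul_naive(A, B):
    # The straightforward triple loop: correct, but far from peak speed
    # because it reuses almost nothing from fast memory.
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

def matmul_tiled(A, B, tile=32):
    # Blocked version: work on tile x tile submatrices so each block
    # stays in cache (or GPU shared memory) while it is being reused.
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m), dtype=A.dtype)
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                C[i0:i0 + tile, j0:j0 + tile] += (
                    A[i0:i0 + tile, p0:p0 + tile]
                    @ B[p0:p0 + tile, j0:j0 + tile]
                )
    return C

A = np.random.rand(128, 128).astype("float32")
B = np.random.rand(128, 128).astype("float32")
assert np.allclose(matmul_naive(A, B), matmul_tiled(A, B), atol=1e-3)
```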
Alessio: And then the fourth one being weight quantization. So everybody wants to know about that. And just to give people an idea of the memory saving, if you're doing FP32, it's like four bytes per parameter. Int8 is like one byte per parameter. So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]

Tianqi: Right now, a lot of people mostly use int4 for language models. So that really shrinks things down a lot. And more recently, actually, we started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is we can allow developers to customize the quantization they want, but we still bring the optimum code for them. So we are working on this item called bring your own quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think it's an open field that's being explored. Can you bring more sparsity? Can you quantize activations as much as possible, and so on? And it's going to be something that's going to be relevant for quite a while. [00:29:27]

Swyx: You mentioned something I wanted to double back on, which is most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML type people, or even the researchers who are training the models also using int4? [00:29:40]

Tianqi: Sorry, so I'm mainly talking about inference, not training, right? So when you're doing training, of course, int4 is harder, right? Maybe you could do some form of mixed-type precision for inference. I think int4 is kind of like, in a lot of cases, you will be able to get away with int4. And actually, that does bring a lot of savings in terms of the memory overhead, and so on. [00:30:09]
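The arithmetic behind those precision choices is worth spelling out. A back-of-envelope sketch, counting weights only (the KV cache and runtime overhead add more, which is roughly why a 70B model at 4 bits still wants around 50 GB of RAM in practice, as comes up later in the episode):

```python
def weight_gb(n_params: float, bits_per_param: int) -> float:
    # bits -> bytes -> gigabytes, for the weights alone.
    return n_params * bits_per_param / 8 / 1e9

for label, n in [("7B", 7e9), ("70B", 70e9)]:
    for bits in (32, 8, 4):
        print(f"{label} weights at {bits}-bit: ~{weight_gb(n, bits):.1f} GB")

# 7B:  fp32 ~28 GB,  int8 ~7 GB,  int4 ~3.5 GB
# 70B: fp32 ~280 GB, int8 ~70 GB, int4 ~35 GB
```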
Alessio: Yeah, that's great. Let's talk a bit about maybe GGML, then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model-level re-implementation and improvements. Mojo is a language, a superset of Python. You're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]

Tianqi: So I think in this case, I think it's great to say the ecosystem becomes so rich with so many different ways. So in our case, GGML is more like you're implementing something from scratch in C, right? So that gives you the ability to go and customize for each particular hardware backend. But then you will need to write your CUDA kernels, and you write optimally for AMD, and so on. So the kind of engineering effort is a bit more broadened in that sense. Mojo, I have not looked at the specific details yet. I think it's good to start to say, it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it's good to say, it has an interesting place in there. In the case of MLC, our case is that we do not want to have an opinion on how, where, in which language people want to develop, deploy, and so on. And we also realize that actually there are two phases. We want to be able to develop and optimize your model. By optimization, I mean really bring in the best CUDA kernels and do some of the machine learning engineering in there. And then there's a phase where you want to deploy it as a part of the app. So if you look at the space, you'll find that GGML is more like, I'm going to develop and optimize in the C language, right? And then most of the low-level languages they have. And Mojo is that you want to develop and optimize in Mojo, right? And you deploy in Mojo. In fact, that's the philosophy they want to push for. In the MLC case, we find that actually if you want to develop models, the machine learning community likes Python. Python is the language you should focus on. So in the case of MLC, we really want to be able to enable, not only be able to just define your model in Python, that's very common, right? But also do ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things in Python, in a way that makes it customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're maybe an embedded systems person, maybe you would prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want to have this vision of, you optimize, build a generic optimization in Python, then you deploy that universally onto the environments that people like. [00:32:54]

Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure that we cover is that I think you are one of this emerging set of academics that also very much focus on your artifacts of delivery. Of course. Something we talked about for three years, that he was very focused on his GitHub. And obviously you treated XGBoost like a product, you know? And then now you're publishing an iPhone app. Okay. Yeah. Yeah. What is your thinking about academics getting involved in shipping products? [00:33:24]

Tianqi: I think there are different ways of making impact, right? Definitely, you know, there are academics that are writing papers and building insights for people so that people can build products on top of them. In my case, I think in the particular field I'm working on, machine learning systems, I feel like really we need to be able to get it into the hands of people, so that really we see the problem, right? And we show that we can solve a problem. And it's a different way of making impact. And there are academics that are doing similar things. Like, you know, if you look at some of the people from Berkeley, right? Every few years, they will come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impact. And I feel like really being able to do open source and work with the open source community is really rewarding, because we have a real problem to work on when we build our research. Actually, that research comes together and people will be able to make use of it. And we also start to see interesting research challenges that we wouldn't otherwise see, right, if you're just trying to do a prototype and so on. So I feel like it's something that is one interesting way of making impact, making contributions. [00:34:40]

Swyx: Yeah, you definitely have a lot of impact there. And having experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation, human compilation effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]

Tianqi: So I think that there are a few elements that need to come in, right? First of all, you know, we do need a MacBook, the latest one, like an M2 Max, because you need the memory to be big enough to cover that. So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM.
So the M2 Max, the upper version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same: whether it's running on iPhone, on server cloud GPUs, on AMDs, or on MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customization iteration for either one. And then it runs on the browser runtime, this package called WebLLM. So what we do is we will take that original model and compile to what we call WebGPU. And then WebLLM will be able to pick it up. And WebGPU is this latest GPU technology that major browsers are shipping right now. So you can get it in Chrome already. It allows you to be able to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially, we asked the question about, can you run 70 billion on a MacBook? That was the question we were asking. So first, we actually... Jin Lu, who is the engineer pushing this, he got 70 billion on a MacBook. We had a CLI version. So in MLC, you will be able to... That runs through a Metal accelerator. So effectively, you use the Metal programming language to get the GPU acceleration. So we found, okay, it works for the MacBook. Then we asked, we had a WebGPU backend. Why not try it there? So we just tried it out. And it's really amazing to see everything up and running. And actually, it runs smoothly in that case. So I do think there are some kind of interesting use cases already in this, because everybody has a browser. You don't need to install anything. I think it doesn't make sense yet to really run a 70 billion model in a browser, because you kind of need to be able to download the weights and so on. But I think we're getting there. Effectively, the most powerful models you will be able to run on a consumer device. It's kind of really amazing. And also, in a lot of cases, there might be use cases. For example, if I'm going to build a chatbot that I talk to and it answers questions, maybe some of the components, like the voice-to-text, could run on the client side. And so there are a lot of possibilities of being able to have something hybrid that contains the edge component or something that runs on a server. [00:37:47]

Alessio: Do these browser models have a way for applications to hook into them? So if I'm using, say, you can use OpenAI or you can use the local model. Of course. [00:37:56]

Tianqi: Right now, actually, we are building... So there's an NPM package called WebLLM, right? So that you will be able to, if you want to embed it onto your web app, you will be able to directly depend on WebLLM and you will be able to use it. We are also having a REST API that's OpenAI compatible. So that REST API, I think, right now, it's actually running on a native backend, so that, say, a CUDA server is faster running on the native backend. But also we have a WebGPU version of it that you can go and run. So yeah, we do want to be able to have easier integrations with existing applications. And the OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]
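Since the REST API is OpenAI compatible, hooking an application into a locally served model can look like an ordinary HTTP call. A hypothetical sketch: the host, port, and model id below are placeholders, and the exact server launch command and routes depend on your MLC LLM version:

```python
import requests

# Assumes an MLC LLM server is already running locally and exposing an
# OpenAI-compatible endpoint; host, port, and model id are placeholders.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "Llama-2-7b-chat",  # placeholder model id
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```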
Swyx: I actually did not know there's an NPM package that makes it very, very easy to try out and use. I want to actually... One thing I'm unclear about is the chronology. Because as far as I know, Chrome shipped WebGPU the same time that you shipped WebLLM. Okay, yeah. So did you have some kind of secret chat with Chrome? [00:38:57]

Tianqi: The good news is that Chrome is doing a very good job of trying to have early releases. So although the official shipment of Chrome WebGPU is the same time as WebLLM, actually, you were able to try out WebGPU technology in Chrome before that. There is an unstable version called Canary. I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models. It was running on less interesting, well, still quite interesting models. And then this year, we really started to see it getting matured and performance keeping up. So we have a more serious push of bringing the language-model-compatible runtime onto WebGPU. [00:39:45]

Swyx: I think you agree that the hardest part is the model download. Has there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]

Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it into a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to redownload the model again. So there is already something there. But of course, you have to download the model once at least to be able to use it. [00:40:19]

Swyx: Okay. One more thing just in general before we're about to zoom out to OctoAI. Just the last question is, you're not the only project working on, I guess, local models. That's right. Alternative models. There's gpt4all, there's Ollama that just recently came out, and there's a bunch of these. What would be your advice to them on what's a valuable problem to work on? And what is just thin wrappers around ggml? Like, what are the interesting problems in this space, basically? [00:40:45]

Tianqi: I think making APIs better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to actually having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things. So we're also looking forward to collaborating with all those ecosystems, and working on support to bring in models more universally and be able to also keep up the best performance when possible, in a more push-button way. [00:41:15]

Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service, basically focused on optimizing model runtimes and acceleration and compilation. What has been the evolution there? So Octo started as kind of like a traditional MLOps tool, where people were building their own models and you helped them on that side. And then it seems like now most of the market is shifting to starting from pre-trained generative models. Yeah, what has been that experience for you, and how have you seen the market evolve? And how did you decide to release OctoAI? [00:41:52]

Tianqi: One thing that we found out is that, on one hand, it's really easy to go and get something up and running, right? But if you start to consider that there are so many possible availability and scalability issues, and even integration issues, it becomes kind of interesting and complicated. So we really want to make sure to help people get that part easy, right?
And now a lot of things, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help elevate. And also building on top of technology we built to enable things like portability across hardware. And you will be able to not worry about the specific details, right? Just focus on getting the model out. We'll try to work on the infrastructure and other things that help on the other end. [00:42:45]

Alessio: And when it comes to getting optimization on the runtime... We run an early adopters community, and I see most enterprises' issue is how to actually run these models. Do you see that as one of the big bottlenecks now? I think a few years ago it was like, well, we don't have a lot of machine learning talent. We cannot develop our own models. Versus now it's like, there's these great models you can use, but I don't know how to run them efficiently. [00:43:12]

Tianqi: That depends on how you define "running", right? On one hand, it's easy to download, like with MLC: you download it, you run it on a laptop. But then there are also different decisions, right? What if you are trying to serve a larger user request? What if that request changes? What if the availability of hardware changes? Right now it's really hard to get the latest NVIDIA hardware, unfortunately, because everybody's trying to work on things using the hardware that's out there. So I think when the definition of "run" changes, there are a lot more questions around things. And also in a lot of cases, it's not only about running models, it's also about being able to solve problems around them. How do you manage your model locations, and how do you make sure that you get your models close to your execution environment more efficiently? So definitely a lot of engineering challenges out there. That we hope to elevate, yeah. And also, if you think about our future, definitely I feel like right now, given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include a mechanism for cutting down costs, bringing something to the edge and cloud in a more natural way. So I feel like this is still a very early stage of where we are, but it's already good to see a lot of interesting progress. [00:44:35]

Alessio: Yeah, that's awesome. I would love, I don't know how much we're going to go in depth into it, but what does it take to actually abstract all of this from the end user? You know, like they don't need to know what GPUs you run, what cloud you're running them on. You take all of that away. What was that like as an engineering challenge? [00:44:51]

Tianqi: So I think there are engineering challenges there. In fact, first of all, you will need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, very surprisingly, not too surprisingly, most of the latest libraries work well on the latest GPUs. But there are other GPUs out there in the cloud as well. So certainly, being able to have the know-how and being able to do model optimization is one thing, right? Also, infrastructure on being able to scale things up, locate models. And in a lot of cases, we do find that on typical models, it also requires kind of vertical iterations. So it's not about, you know, building a silver bullet and that silver bullet is going to solve all the problems.
It's more about, you know, we're building a product, we work with the users, and we find out there are interesting opportunities at a certain point. And then our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]

Swyx: Awesome. [00:45:46]

Alessio: We can jump into the lightning round until, I don't know, Sean, if you have more questions or TQ, if you have more stuff you wanted to talk about that we didn't get a chance to [00:45:54]

Swyx: touch on. [00:45:54]

Alessio: Yeah, we have talked a lot. [00:45:55]

Swyx: So, yeah. We always would like to ask, you know, do you have a commentary on other parts of AI and ML that is interesting to you? [00:46:03]

Tianqi: So right now, I think one thing that we are really pushing hard for is this question about how far can we bring open source, right? I'm kind of like a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that, you know, you just have a few big players, you just try to talk to those bigger language models and that can do everything, right? On the other hand, one of the things that we in academia are really excited about and pushing for, and that's one reason why I'm pushing for MLC, is: can we build something where you have different models? You have personal models that know the best movie you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And be able to have a wide ecosystem of AI agents that helps each person while still being able to do things like personalization. Some of them can run locally, some of them, of course, running on a cloud, and how do they interact with each other? So I think that is a very exciting time where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]

Swyx: One more thing, which is something I'm also pursuing, which is, and this kind of goes back into predictions, but also back in your history, do you have any idea, or are you looking out for anything post-transformers as far as architecture is concerned? [00:47:32]

Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are state-space models, like, you know, from one of our colleagues, Albert Gu, who worked on the HiPPO models, right? And then there is an open source version called RWKV. It's like a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see one of the models. [00:48:00]

Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing. It's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]

Tianqi: I do love to see this model space continue growing, right? And I feel like in a lot of cases, it's just that the attention mechanism is getting changed in some sense. So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we are trying to push on is productivity for machine learning engineering, so that as soon as some of the models come out, we will be able to, you know, empower them onto those environments that are out there. [00:48:43]
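For readers new to the recurrent alternatives mentioned here, a toy sketch of the core idea (this is not RWKV's or any state-space model's actual math; the matrices and the tanh are made up for illustration): a fixed-size state summarizes the whole prefix, so each step costs O(1) instead of attending over every previous token:

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in = 16, 8
A = rng.normal(scale=0.1, size=(d_state, d_state))  # state transition (illustrative)
B = rng.normal(scale=0.1, size=(d_state, d_in))     # input projection (illustrative)

state = np.zeros(d_state)
for token_embedding in rng.normal(size=(100, d_in)):  # a 100-token sequence
    state = np.tanh(A @ state + B @ token_embedding)

print(state.shape)  # (16,): a constant-size summary, regardless of sequence length
```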
Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about that stuff. Okay. Lightning round, as always, fun. I'll take the first one. Acceleration. What has already happened in AI that you thought would take much longer? [00:48:59]

Tianqi: The emergence of conversational chatbot ability is something that kind of surprised me before it came out. This is like one piece that I feel originally I thought would take much longer, but yeah, [00:49:11]

Swyx: it happens. And it's funny because like the original, like, Eliza chatbot was something that goes all the way back in time. Right. And then it just suddenly came back again. Yeah. [00:49:21]

Tianqi: It's always interesting to think about, but with a kind of different technology [00:49:25]

Swyx: in some sense. [00:49:25]

Alessio: What about the most interesting unsolved question in AI? [00:49:31]

Swyx: That's a hard one, right? [00:49:32]

Tianqi: So I can tell you what kind of things I'm excited about. So I think that I have always been excited about this idea of continuous learning and lifelong learning in some sense. So how AI continues to evolve with the knowledge that has been there. It seems that we're getting much closer with all those recent technologies. So being able to develop systems, support, and be able to think about how AI continues to evolve is something that I'm really excited about. [00:50:01]

Swyx: So specifically, just to double click on this, are you talking about continuous training? That's like a training. [00:50:06]

Tianqi: I feel like, you know, training, adaptation, it's all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context that's getting continuously curated and fed onto models. So I think all these things are interesting and relevant here. [00:50:29]

Swyx: Yeah. I think this is something that people are really asking, you know, right now we have moved a lot into the sort of pre-training phase and off-the-shelf, you know, the model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be for takeaways. What's basically one message that you want every listener, every person to remember today? [00:50:54]

Tianqi: I think it's getting more obvious now, but I think one of the things that I always want to mention in my talks is that, you know, when you're thinking about AI applications, originally people think about algorithms a lot more, right? Algorithms and models, they are still very important. But usually when you build AI applications, it takes, you know, both the algorithm side, the system optimizations, and the data curation, right? So it takes a connection of so many facets to be able to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]

Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]

Tianqi: Thank you for having me. [00:51:47]

Swyx: Yeah. [00:51:47]

Alessio: Thanks for coming on, TQ. [00:51:49]

Swyx: Have a good one. [00:51:49]

Get full access to Latent Space at www.latent.space/subscribe
Le Mug takes its summer break starting July 1; the last Mug airs June 30 at 8 a.m. The Mug will return in early September. Discover pCloud
This is a recap of the top 10 posts on Hacker News on May 7th, 2023.
(00:31): Walkout at global science journal over 'unethical' fees
Original post: https://news.ycombinator.com/item?id=35848894
(02:14): I'm never investing in Google's smart home ecosystem again
Original post: https://news.ycombinator.com/item?id=35849060
(03:36): Pixel phones are sold with bootloader unlocking disabled
Original post: https://news.ycombinator.com/item?id=35852192
(04:58): The Prime Video microservices to monolith story
Original post: https://news.ycombinator.com/item?id=35853148
(06:16): EU sends Apple stark warning over USB-C charging on new iPhones
Original post: https://news.ycombinator.com/item?id=35849043
(07:45): Burnout
Original post: https://news.ycombinator.com/item?id=35849384
(09:04): Contrast Rebellion
Original post: https://news.ycombinator.com/item?id=35850044
(10:25): Five Books: The best books on everything
Original post: https://news.ycombinator.com/item?id=35853131
(11:52): Passkeys: A Loss of User Control?
Original post: https://news.ycombinator.com/item?id=35854216
(12:58): AMD promises its new laptop chips will crush the Apple M2 and it's got receipts
Original post: https://news.ycombinator.com/item?id=35851174
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
AI cooking, the subtitling app allrite.io, Jani Jancsa at the pizza world championship, the nice weather is here, Zoli's solar panel system is finished, iPhone screen mirroring to the Mac, iPad mini, Apple M2, Zelda, 90 days Discovery
A classic gadget gets a Linux-powered new lease on life, the next project getting Rusty, great news for Btrfs users, and more.
We discuss Nvidia Lovelace, AMD 7900 XT, Intel Sierra Forest, Granite Rapids, and more! [SPON: dieshrink = 3% off Everything, brokensilicon = 25% off Windows: https://biitt.ly/shbSk ] [SPON: Get 10% off Vite Ramen AND a FREE Pack w/ “MOORESLAW”: https://bit.ly/3wKx6v1 ] 0:00 Sundance Recommendations, Possessor on Christmas (Intro Banter) 4:52 PS5 Liquid Metal, AMD in ASUS G14, Phoenix Design Wins (Corrections) 13:49 AMD Lowers RX 6950 XT to $699 – Is the 4070 Ti already Pointless? 24:40 AMD Driver Support for RX 6000 Series (or Older) GPUs 29:06 New Pictures of Nvidia Titan add Credibility, and Intrigue, to MLID Leak! 40:49 Will GDDR7 make Graphics Cards notably more expensive? 45:34 AMD A620 Motherboards with Prom 22 - Cheap AM5 is Still Coming! 53:13 Intel Reports Disastrous Q4 2022 Earnings…and Warns of a WORSE Q1 58:49 Does Intel's 10nm cost more than TSMC 7nm? Is Intel doomed? 1:10:00 NEW Intel Sierra Forest & Granite Rapids Details leaked by MLID 1:18:02 Could Intel Catch up to TSMC in 2 Years? 1:24:28 Is AMD too Risk Averse for its own good? 1:29:10 Apple Launches M2 Pro and M2 Max 1:37:10 Upgrading B650 to X670, Microsoft Layoffs, Recession (Wrap-Up) 1:45:25 Should Nvidia make a 32-bit Lovelace MX650? Is MX obsolete? https://rog.asus.com/laptops/rog-zephyrus/rog-zephyrus-g14-2023-series/ https://www.amd.com/en/direct-buy/us https://youtu.be/aH70c4S-XPk https://community.amd.com/t5/gaming/never-a-better-time-to-upgrade-with-radeon-graphics/ba-p/580010 https://twitter.com/Zed__Wang/status/1619350027801100289/photo/2 https://youtu.be/wwZr4kp2gnc https://www.techpowerup.com/304210/amd-allegedly-prepares-an-even-cheaper-a620-chipset-set-to-deliver-usd-125-motherboards https://youtu.be/JnLbqB1FBu8 https://www.intc.com/news-events/press-releases/detail/1600/intel-reports-fourth-quarter-and-full-year-2022-financial https://d1io3yog0oux5.cloudfront.net/_2cead9b6413a1a91de449423742eea20/intel/db/887/8894/earnings_presentation/Q4%272022+Earnings+Deck_Final+PDF.pdf https://youtu.be/aH70c4S-XPk?t=728 https://www.notebookcheck.net/Intel-discontinues-network-switch-business-along-with-Pathfinder-RISC-V-program-following-grim-Q4-2022-earnings.686929.0.html https://youtu.be/h20inMLeDnE https://youtu.be/7thf6CRILQk https://twitter.com/chiakokhua/status/1413357432554754049?lang=en https://www.apple.com/newsroom/2023/01/apple-unveils-m2-pro-and-m2-max-next-generation-chips-for-next-level-workflows/ https://www.reuters.com/technology/apple-launches-new-macbooks-mac-mini-rare-january-launch-2023-01-17/ https://www.theverge.com/23559676/apple-macbook-pro-16-inch-2023-m2-max-review https://www.notebookcheck.net/Apple-MacBook-Pro-14-2023-review-The-M2-Pro-is-slowed-down-in-the-small-MacBook-Pro.687345.0.html https://www.dualshockers.com/bethesda-343-employees-microsoft-layoffs/ https://videocardz.com/newz/asrock-expansion-card-could-turn-amd-b650-motherboard-into-x670 https://store.nvidia.com/en-us/nvidia-rtx/store/?page=1&limit=9&locale=en-us https://wccftech.com/lenovo-vp-confirms-instinct-mi400-hpc-apu-accelerator-as-part-of-amd-instinct-roadmap/ https://www.techpowerup.com/304124/forspoken-simply-doesnt-work-with-amd-radeon-rx-400-and-rx-500-polaris-gpus https://www.reddit.com/r/xboxone/comments/ktxhdh/since_it_was_founded_343_industries_has_grown/
0:00 awkwarddd 0:13 RTX 4090 Ti leaks 1:31 Apple M2 leads PassMark 2:20 PS5 supply increase 3:11 Roland Bridge Cast 3:50 QUICK BITS 3:59 Intel Arc driver boost 4:33 Ford SUV recall 5:05 Foldable iPad rumors 5:36 Corsair PCIe 5.0 SSD 6:13 Amazon Fresh price increase News Sources: https://lmg.gg/VbiKb
Tune into episode 306 of the Mobile Tech Podcast with guest Rich Woods of XDA Developers -- brought to you by Mint Mobile. In this week's episode, we review Nubia's gorgeous RedMagic 8 Pro gaming phone, plus Apple's M2 MacBook Pro and Mac mini. We then discuss LG's new Gram laptops, plus the latest OnePlus leaks and rumors. Finally, we cover news from Razer and Moto. Enjoy :)
Episode Links
- Support the podcast on Patreon: https://www.patreon.com/tnkgrl
- Donate: https://tnkgrl.com/tnkgrl/
- Support the podcast with Mint Mobile: https://mintmobile.com/mobiletech
- Rich Woods: https://twitter.com/therichwoods
- Razer Edge 5G now available: https://www.xda-developers.com/razer-edge-5g-now-available/
- RedMagic 8 Pro review: https://www.xda-developers.com/redmagic-8-pro-review/
- Apple M2 MacBook Pro and Mac mini reviews: https://www.xda-developers.com/macbook-pro-2023-launch/
- Apple SSD gate: https://www.xda-developers.com/macbook-pro-2023-slower-512gb-ssd/
- Rich's LG Gram hands-on: https://www.xda-developers.com/hands-on-lg-gram-style/
- OnePlus 11 and Buds Pro 2 are here: https://www.instagram.com/p/Cn2YrFAvXgs/
- OnePlus 11R rumors: https://www.xda-developers.com/oneplus-11r-launch-confirmed/
- OnePlus mechanical keyboard teaser: https://www.xda-developers.com/oneplus-keyboard-february-7-launch/
- OnePlus Tab leaks: https://www.xda-developers.com/oneplus-pad-tablet-renders/
- New Moto G phones abroad: https://www.xda-developers.com/motorola-moto-g13-g23-g53-g73-launch/
Today on the flagship podcast of staring directly down the barrel of a camera, The Verge's Nilay Patel, Alex Cranz, Richard Lawler, and Monica Chin start the show with an inside look at our M2 MacBook Pro and Mac mini reviews. After that, the crew breaks down the case the US Department of Justice has filed against Google's ad business, and of course we try to make sense of the latest Elon Musk shenanigans.
Further reading:
The Vergecast - YouTube
Apple Mac Mini (2023) review: Mac Studio junior
Apple MacBook Pro 16 (2023) review: the core count grows
Google is being sued by the US government and eight states over online advertising
Google plans to demo AI chatbot search as it panics about ChatGPT
More details come out on which departments saw layoffs at Google, Microsoft, and Amazon
Tesla made more money in 2022 than ever before, but its future still looks rocky
Elon Musk is theoretically sad that Tesla investors lost money because of his tweets
Elon Musk thinks Twitter is real life
Elon Musk's Twitter is caving to government censorship, just like he promised
Elon Musk gets serious about 420 at securities fraud trial - The Verge
Tesla's new $3.6 billion Nevada investment includes a 'high-volume' Semi factory
Tesla Cybertruck mass production won't start until 2024
Microsoft Q2 2023: Windows, devices, and Xbox down as cloud holds strong
Senators and Swifties take on Ticketmaster in Washington
GoldenEye 007 is coming to Nintendo Switch and Xbox on January 27th
TikTok confirms that its own employees can decide what goes viral
Learn more about your ad choices. Visit podcastchoices.com/adchoices
This is episode 305 of the Mobile Tech Podcast with guests Cliff Lin and Finbarr Moynihan (MediaTek), plus Mark Linsangan (HeyMarkL) -- brought to you by MediaTek. This episode comes in two parts. First, we explore how MediaTek is implementing 3GPP NTN satellite smartphone connectivity. Second (18:46), we discuss Apple's new M2 MacBook Pros and Mac minis, Sony's latest Walkmans, Samsung's 200MP Isocell HP2, and Google's Pixel Fold. Finally, we cover a few leaks and rumors... Phew!
Episode Links
- Support the podcast on Patreon: https://www.patreon.com/tnkgrl
- Donate: https://tnkgrl.com/tnkgrl/
- MediaTek: http://www.poweredbymediatek.com/ (sponsor)
- MediaTek satellite smartphone connectivity: https://www.mediatek.com/blog/how-mediatek-is-driving-the-key-tech-trends-from-ces-2023
- Cliff Lin: https://www.linkedin.com/in/cliff-lin-52269321/
- Finbarr Moynihan: https://www.linkedin.com/in/finbarr-moynihan-b653b51/
- Mark Linsangan: https://twitter.com/HeyMarkL
- Apple M2 MacBook Pros and Mac minis: https://www.engadget.com/apple-updates-its-mac-book-pros-with-m-2-pro-and-max-chips-143848558.html
- Sony Walkmans: https://www.inverse.com/gear/sony-walkman-nw-zx707-nw-a306-music-player
- Samsung 200 MP Isocell HP2: https://www.gsmarena.com/samsung_introduces_its_best_200_mp_sensor_isocell_hp2_galaxy_s23_ultra-news-57214.php
- Samsung S23 Ultra specs: https://www.gsmarena.com/samsung_galaxy_s23_ultra_entire_specs_sheet_leaks_in_full-news-57236.php
- Samsung Galaxy S23 and S23+ specs: https://www.gsmarena.com/samsung_galaxy_s23_galaxy_s23_plus_leaked_full_specs_images-news-57232.php
- Google Pixel Fold dummy: https://www.youtube.com/watch?v=7SIlor1Zi9w
- Google Pixel 8 rumors: https://www.androidpolice.com/google-pixel-8/
We review the new Apple M2 Mac mini and MacBook Pros. We discuss the new chips and our surprise at the cost. Also, the Apple HomePod has returned. Is it worth $299? We are former Apple retail creatives and geniuses with a combined 20 years of experience working at Apple. This channel dives into our skills and training in macOS, iOS, and iPadOS, and also provides news and discussion on our podcast.
This time we report, speculate, paraphrase, and pontificate on another week's worth of product launches, press releases, and insecurity news. Also, a burger. Apple M2's many things, Intel does 13900KS and Gen 4 scalable Xeons, CAMM wins finally, 30TB of Micron and Monoprice small speakers. See the timestamps below for all the rest!
Timestamps:
00:00 Intro
01:11 Burger of the Week
02:50 Intel launches Core i9-13900KS
04:27 More Intel: the 4th Gen Xeon Scalable processors
09:52 Apple announces M2 chips and things that use them
18:58 CAMM might be the new laptop memory standard
23:18 Put your Radeon RX 7900 XTX under water
25:04 G.Skill Flare X5 6000 CL32 Ryzen memory
29:53 Podcast sponsor - Rocket Money
31:08 Razer NAGA v2 mouse
33:14 Micron 9400 Pro enterprise SSDs
36:23 Security Corner
44:32 Gaming Quick Hits
47:00 Monoprice DT-3BT desktop speaker review
54:18 Picks of the Week
1:02:33 Outro ★ Support this podcast on Patreon ★
Go to http://factor75.com/lewlater60 and use code lewlater60 to get 60% off your first box. --- Subscribe for more internet + tech news. Email questions to will [at] lewlater dot com FOLLOW ME IN THESE PLACES https://twitter.com/lewlater https://instagram.com/lewlater https://facebook.com/lewlater
All by himself, Maurizio tackles the Apple news announced on February 17 and 18: M2 at full blast!
In the latest In Touch With iOS, Dave is joined by guests Patrice Brend'amour and Jeff Gamet for our 2nd annual year in review. We review what Apple announced in its 3 events and press release in 2022, and what is in store for 2023, including the AR headset, the weak sales of the iPhone 14 Plus and what Apple will do about it, and more. Dave received an awesome gift: a Grid Studio teardown of an iPhone 4S, nicely framed. The Home app: Apple has it on its internal list of major issues. The show notes are at InTouchwithiOS.com
Direct Link to Audio
Links to our Show
Click this link Buy me a Coffee to support the show, we would really appreciate it. intouchwithios.com/coffee
Another way to support the show is to become a Patreon member: patreon.com/intouchwithios
Website: In Touch With iOS
YouTube Channel
In Touch with iOS Magazine on Flipboard
Facebook Page
Twitter Instagram
News
iPhone 15 Ultra Won't Be Exclusively Assembled by Foxconn
iPhone 14 Pro Dynamic Island requires special manufacturing process from Samsung
Apple Adds iOS 16.2's Home App Upgrade to Internal List of Major Issues
Topics
Betas this week: iOS 16.3 beta 1 continues. Patrice gives her review so far. Tip: turn off the Home hub on Apple TV due to Thread.
Apple year in review 2022. Apple held only 3 events plus a press release in 2022. We recap the announcements (reviewed below) and what stood out to us this year.
The March 8th event was entitled Peek Performance. Apple announced the iPhone SE (3rd generation) with the Apple A15 Bionic chip and Ceramic Shield at a base price of $429, and the iPad Air (5th generation) with the Apple M1 chip and Center Stage, plus the Studio Display and Mac Studio.
WWDC 2022 ran June 6-10 and, as always, new OS versions were announced: iOS 16 and iPadOS 16, along with watchOS 9 and updates to CarPlay, HomeKit, and parental controls, plus MacBook announcements.
The third event, on September 7th, was entitled Far Out. It introduced the Apple Watch Series 8, 2nd-generation Apple Watch SE, and Apple Watch Ultra. Other announcements included the 2nd-generation AirPods Pro and the iPhone 14, 14 Plus, 14 Pro, and 14 Pro Max. New Apple processors included the Apple H2 and A16.
A press release on October 18 announced the 6th-generation iPad Pro with the Apple M2 chip, the 10th-generation iPad with the Apple A14 Bionic chip, the Magic Keyboard Folio for iPad, and the 3rd-generation Apple TV 4K with the Apple A15 Bionic chip. iPadOS 16 was released on October 24, with the new iPad models appearing in stores on October 26. The new Apple TV 4K arrived on November 4.
In memoriam: 5 once-great products Apple killed in 2022. RIP iPod touch.
So what do we expect in 2023? The panel discusses some possibilities of what Apple will be up to next and makes some predictions, including possible big changes to CarPlay and the HomePod, a reevaluation of the iPhone 14 Plus and future iPhone 15 models, and wearables, plus more surprise predictions.
Dave received a great Christmas present: https://gridstudio.cc/collections/iphone A beautifully framed teardown of an iPhone 4S. Really a cool thing to have hanging in your office or elsewhere.
Our Host Dave Ginsburg is an IT professional supporting Mac, iOS and Windows users who shares his wealth of knowledge of iPhone, iPad, Apple Watch, Apple TV and related technologies.
Visit the YouTube channel https://youtube.com/intouchwithios and follow him on Mastodon @daveg65, on Twitter @daveg65, and the show @intouchwithios. Our Regular Contributor Jeff Gamet is a podcaster, technology blogger, artist, and author. Previously, he was The Mac Observer's managing editor and Smile's TextExpander Evangelist. You can find him on Mastodon @jgamet, as well as on Twitter and Instagram as @jgamet. His YouTube channel: https://youtube.com/jgamet About our Guest Patrice Brend'amour loves to create podcasts, automations, and software. She also enjoys working with diverse sets of people, leading them to success and making a tiny difference in the world, which she does as VP of Development at a healthcare software provider. She can be found at https://the-patrice.com and her podcast Foodie Flashback at https://foodieflashback.com
How is Apple's new technology transforming the company's products? Get up to speed NOW! In StartSe's new podcast, you catch up on the latest happenings in technology around the world. Every Tuesday at 5 p.m., StartSe CTO Gustavo Bodra and Marcelo Castro talk about the most innovative things in the tech world and help you understand their impact on the world and the market. Subscribe so you don't miss an episode! Find out more about innovation, technology and the new economy at www.startse.com.
In shocking news, Steven has announced that he has no interest in buying the new Apple M2 laptops. After reading some less than glowing reviews of Apple's latest & greatest computers, it seems that the performance gains over the last-gen M1-powered laptops are nothing to get excited about. In fact, there are reports that the SSD performance is worse in the M2 line-up. However, the real question is: do Marc & Shaun actually believe that Steven won't be buying a new laptop?
Finally, we discuss the new ARX Vision wearable headset. With a design similar to an AfterShokz headset but also featuring a camera on one side, could this finally be the perfect accessory for Aira? Steven has been taking a look at what it currently offers.
Michał Gapiński joins us today to talk about his Tesla Android project, which makes it possible to use CarPlay in Tesla cars. At the end, we also discuss the problems with the new MacBook Pros with the Apple M2. Episode sponsor: Farnell – An Avnet Company … Read more → The post 368: The creator of CarPlay for Tesla – Michał Gapiński – talks about his project, and the MacBook Pro with M2 has problems first appeared on Retro Rocket Network.
This week: Apple's prepping to drop an "ambitious deluge" of new hardware, including the return of one of the greatest Apple products of all time...
This episode supported by
Easily create a beautiful website all by yourself, at Squarespace.com/cultcast. Use offer code CultCast at checkout to get 10% off your first purchase of a website or domain.
Cult of Mac's watch store is full of beautiful straps that cost way less than Apple's. See the full curated collection at Store.Cultofmac.com
CultCloth will keep your Mac Studio, Studio Display, iPhone 13, glasses and lenses sparkling clean, and for a limited time use code CULTCAST at checkout to score a free CarryCloth with any order at CultCloth.co.
This week's stories
Apple Readies iPhone 14 and HomePod Upgrade in Flood of New Products
Gurman: From what I've been told, the company is about to embark on one of the most ambitious periods of new products in its history—with the deluge coming between the fall of 2022 and first half of 2023.
Don't expect a speed boost in Apple Watch Series 8
The Apple Watch Series 8 expected this fall won't have better performance than its recent predecessors, according to a reliable tipster.
Win a 4-in-1 MagSafe charging stand with built-in nightlight [Cult of Mac giveaways]
This week's giveaway is a sturdy, MagSafe-compatible, multi-device wireless charger that includes a convenient nightlight.
Apple's next HomePod could be less mini and more powerful
Apple is working on a new HomePod that could launch next year with a new S8 chip. The smart speaker reportedly will be similar in size to the original HomePod rather than the HomePod mini.
Entry-level M2 MacBook Pro's SSD is slower than M1 MacBook Pro
Apple's new M2 MacBook Pro ships with a notably slower SSD than its predecessor. Tests show the speed difference is as big as 50% in some scenarios.
M1 MacBook Pro beats the newer M2 model in real-life stress tests
The slower SSD on the entry-level M2 MacBook Pro has a noticeable impact on its performance in daily use. Tests show the M1 MacBook Pro beating its successor in stress tests that rely heavily on swap memory usage.
'Severe' thermal throttling in Apple M2 chip raises questions about MacBook Air performance
The just-released MacBook Pro running Apple's new M2 processor offers generally good performance, but its single fan can't stop it from experiencing 'severe' thermal throttling under certain conditions, according to tests done on the device.
AirPods Pro 2 could act as heart monitor and hearing aid
AirPods Pro 2 will look much like AirPods 3 from 2021, but with the soft tips necessary for ANC, according to information leaked to 52audio. That means the stems aren't going away, and the buttons will stay, too.
Commercials are coming to Netflix (and other streamers!)
It's official: Netflix is adding a cheaper ad-supported tier to its streaming service. The change is apparently intended to reverse a recent drop in subscribers.
Fedora gets serious about its server editions, our thoughts on Valve's increased Steam Deck production, and the surprising results of booting Linux on the Apple M2 SoC.
0:00 heeeey 0:09 M2 SSD half as fast as M1's 1:28 Please Don't Mod Steam Deck(?) 2:23 Cyberpunk 2077 whistleblower 3:14 Secret Lab 3:46 QUICK BITS 3:52 Ryzen 7000X3D 4:36 Valorant monitoring voice chats 5:07 Juul ban put on hold 5:36 Google Hangouts shutting down in Nov. 6:00 OpenAI Minecraft AI News Sources: https://lmg.gg/tSobE
This week: a Valorant patch, Capcom Fighting Collection, GeForce Now 2.0.41, confirmation that there is Michael Jackson in Sonic 3, Telegram Premium, The Notorious B.I.G. - Life After Death: a special edition for the album's 25th anniversary, and Apple M2: the reviews. Better yet, read Torréfaction #223: Valorant patch, Capcom Fighting Collection, copy/paste in GeForce Now and the first reviews of Apple's M2, with its proper layout on Geekzone. Think of your retinas.
This week: The first reviews of the 13-inch MacBook Pro with M2 are in and they are not kind! Plus: new details allude to a new MacBook that may be a hybrid between a MacBook Air and MacBook Pro, iOS 16 is adding a MUST-HAVE SMS feature, and an odd Apple show becomes one of the most popular shows streamed on any streaming service.
This episode supported by
CleanMyMac X is a decluttering app for Mac that can keep it in tip-top shape. It includes 49 tools to find and delete invisible computer junk, and helps keep your Mac running at maximum speed. Get CleanMyMac X today with 5% off at https://macpaw.app/cultcast ... The discount is valid until June 30.
Easily create a beautiful website all by yourself, at Squarespace.com/cultcast. Use offer code CultCast at checkout to get 10% off your first purchase of a website or domain.
Cult of Mac's watch store is full of beautiful straps that cost way less than Apple's. See the full curated collection at Store.Cultofmac.com
CultCloth will keep your Mac Studio, Studio Display, iPhone 13, glasses and lenses sparkling clean, and for a limited time use code CULTCAST at checkout to score a free CarryCloth with any order at CultCloth.co.
This week's stories
M2 MacBook Pro review roundup: Powerful chip, archaic design
The first reviews of the 13-inch MacBook Pro with an Apple M2 processor are not kind. It gets called a "Pro in name only" and "literally a processor update and nothing more."
15-inch MacBook with M2 Pro might launch in spring 2023
A 15-inch MacBook will debut in less than a year that offers either an M2 or an M2 Pro processor, according to a reliable Apple analyst. That could make it one of the first with the upgraded version of the M2.
Apple M2 is fast, but not faster than M1 Pro or M1 Max
The first real-world benchmark tests of the Apple M2 processor show that the just-launched processor is about 20% to 25% faster than the original M1, which is welcome news for anyone in the market for a MacBook or iPad in the next few years.
Win a 65W GaN mini charger that packs a punch [Cult of Mac giveaway]
The Cult of Mac giveaway prize this week is a perfect fit for anyone who needs a powerful yet compact charger. We teamed up with Lululook to give five lucky winners a chance to get their hands on a three-port, 65W gallium nitride charger.
Apple's New CarPlay Is the Foreshock to Releasing Its Own Vehicle
iOS 16 beta 2 improves SMS filtering on Messages for iPhone users
Apple had already teased in a WWDC 2022 session that iOS 16 would add 12 new subcategories to the SMS filtering API. With beta 2, they're now available to developers to use.
Apple TV+ sci-fi drama For All Mankind rockets to popularity
For All Mankind is one of the most popular shows available on streaming, according to a ratings tracker. Season three of the alternate-history sci-fi series premiered on Apple TV+ in early June, and it's been a Top 10 show both weeks since then.
This week: The M2 benchmarks have leaked and this new chip is even better than we hoped. Plus: new details on the iPhone 14, 12- and 15-inch MacBooks, and a new 14-inch iPad Pro MAX.
This episode supported by
Remotely manage your Mac, iPhone, or iPad with Jamf. Manage 3 devices for FREE at jamf.com/beyond
Easily create a beautiful website all by yourself, at Squarespace.com/cultcast. Use offer code CultCast at checkout to get 10% off your first purchase of a website or domain.
Cult of Mac's watch store is full of beautiful straps that cost way less than Apple's. See the full curated collection at Store.Cultofmac.com
CultCloth will keep your Mac Studio, Studio Display, iPhone 13, glasses and lenses sparkling clean, and for a limited time use code CULTCAST at checkout to score a free CarryCloth with any order at CultCloth.co.
This week's stories
First MacBook Pro with speedy M2 chip goes up for preorder June 17 - Lewis, and then Erfon takes the benchmarks?!?
Preorders for the updated 13-inch MacBook Pro packing the powerful new Apple M2 processor will begin Friday, June 17, two weeks after it was announced at Apple's Worldwide Developers Conference.
iPhone 14's front camera could get some much-needed upgrades
The iPhone's front camera was last upgraded in September 2019 with the iPhone 11's release. Apple bumped the camera resolution to 12MP and paired it with an f/2.2 lens. Since then, Android smartphones have added 40MP selfie cameras with autofocus, but iPhones have stuck with the same setup.
Win a multi-device charger with built-in MagSafe battery pack [Cult of Mac giveaway]
Worth $130, the MagEZ Slider from Pitaka is a great way to charge your iPhone and AirPods on your desk or nightstand. But better yet, the charger has an integrated battery pack that attaches magnetically to your iPhone to continue charging away from home.
Apple pushes ahead with 15-inch M2 MacBook Air and 12-inch M2 MacBook
Apple is developing new form factors and planning upgrades for its MacBook lineup, Bloomberg reported Thursday. That should result in a 15-inch M2 MacBook Air and a new version of a 12-inch M2 MacBook arriving by late 2023 or early 2024.
Apple may launch 14.1-inch mini-LED iPad Pro with M2 chip
Apple is working on an iPad Pro with a 14.1-inch mini-LED display, ProMotion and Face ID, sources suggested on Thursday. Unfortunately for folks who are already salivating at the idea of such a big Apple tablet, it probably won't arrive until early 2023.
Apple TV will stream all Major League Soccer games for 10 years
Cupertino unveiled a major new sports deal Tuesday, saying Apple TV will be the exclusive streaming destination for all Major League Soccer (MLS) games for 10 years, starting with the 2023 season.
On MacBreak Weekly, the panel discovers when the new 13" MacBook Pro M2 will be available to pre-order and which MacBook may be best for you. Full episode at twit.tv/mbw822 Hosts: Alex Lindsay, Leo Laporte, Rene Ritchie, and Andy Ihnatko You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/
This episode was brought to you by Farnell, your global distributor of electronic components, products and solutions. https://uk.farnell.com
In this episode we're talking about the latest evolution of Apple Silicon: M2. Are we excited by the release announcement... or disappointed? And what do we think of the new M2 MacBook Air and M2 MacBook Pro 13"?
CHAPTERS
==========
0:00:00 Intro and banter
0:09:05 News: 10 years of Retina display
0:12:04 News: why Stage Manager is M1 iPad only
0:15:13 More banter
0:17:59 Farnell and IoT
0:27:36 Apple M2: excitement or disappointment?
1:12:24 The Rumour Mill
1:14:09 Conclusion
#Farnell #apple #podcast
Apple M2 specs, the Ryzen 7000 layout (and a delidding), Intel Arc tested, and our review of the Phanteks G360A. Free games from Amazon, satellite hacking, and fun observations from the Steam hardware survey.
Recorded the night before AMD announced all the things. Oh well, there's always next week. Anyway, we made it to 680. What's next?! Oh yeah. 681. But not before we talked about the following things for an HOUR:
Timestamps:
00:00 Intro
00:39 Burger of the Week
03:01 Apple M2
05:25 Ryzen 7000 delidded?
12:22 Intel Arc A730M tested
16:36 Samsung Odyssey Neo monitors
19:59 MS wants SSD
21:51 New Arm ISP
23:59 May Steam Survey
27:17 Prime gaming
29:07 Euro Sat Hack
30:47 Express VPN does the right thing?
31:33 Phanteks Eclipse G360A case review
44:28 Patriot Viper VP4300 PS5 SSD upgrade story
49:40 Picks of the Week
1:01:54 Outro
★ Support this podcast on Patreon ★
00:00:00 — Intro
00:02:20 — WWDC 2022
00:04:15 — iOS 16
00:32:49 — watchOS 9
00:39:24 — Apple M2
00:45:29 — MacBook Air
00:58:41 — MacBook Pro 13”
01:01:50 — macOS Ventura: Continuity remembered
01:07:31 — iPadOS
01:13:57 — USB-C in Europe starting fall 2024
01:15:16 — Elon Musk is NOT buying Twitter
01:16:39 — Sony plans to release half of its games on PC and smartphones by 2025
01:09:03 — State of Play
01:31:59 — DALL-E 2 extends famous paintings
01:35:49 — Teenage Engineering and Pixel?
01:39:39 — HBO cancels Ridley Scott's Raised By Wolves
01:39:03 — Resident Evil on Netflix
01:39:43 — Tim Burton's Wednesday on Netflix
01:43:45 — Finale
Apple Podcasts: https://bit.ly/droidercast
Google Podcasts: https://bit.ly/google-droidercast
Yandex Music: https://music.yandex.ru/album/9048349
Podster: https://droidercast.podster.fm/
Jonathan Horst from Mac Address joins Riley to talk about their favorite parts of Apple's WWDC 2022, including the Apple Silicon M2 chip arriving first in a redesigned MacBook Air and 13" MacBook Pro. NEWS SOURCES: WWDC22 keynote https://youtu.be/q5D55G7Ejs8 https://www.apple.com/ Timestamps: 0:00 We got the Mac Guy! 1:33 M2 MacBook Air 5:34 Apple M2, gaming 8:21 iPadOS 16, Stage Manager, monitors 20:35 iOS 16, CarPlay, watchOS 9
The Worldwide Developers Conference, or just WWDC, is where Apple presents its software and cloud news. This year, besides the new versions of iOS, macOS and its other operating systems, the announcement of the new generation of Apple Silicon chips caught a lot of people by surprise. But will the powerful M2 make all the noise it promised? In today's episode, we chat about the main highlights of WWDC 2022, commenting on everything that caught our attention. Want to understand how to use the iPhone as a webcam, and why the M2 may really be an "M1 and a half"? Then hit play and join us!
## Participants
Thiago Mobilon
Paulo Higa
Felipe Ventura
Emerson Alecrim
## Credits
Producer: Josué de Oliveira
Editing and sound design: Ariel Liborio
Cover art: Vitor Pádua
Join The Full Nerd gang as they talk about the latest PC hardware topics. In this episode the gang covers Apple's announcement of the M2 at WWDC 2022, the results of our $1200 PC build challenge, and more. And of course we answer your questions live! *This episode is sponsored by SK hynix, the maker of fastest-in-class SSDs. Grab the Platinum P41 on Amazon and give your PC a fast and reliable upgrade: https://www.amazon.com/dp/B09QVD9V7R?maas=maas_adg_BFCDFE6551141D6970F40391F61426CD_afap_abs&ref_=aa_maas&tag=maas Read the M2 news on PCWorld.com: https://www.pcworld.com/article/782139/apple-m2-chip-wont-beat-intels-finest.html Buy The Full Nerd merch: https://crowdmade.com/collections/pcworld Join the PC related discussions and ask us questions on Discord: https://discord.gg/SGPRSy7 Follow the crew on Twitter: @GordonUng @BradChacos @MorphingBall @KeithPlaysPC @AdamPMurray Follow PCWorld for all things PC! ---------------------------------- SUBSCRIBE: http://www.youtube.com/subscription_center?add_user=PCWorldVideos TWITCH: https://www.twitch.tv/PCWorldUS TWITTER: https://www.twitter.com/pcworld
On MacBreak Weekly, Leo Laporte, Andy Ihnatko, Alex Lindsay, and Mikah Sargent talk about the new Continuity Camera feature that allows you to use your iPhone as a webcam. Full episode at twit.tv/mbw821 Hosts: Leo Laporte, Mikah Sargent, and Alex Lindsay You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/
Apple unveils M2, taking the breakthrough performance and capabilities of M1 even further.
Apple Announces M2 SoC: Apple Silicon for Macs updated for 2022.
Apple unveils all-new MacBook Air, supercharged by the new M2 chip.
13-inch MacBook Pro with M2.
Hands-on: Using the iPhone as a webcam with iOS 16 and macOS Ventura.
iPadOS 16 - All New Features.
macOS Ventura - All New Features.
iOS 16 - All New Features.
iOS 16 code includes multiple 'always-on display' references ahead of iPhone 14 Pro.
Picks of the Week
Mikah's Pick: G'Day World
Andy's Pick: Dual Monitor Stand Mount
Alex's Pick: Shokz OpenComm
Hosts: Leo Laporte, Alex Lindsay, Andy Ihnatko, and Mikah Sargent
Download or subscribe to this show at https://twit.tv/shows/macbreak-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
Sponsors: ZipRecruiter.com/macbreak Blueland.com/MACBREAK cachefly.com
This was perhaps one of Apple's longest presentations. Here are the main things we were shown:
iOS 16: widgets on the lock screen, editing and deleting messages in iMessage, updated CarPlay and HomeKit, and a new smart-device protocol, Matter (which will unite smart home developers)
iPadOS 16: the Stage Manager window manager and collaboration with other people in apps
watchOS 9: new features for runners and AFib tracking. And Apple Fitness now works without an Apple Watch.
macOS 13 Ventura: an updated window manager (Stage Manager), passwords replaced by passkeys (developed with Google and Microsoft), and new Continuity capabilities.
The Apple M2 chip: faster, higher, stronger than the M1, in short.
MacBook Air and MacBook Pro 13: last year's laptops updated to the M2. The Air no longer has tapered edges and now resembles the MacBook Pro 14. The MacBook Pro 13, meanwhile, looks the same as before, Touch Bar and all.
All in all, it was very packed, but of course you want to get your hands on everything. I hope we'll get to see some of this as early as this summer. Stay tuned!
Apple announced updated versions of its operating systems, including iOS 16, iPadOS 16, watchOS 9, and macOS Ventura. The company also announced its latest Apple Silicon chip, the M2, and two new Macs: an M2 MacBook Air and an M2 MacBook Pro.
Apple unveils an all-new Lock Screen experience and new ways to share and communicate in iOS 16
watchOS 9 delivers new ways to stay connected, active, and healthy
macOS Ventura adds powerful productivity tools and new Continuity features that make the Mac experience better than ever
iPadOS 16 takes the versatility of iPad even further with powerful new productivity and collaboration features
Apple unveils M2, taking the breakthrough performance and capabilities of M1 even further
Apple unveils all-new MacBook Air, supercharged by the new M2 chip
Hosts: Leo Laporte and Mikah Sargent
Download or subscribe to this show at https://twit.tv/shows/twit-news. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit