POPULARITY
In 2014, when Lisa Su took over as CEO of Advanced Micro Devices, AMD was on the verge of bankruptcy. Su bet hard on hardware and not only pulled the semiconductor company back from the brink, but also led it to surpass its historical rival, Intel, in market cap. Since the launch of ChatGPT made high-powered chips like AMD's "sexy" again, demand for chips has intensified exponentially, but so has the public spotlight on the industry, including from the federal government. In a live conversation at the Johns Hopkins University Bloomberg Center, as part of its inaugural Discovery Series, Kara talks to Su about her strategy in the face of the Trump administration's tariff and export control threats, how to safeguard the US in the global AI race, and what she says when male tech leaders brag about the size of their GPUs. Listen to more from On with Kara Swisher here. Learn more about your ad choices. Visit podcastchoices.com/adchoices
There it is, the showdown: the AMD Radeon RX 9070 (XT) with RDNA 4 and FSR 4 versus the Nvidia GeForce RTX 5070 (Ti) with DLSS 4. Fabian and Jan analyze how good AMD's new GPU architecture and its new AI upscaling have really turned out, and how AMD's two 70-class cards fare against Nvidia's two 70-class cards in the benchmark gauntlet and beyond.
This week, we're talking about live service games, AMD's new GPUs, Monster Hunter Wilds, and SO MUCH MORE!!!!
On this episode of Hands-On Tech, Mikah Sargent helps Jerry, who is experiencing issues getting iTunes to recognize their iPhone 16 on their Windows PC. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Want to support us and get access to our exclusive episodes of Goty & Blandat? Become a Patreon supporter! Patreon.com/gotypodden Join us on Discord! (05:34) Analysis & discussion: Xbox Developer Direct, EA fails, AMD's new graphics cards, and more. (40:57) Games: Yakuza 0, Ender Magnolia: Bloom in the Mist, Eternal Strands, Ninja Gaiden 2 Black, Resident Evil 2, Death Stranding, Persona 5 Royal, Heroes of Hammerwatch 2, Sniper Elite: Resistance (01:43:26) The radar: what's coming in the gaming world next week? Feedback, tips, and questions are welcome at gotypodden@gmail.com, on Discord, or on Instagram / Twitter @gotypodden. Thanks to Emma Idberg for our lovely artwork! GOTY merch is available in our merch store! If you want to hear or see more from us, you'll find our other podcasts and our YouTube channel in our link tree!
RobChrisRob linked up from their various pretend locations in low earth orbit to talk about matters of great importance, including the accidental crumpling of a Bezos rocket, the Beastie Boys playing their gold record for Paul's Boutique and discovering that it was in fact... NOT Paul's Boutique, a lady who mailed an AirTag to herself to catch a mail thief, the trailer for the remastered Goat Simulator being fire, NASA finally deciding to send the Starliner home uncrewed and stranding the astronauts for another 6 months, RFK driving 5 hours to cut off a whale's head with a chainsaw (wait, what?), the disability advocate who never existed, 3D printing a Benchy in under 2 minutes, the HDMI Forum blocking AMD's 120 Hz 4K HDMI Linux driver because we can't have nice things, an explanation for the famous 'Wow!' signal being found, and Xitter warning users that NPR might be unsafe. Join our discord to talk along or the Subreddit where you will find all the links https://discord.gg/YZMTgpyhB https://www.reddit.com/r/TacoZone/
AMD's new Zen 5 CPU, the Ryzen AI 300, counters the new ARM chips for Windows notebooks. Whether it succeeds is revealed in episode 2024/17 of the Bit-Rauschen podcast.
On this week's Windows Central Podcast, Zac is joined by Windows Central tech editor Ben Wilson to discuss the week's biggest news. We talk Intel's 13th/14th Gen issues, Intel's layoffs, and Intel being sued. We also discuss AMD's new Ryzen chips and how they compare to Intel and Qualcomm, plus the rumor that Microsoft is working on a new VR headset!
The first reviews of two of AMD's new Ryzen 9000 processors are online. PC enthusiasts shouldn't expect a gigantic leap in performance, but AMD excels in another area instead.
AMD's new desktop processors were originally supposed to go on sale at the end of July. Now, however, the company has decided to postpone the launch once more at short notice. The reason for it is understandable, though.
The last week of February was a busy one, and Fabian and Jan have plenty to discuss in episode 58 of CB-Funk: the review of Intel's "new" Core i5-14500 & i5-14400F, the hotly debated problems with Intel's K CPUs, AMD's second launch of the RX 7900 GRE, GPU price-performance charts with new FPS performance ratings... and then, on top of that, the Sunday question Fabian dreads, this time on "Star Citizen". Enjoy listening!
► Check out today's hottest tech deals here: https://www.ufd.deals/ https://howl.me/clrICKZP8Qx https://geni.us/LSALpB https://howl.me/clrIGBMQSM 0:00 - Intro 00:18 - S24 Launched: https://howl.me/clruqPh8yxs https://tinyurl.com/ylc362lg https://tinyurl.com/ypydaf6b https://tinyurl.com/yt35l9ew https://tinyurl.com/ywxmg254 https://tinyurl.com/yse269mw https://tinyurl.com/ylnl8sqz 04:33 - Sponsor 06:59 - New GPU Vulnerability: https://tinyurl.com/yug5orpz https://tinyurl.com/ykuenzy7 https://tinyurl.com/yr7lrjxc 08:21 - UFD Deals: https://www.ufd.deals/ https://howl.me/clrICKZP8Qx https://geni.us/LSALpB https://howl.me/clrIGBMQSMq 09:24 - AMD's Response to SUPER Cards: https://tinyurl.com/ylcfyym9 https://tinyurl.com/yt3hmnul https://tinyurl.com/yvtprl57 11:34 - Comment Response ► Follow me on Twitch - http://www.twitch.tv/ufdisciple ► Join Our Discord: https://discord.gg/GduJmEM ► Support Us on Floatplane: https://www.floatplane.com/channel/ufdtech ► Support Us on Patreon: https://www.patreon.com/UFDTech ► Twitter - http://www.twitter.com/UFDTech ► Facebook - http://www.facebook.com/ufdtech ► Instagram - http://www.instagram.com/ufd_tech ► Reddit - https://www.reddit.com/r/UFDTech/ Presenter: Brett Sticklemonster Videographer: Brett Sticklemonster Editor: Rikus Strauss Thumbnail Designer: Reece Hill
Google impresses with a Gemini AI video (and cheated a little in the process). We talk about Celonis's business model and valuation. Is regulation in Europe a disadvantage for the European tech scene? Google's Search Partner Network has brand-safety problems. There are earnings from Asana, Rent The Runway, Braze, C3AI, Sentinel One, DocuSign, and MongoDB. Current advertising partners of the Doppelgänger Tech Talk podcast and our sheet. Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Milano Vice Series A (00:02:50) Gemini (00:16:25) EU tech regulation (00:21:50) Celonis (00:33:00) Search Partner Network (01:10:20) Rent The Runway earnings (01:12:30) Braze earnings (01:15:00) Asana earnings (01:16:00) C3AI earnings (01:17:45) Sentinel One earnings (01:19:10) DocuSign earnings (01:20:20) MongoDB earnings (01:21:50) AMD's new chip Shownotes: Search Partner Network: LinkedIn, Adweek Gemini: Google
Wolfgang has taken a look at FSR 3 with FMF in Forspoken and Immortals of Aveum, and the verdict is unambiguous: AMD's artificially generated intermediate frames, "Fluid Motion Frames," apparently have potential, but given the many open construction sites (problems, bugs?) it is currently barely possible to find it. So it's detention for AMD's driver team. Fabian and Jan discuss the details in the podcast. Jan, meanwhile, is sitting in detention himself, on the topic of Intel Arc: he started with a well-intentioned test course of newer and somewhat older measurements, but with a great many GPUs, and at the end of the Arc A580 review the realization hit him: this isn't going to work (and Wolfgang had already predicted it). So, back to square one! Corsair has also reported for detention, certainly not out of politeness: in this case regarding the firmware of the new "budget keyboard" K70 Core, which before the embargo lifted tended to register doubled inputs when typing quickly; a first firmware update brought improvement, but not yet a cure. In stark contrast, Sony delivered this week, announcing a more compact PS5 that is nevertheless not called "Slim." Fabian and Jan discuss what's new about it as well. Episode #38 closes with a reader review by ComputerBase reader "Pizza!", more answers to listener questions, and the latest Sunday question. Enjoy listening!
We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support. We are facing a massive GPU crunch. As both startups and VCs hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it. We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects: * MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s! * Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product. * MLC LLM: a framework that allows any language model to be deployed natively on different hardware and software stacks. The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to the NVIDIA counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs some of the top NVIDIA consumer cards. If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM.
While speed performance isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware. We also enjoyed getting a peek into TQ's process, which involves a lot of sketching. With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue to get models in the hands of more people, and innovative software solutions to hardware problems! Show Notes * TQ's Projects: * XGBoost * Apache TVM * MXNet * MLC * OctoML * CMU Catalyst * ONNX * GGML * Mojo * WebLLM * RWKV * HiPPO * Tri Dao's Episode * George Hotz Episode People: * Carlos Guestrin * Albert Gu Timestamps * [00:00:00] Intros * [00:03:41] The creation of XGBoost and its surprising popularity * [00:06:01] Comparing tree-based models vs deep learning * [00:10:33] Overview of TVM and how it works with ONNX * [00:17:18] MLC deep dive * [00:28:10] Using int4 quantization for inference of language models * [00:30:32] Comparison of MLC to other model optimization projects * [00:35:02] Running large language models in the browser with WebLLM * [00:37:47] Integrating browser models into applications * [00:41:15] OctoAI and self-optimizing compute * [00:45:45] Lightning Round Transcript Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20] Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is an assistant professor in ML computer science at CMU, Carnegie Mellon University, also helping to run the Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42] Tianqi: I'm also, you know, very enthusiastic about open source. So I'm also a VP and PMC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53] Swyx: Yeah.
So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08] Tianqi: Let me say, yeah, so I really love coding, even though I'm trying to run all those things. So one thing that I keep as a habit is I do sketchbooks. I have real sketchbooks to draw down the design diagrams, and I've kept sketching over the years; now I have like three or four of them. And it's usually a fun experience of thinking the design through, and also seeing how an open source project evolves, and also looking back at the sketches that we had in the past to say, you know, all these ideas really turned into code nowadays. [00:01:43] Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, he'd be a very accomplished engineer. You built like three of these. What's that process like for you? Is the sketchbook, like, the start, and then you think about the code, or...? [00:01:59] Swyx: Yeah. [00:02:00] Tianqi: So usually I start sketching on high-level architectures, and in a project that runs over years, we also start to think about, you know, new directions, like, of course, generative AI language models come in, how it's going to evolve. So normally I would say it takes like one book a year, roughly, at that rate. I find it's much easier to sketch things out, and then it gives a more high-level architectural guide for some of the future items. Yeah. [00:02:28] Swyx: Have you ever published these sketchbooks? 'Cause I think people would be very interested, at least on a historical basis. Like, this is the time where XGBoost was born, you know? Yeah, not really. [00:02:37] Tianqi: I started sketching like after XGBoost.
So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48] Swyx: Yeah, we'll try to get them published, maybe in a journal. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57] Alessio: Yeah. And yeah, talking about XGBoost, a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using it in machine learning competitions. And I think there's like a whole Wikipedia page of state-of-the-art models that use XGBoost, and it's a really long list. When you were working on it... so we just had Tri Dao, who's the creator of FlashAttention, on the podcast. And I asked him this question: when you were building FlashAttention, did you know that almost any transformer-based model would use it? And so I asked the same question to you: when you were coming up with XGBoost, could you predict it would be so popular, and what was the creation process? When you published it, what did you expect? We had no idea. [00:03:41] Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning just came out. That was the time when AlexNet just came out. And one of the ambitious missions that myself and my advisor, Carlos Guestrin, had then was, we wanted to, you know, try to test a hypothesis: can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually one of the key characteristics of deep learning is that it's taking a lot [00:04:22] Swyx: of data, right?
[00:04:23] Tianqi: So we would be able to get the same amount of performance. That's the hypothesis we set out to test. Of course, if you look at it now, right, that's a wrong hypothesis, but as a byproduct, what we found out is that, you know, most of the gradient boosting libraries out there were not efficient enough for us to test that hypothesis. So I happened to have quite a bit of experience in the past of building gradient boosting trees and their variants. So XGBoost was kind of like a byproduct of that hypothesis testing. At that time, I was also competing a bit in data science challenges, like I worked on KDDCup, and then Kaggle kind of became bigger, right? So I kind of thought maybe it was becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That turned out to be a very good decision. Usually when I build something, we feel like maybe a command line interface is okay. And once we had a Python binding, we added R bindings. And then, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on building distributed support to make sure it works on any platform and so on. And even at that point, when I talked to Carlos, my advisor, later, he said he never anticipated that we'd get to that level of success. And actually, why did I push for gradient boosting trees? Interestingly, at that time, he also disagreed. He thought that maybe we should go for kernel machines. And it turns out, you know, actually, we were both wrong in some sense, and deep neural networks were the king of the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01] Swyx: Interesting. [00:06:02] Alessio: I'm always curious when it comes to these improvements, like, what's the design process in terms of coming up with it?
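The gradient-boosting idea TQ describes can be sketched in a few lines: each round fits a small tree to the residuals of the current ensemble and adds it with a learning rate. This is a hypothetical toy with one-split stumps, not XGBoost's actual implementation (which adds second-order gradients, regularization, column subsampling, and much more):

```python
# Toy gradient boosting on 1-D data: each round fits a regression stump
# (one threshold split) to the residuals, i.e. the negative gradient of
# squared error, then adds it to the ensemble scaled by a learning rate.
# Illustrative sketch only; names and structure are invented here.

def fit_stump(xs, residuals):
    """Find the single-threshold split minimizing squared error on residuals."""
    best = None
    for thr in xs:
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue  # degenerate split, skip
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda x: lmean if x <= thr else rmean

def boost(xs, ys, rounds=50, lr=0.3):
    """Build an additive ensemble of stumps, one per boosting round."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # a simple step function
model = boost(xs, ys)
print(round(model(0.5), 2), round(model(4.5), 2))  # → 0.0 1.0
```

Each round the ensemble shrinks the remaining error geometrically (by a factor of 1 - lr per round on this toy data), which is why even weak stumps converge to the step function.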
And how much of it is collaborative with other people that you're working with, versus, you know, obviously, in academia, it's very paper-driven, research-driven. [00:06:19] Tianqi: I would say the XGBoost improvement at that point was more like, you know, me trying to figure things out, combining lessons. Before that, I did work on some other libraries for matrix factorization. That was like my first open source experience. Nobody knew about it, because, if you go and try to search for the package SVDFeature, you'll find some SVN repo somewhere. But it's actually being used for some of the recommender system packages. So I was trying to apply some of the previous lessons there and trying to combine them. The later projects, like MXNet and then TVM, were much, much more collaborative in that sense. But, of course, XGBoost has become bigger, right? I started that project myself, and then it's really amazing to see people come in. Michael, who was a lawyer and now works in the AI space as well, contributed visualizations. Now we have people from our community contributing different things. So XGBoost even today, right, is a community of committers driving the project. So it's definitely something collaborative, moving forward on getting some of the things continuously improved for our community. [00:07:37] Alessio: Let's talk a bit about TVM too, because we've got a lot of things to run through in this episode. [00:07:42] Swyx: I would say that at some point, I'd love to talk about this comparison between XGBoost, or tree-based machine learning, compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah.
[00:08:04] Tianqi: Actually, what I said, when we tested the hypothesis, the hypothesis is, I would say, partially wrong, because the hypothesis we wanted to test was, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17] Swyx: now today, right? [00:08:18] Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being agnostic to the scale of the input and being able to automatically compose features together. And I know there are attempts at building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we were building TVM, we built cost models for the programs, and we are actually using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really easy to get them to work out of the box. And also, you will be able to get a bit of interpretability, and control monotonicity, [00:09:18] Swyx: and so on. [00:09:19] Tianqi: So yes, they're still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34] Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41] Tianqi: I think there are projects that try to bring transformer-type models to tabular data. I don't remember the specifics, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits.
So I think maybe eventually it's not even a replacement, it will just be an ensemble of models that you can call. Perfect. [00:10:07] Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about it; this came out at about the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model goes to ONNX, then goes to TVM. But I think a lot of people don't understand the nuances. Can you give a bit of the backstory on that? [00:10:33] Tianqi: So actually, that's kind of ancient history. Before XGBoost, I worked on deep learning for two or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machines to ImageNet classification. That is the thing I was working on. And that was before the AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. It took me about six months to get one model working. And eventually, that model was not so good, and we should have picked a better model. But that was like ancient history that really got me into this deep learning field. And of course, eventually, we found it didn't work out. So in my master's, I ended up working on recommender systems, which got me a paper, and I applied and got a PhD. But I always wanted to come back to work in the deep learning field. So after XGBoost, I started to work with some folks on this particular MXNet. At that time, the frameworks like Caffe, Theano, and PyTorch hadn't yet come out. And we were really working hard to optimize for performance on GPUs. At that time, I found it really hard, even for NVIDIA GPUs. It took me six months.
And then it's amazing to see on different hardware how hard it is to go and optimize code for the platforms that are interesting. So that got me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks, so that really very little machine learning engineering is needed to support deep learning models on the platforms that we're interested in. That's the motivation for starting to work on TVM. I think it started a bit earlier than ONNX, but once ONNX got announced, I think it was in a similar time period. So overall, how it works is that TVM will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also represent a loop-level program ingested from your machine learning models. Usually, you have model formats like ONNX, or in PyTorch, they have the FX tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations, like fusing operators together, doing smart memory planning, and, more importantly, generating low-level code. So that works for NVIDIA and is also portable to other GPU backends, even non-GPU backends [00:13:36] Swyx: out there. [00:13:37] Tianqi: So that's a project that has actually been my primary focus over the past few years. And it's great to see how it started from where, I think, we were the very early initiators of machine learning compilation. I remember one day during a visit, one of the students asked me, are you still working on deep learning frameworks? I told them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you see Torch Compile and other things.
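The compilation optimization TQ names first, fusing operators together, can be illustrated with a toy graph-rewriting pass. The tiny IR below (tuples of op name and constant) is invented for illustration and bears no relation to TVM's actual program representations:

```python
# Toy operator-fusion pass: runs of elementwise ops in a computation
# graph are merged into one "fused" op, so one kernel launch and one
# output buffer replace a buffer per op. Hypothetical IR, not TVM's.

ELEMENTWISE = {
    "add": lambda xs, c: [v + c for v in xs],
    "mul": lambda xs, c: [v * c for v in xs],
    "relu": lambda xs, _: [max(v, 0.0) for v in xs],
}

def fuse(graph):
    """Rewrite the graph, merging consecutive elementwise ops."""
    fused, run = [], []
    for op in graph:
        if op[0] in ELEMENTWISE:
            run.append(op)
        else:
            if run:
                fused.append(("fused", run))
                run = []
            fused.append(op)  # non-elementwise ops pass through unfused
    if run:
        fused.append(("fused", run))
    return fused

def execute(graph, xs):
    """Interpret a fused graph, counting output buffers we allocate."""
    buffers = 0
    for op in graph:
        if op[0] != "fused":
            raise NotImplementedError(op[0])  # sketch handles fused ops only
        for name, c in op[1]:  # apply the whole chain before materializing
            xs = ELEMENTWISE[name](xs, c)
        buffers += 1  # one buffer per fused kernel, not per op
    return xs, buffers

graph = [("mul", 2.0), ("add", 1.0), ("relu", None)]
out, nbuf = execute(fuse(graph), [-1.0, 0.0, 3.0])
print(out, nbuf)  # → [0.0, 1.0, 7.0] 1
```

Unfused, this three-op graph would allocate three intermediate buffers; after fusion the whole chain becomes one op, which is the same buffer-saving effect a real compiler gets by emitting one kernel.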
I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17] Alessio: I think the other thing that I noticed is, it's kind of a big jump in terms of area of focus to go from XGBoost to TVM; it's a different part of the stack. Why did you decide to do that? And the other thing, about compiling to different GPUs and eventually CPUs too: did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA, and how much of that went into it? [00:14:50] Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code; I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But then, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude that got me started on this. And also, I think when we look at different researchers, I am more of a problem-solver type. So I like to look at a problem and say, okay, what kind of tools do we need to solve that problem? So regardless, it could be building better models. For example, while we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now: if you want to be able to solve machine learning problems, it's no longer just at the algorithm layer, right? You kind of need to solve it from both the algorithm/data and the systems angle. And this entire field of machine learning systems, I think, is kind of emerging. And there's now a conference around it. And it's really good to see a lot more people starting to look into this. [00:16:10] Swyx: Yeah.
Are you talking about ICML or something else? [00:16:13]Tianqi: So machine learning and systems, right? Not only machine learning, but machine learning and systems. There's a conference called MLSys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people talk about the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for an academic. [00:16:48]Tianqi: If I hold an academic job, I need to do service for the community. Okay, great. [00:16:53]Swyx: Your most recent venture in MLSys is going to the phone with MLC LLM. You announced this in April. I have it on my phone. It's great. I'm running Llama 2, Vicuna — I don't know what other models you offer. But maybe just describe your journey into MLC. And I don't know how this coincides with your work at CMU — is that some kind of outgrowth? [00:17:18]Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's related to what we built in TVM. We built TVM five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works — the first one that works. But then we captured a lot of lessons there. So now we are building a second iteration called TVM Unity, which allows ML engineers to quickly capture new models and build on-demand optimizations for them. And MLC LLM is kind of an MLC vertical. It's more like a vertically driven organization where we go and build tutorials and projects like LLM solutions.
So that really shows, okay, you can take machine learning compilation technology, apply it, and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run the 70 billion models on Apple M2 Macs. Actually, on single-batch inference, more recently on CUDA, we get, I think, the best performance you can get out there already on 4-bit inference. Actually, as I alluded to before the podcast, we just had a result on AMD. On a single batch with the latest AMD GPU — this is a consumer card — we can get to about 80% of the 4090, NVIDIA's best consumer card out there. So it's not yet on par, but thinking about the diversity, what you can enable, and the price you can get that card at, it's really amazing what you can do with this kind of technology. [00:19:10]Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside TVM. I don't know — was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]Tianqi: So the idea is that, of course, it comes back to program representation, right? Effectively, TVM has this program representation called TVMScript that contains both computational graph and operator-level representations. So yes, initially, we do need to take a bit of effort to bring those models onto the program representation that TVM supports. Usually, there is a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes the PyTorch model onto TVM. That part is still being robustified so that we can bring more models in.
On language model tasks, what we actually do is directly build some of the model constructors and try to map directly from Hugging Face models. The goal is: if you have a Hugging Face configuration, we will be able to bring that in and apply optimizations to it. One fun thing about model compilation is that your optimization doesn't happen only at the source-language level. For example, if you're writing PyTorch code, you can go and try to use a better fused operator at the source-code level, and torch.compile might help you do a bit of things there. In most model compilation, optimization not only happens at the beginning stage; we also apply generic transformations in between, also through a Python API, so you can tweak some of that. That part of the optimization gives a lot of uplift in both performance and portability across environments. Another thing we do have is what we call universal deployment. If you get the ML program into this TVMScript format, where there are functions that take in tensors and output tensors, we will have a way to compile it, so you will be able to load the function in any of the language runtimes that TVM supports. So you could load it in JavaScript, and it's a JavaScript function that takes in tensors and outputs tensors. You can load it in Python, of course, and in C++ and Java. The goal there is really to bring the ML model to the languages people care about and be able to run it on the platforms they like. [00:21:37]Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspiration from a lot of early innovations in the field.
For example, with TVM initially, we took a lot of inspiration from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on machine-learning-related compilation. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. If you look at papers in machine learning venues — the MLSys conference, of course — and also systems venues, every year there will be papers around machine learning compilation. And at the compiler conference CGO, there's a C4ML workshop that also tries to focus on this area. So definitely it's already starting to gain traction and become a field. I wouldn't claim that I invented this field, but definitely I've helped work with a lot of folks there. And I try to bring a perspective, trying to learn a lot from compiler optimizations as well as trying to bring in knowledge of machine learning and systems together. [00:23:07]Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernel, so to speak? If your target is a CUDA runtime, you still get better performance — TVM kind of helps you get there, but that level you don't take care of, right? [00:23:34]Swyx: There are two parts in here, right? [00:23:35]Tianqi: So first of all, there is the lower-level runtime, like the CUDA runtime. And then, actually for NVIDIA, a lot of the moat came from their libraries, like CUTLASS and cuDNN, right? Those library optimizations. And also, for specialized workloads, you can actually specialize them. Because in a lot of cases you'll find that if you go and do benchmarks, it's very interesting.
Like two years ago, if you tried to benchmark ResNet, for example, usually the NVIDIA library [00:24:04]Swyx: gives you the best performance. [00:24:06]Tianqi: It's really hard to beat them. But as soon as you start to change the model to something else — maybe a bit of a variation of ResNet, not for the traditional ImageNet classification but for some other detection task and so on — there will be some room for optimization, because the people who go and optimize things sometimes overfit to the benchmarks. So that's the largest barrier: being able to get low-level kernel libraries, right? In that sense, the goal of TVM is to have a generic layer that can, of course, leverage libraries when available, but also automatically generate [00:24:45]Swyx: libraries when possible. [00:24:46]Tianqi: So in that sense, we are not restricted by the libraries that they have to offer. That's why we're able to run on Apple M2 or WebGPU, where there's no library available, because we are automatically generating the libraries. That makes it easier to support less well-supported hardware, right? WebGPU is one example. From a runtime perspective, AMD — I think before, their Vulkan driver was not very well supported. Recently, it's getting good. But even before that, we were able to support AMD through the GPU graphics backend called Vulkan, which is not as performant, but gives you decent portability across that [00:25:29]Swyx: hardware. [00:25:29]Alessio: And I know we've got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimizations that you're doing. So there are kind of four core things, right? Kernel fusion, which we talked a bit about in the FlashAttention episode and the tinygrad one; memory planning; and loop optimization. I think those are pretty, you know, self-explanatory.
I think those are the ones that people have the most questions about — can you quickly explain [00:25:53]Swyx: those? [00:25:54]Tianqi: So there are a few different things, right? Kernel fusion means that if you have an operator like a convolution, or in the case of a transformer an MLP, you have other operators that follow it, right? You don't want to launch two GPU kernels; you want to put them together in a smart way. As for memory planning, it's more about: hey, if you run Python code, every time you generate a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize that for you, so there is a smart memory allocator behind the scenes. But actually, in a lot of cases, it's much better to statically allocate and plan everything ahead of time, and that's where a compiler can come in. For language models it's actually much harder, because of dynamic shapes. So you need to be able to do what we call symbolic shape tracing. We have a symbolic variable that tells you the shape of the first tensor is n by 12, and the shape of the third tensor is also n by 12, or maybe it's n times 2 by 12. Although you don't know what n is, you will know that relation and be able to use it to reason about fusion and other decisions. Besides this, I think loop transformation is quite important, and it's actually non-traditional. Originally, if you simply write code and want to get performance, it's very hard. For example, if you write a matrix multiplication, the simplest thing you can do is for i, j, k: C[i][j] += A[i][k] * B[k][j]. But that code is 100 times slower than the best available code that you can get.
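The naive triple loop Tianqi describes can be written out directly; it is correct, but it leaves all the tiling, shared-memory staging, and tensor-core use on the table, which is exactly what the loop transformations he goes on to describe recover:

```python
def matmul_naive(A, B):
    """The straightforward triple loop: C[i][j] += A[i][k] * B[k][j].
    Correct, but far slower than a tiled/vectorized implementation."""
    n, inner, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_naive(A, B))  # → [[19, 22], [43, 50]]
```

Loop transformations reorder, tile, and parallelize these three loops without changing the arithmetic, which is why a compiler can apply them automatically.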
So we do a lot of transformations, like taking the original code, putting things into shared memory, and making use of tensor cores, memory copies, and all of this. Actually, we also realized that we cannot do all of them ourselves. So we also made the ML compilation framework a Python package, so that people can continuously improve that part of the engineering in a more transparent way. We find that's very useful for getting good performance very quickly on some of the new models. Like when Llama 2 came out, we were able to go and look at the whole thing — here's the bottleneck — and go and optimize it. [00:28:10]Alessio: And then the fourth one being weight quantization. So everybody wants to know about that. And just to give people an idea of the memory savings: if you're doing FP32, it's four bytes per parameter; int8 is one byte per parameter. So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]Tianqi: Right now, a lot of people mostly use int4 for language models. So that really shrinks things down a lot. And more recently, we actually started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is allow developers to customize the quantization they want, while we still bring the optimized code for them. So we are working on this item called bring-your-own-quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think it's an open field that's being explored: can you bring in more sparsity, can you quantize activations as much as possible, and so on. It's going to be something that's going to be relevant for quite a while.
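The memory arithmetic Alessio sketches is simple enough to write down: weight memory is roughly parameter count times bits per parameter, ignoring activations and the KV cache. A minimal sketch (the 7B parameter count is just an illustrative example):

```python
def weight_gb(n_params, bits_per_param):
    """Approximate weight memory in GB: params * bits / 8.
    Ignores activations, KV cache, and per-group quantization scales."""
    return n_params * bits_per_param / 8 / 1e9

n = 7e9  # e.g. a 7B-parameter model
for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {weight_gb(n, bits):.1f} GB")
# → fp32: 28.0 GB, fp16: 14.0 GB, int8: 7.0 GB, int4: 3.5 GB
```

This is why int4 is the point where a 7B-class model first fits comfortably in consumer GPU or phone memory; real quantized files are slightly larger because of scale/zero-point metadata.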
[00:29:27]Swyx: You mentioned something I wanted to double back on, which is that most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML-type people, or are the researchers who are training the models also using int4? [00:29:40]Tianqi: Sorry, I'm mainly talking about inference, not training, right? When you're doing training, of course, int4 is harder; maybe you could do some form of mixed-type precision there. For inference, I think in a lot of cases you will be able to get away with int4. And actually, that does bring a lot of savings in terms of memory overhead, and so on. [00:30:09]Alessio: Yeah, that's great. Let's talk a bit about GGML, and then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model-level re-implementation and improvements; Mojo is a language, a superset of Python; you're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]Tianqi: So in this case, I think it's great to see the ecosystem become so rich, with so many different approaches. In our case, GGML is more like you're implementing something from scratch in C, right? That gives you the ability to go and customize for each particular hardware backend. But then you will need to write your CUDA kernels, and write them again for AMD, and so on. So the engineering effort is a bit broader in that sense. Mojo, I have not looked at in specific detail yet. I think it's good to start by saying it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it sits in an interesting place there. In the case of MLC, our position is that we do not want to have an opinion on how, where, and in which language people want to develop, deploy, and so on. And we also realize that there are actually two phases.
You want to be able to develop and optimize your model — by optimization, I mean really bringing in the best CUDA kernels and doing some of the machine learning engineering in there. And then there's a phase where you want to deploy it as part of an app. So if you look at the space, you'll find that GGML is more like: I'm going to develop and optimize in the C language, and then deploy in most of the low-level languages they have. And Mojo is: you develop and optimize in Mojo, and you deploy in Mojo — in fact, that's the philosophy they want to push for. In the MLC case, we find that if you want to develop models, the machine learning community likes Python, so Python is the language you should focus on. So in the case of MLC, we really want to enable you not only to define your model in Python — that's very common, right? — but also to do the ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things, in Python, which makes it customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're an embedded systems person, maybe you'd prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want to have this vision of: you build a generic optimization in Python, then you deploy it universally onto the environments people like. [00:32:54]Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure we cover is that I think you are one of this emerging set of academics that is also very focused on your artifacts of delivery. Of course. Something we've talked about for years — a guest who was very focused on his GitHub. And obviously you treated XGBoost like a product, you know? And now you're publishing an iPhone app. Okay. Yeah. Yeah.
What is your thinking about academics getting involved in shipping products? [00:33:24]Tianqi: I think there are different ways of making impact, right? Definitely, there are academics who are writing papers and building insights so that people can build products on top of them. In my case, in the particular field I'm working on, machine learning systems, I feel like we really need to get it into the hands of people so that we really see the problem, right? And we show that we can solve the problem. It's a different way of making impact. And there are academics doing similar things — if you look at some of the people from Berkeley, right, every few years they come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impact. And I feel like being able to do open source and work with the open source community is really rewarding, because we have real problems to work on when we build our research. That research comes together, and people are able to make use of it. And we also start to see interesting research challenges that we wouldn't otherwise see if we were just trying to do a prototype and so on. So I feel like it's one interesting way of making impact, making contributions. [00:34:40]Swyx: Yeah, you definitely have a lot of impact there. And having had experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation — human compilation — effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]Tianqi: So I think there are a few elements that need to come in, right? First of all, you do need a MacBook, the latest one, like an M2 Max, because you need the memory to be big enough to cover that.
So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM. So the M2 Max, the upper version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same: whether it's running on iPhone, on server cloud GPUs, on AMD, or on a MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customized iteration for particular ones. And then it runs in the browser runtime, this package called WebLLM. So effectively, what we do is take that original model and compile it to what we call WebGPU, and then WebLLM will pick it up. WebGPU is this latest GPU technology that major browsers are shipping right now — you can get it in Chrome already. It allows you to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially we asked the question: can you run 70 billion on a MacBook? That was the question we were asking. So first — Jin Lu, the engineer pushing this, got 70 billion running on a MacBook. We had a CLI version. In MLC, that runs through a Metal accelerator; effectively, you use the Metal programming language to get the GPU acceleration. So we found, okay, it works for the MacBook. Then we asked: we have a WebGPU backend, why not try it there? So we just tried it out, and it was really amazing to see everything up and running. And it actually runs smoothly in that case. So I do think there are some interesting use cases already in this, because everybody has a browser — you don't need to install anything. I don't think it makes sense yet to really run a 70 billion model in a browser, because you kind of need to download the weights and so on. But I think we're getting there.
Effectively, the most powerful models you will be able to run on a consumer device — it's really amazing. And also, in a lot of cases, there might be use cases: for example, if I'm going to build a chatbot that I talk to and it answers questions, maybe some of the components, like the voice-to-text, could run on the client side. So there are a lot of possibilities for having something hybrid that contains an edge component and something that runs on a server. [00:37:47]Alessio: Do these browser models have a way for applications to hook into them? So in my app, I could use OpenAI or I could use the local model. Of course. [00:37:56]Tianqi: Right now, actually, we are building — so there's an NPM package called WebLLM, right? So if you want to embed it into your web app, you can directly depend on WebLLM and use it. We also have a REST API that's OpenAI-compatible. That REST API, I think, right now actually runs on the native backend, so if you have a CUDA server, it's faster to run on the native backend. But we also have a WebGPU version of it that you can go and run. So yeah, we do want to have easier integrations with existing applications, and the OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]Swyx: I actually did not know there's an NPM package — that makes it very, very easy to try out and use. One thing I'm unclear about is the chronology, because as far as I know, Chrome shipped WebGPU at the same time that you shipped WebLLM. Okay, yeah. So did you have some kind of secret chat with Chrome? [00:38:57]Tianqi: The good news is that Chrome does a very good job of having early releases. So although the official shipment of Chrome WebGPU was at the same time as WebLLM, you were actually able to try out WebGPU technology in Chrome earlier. There is an unstable version called Canary.
I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models; it was running on less interesting — well, still quite interesting — models. And then this year, we really started to see it getting mature and the performance keeping up. So we made a more serious push to bring a language-model-compatible runtime onto WebGPU. [00:39:45]Swyx: I think you'll agree that the hardest part is the model download. Have there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it into a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to redownload the model again. So there is already something there. But of course, you have to download the model at least once to be able to use it. [00:40:19]Swyx: Okay. One more thing in general before we zoom out to OctoAI. The last question is: you're not the only project working on, I guess, local models — that's right — alternative models. There's GPT4All, there's Ollama that just recently came out, and there are a bunch of these. What would be your advice to them on what's a valuable problem to work on, and what is just a thin wrapper around ggml? Like, what are the interesting problems in this space, basically? [00:40:45]Tianqi: I think making the API better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things.
So we're looking forward to collaborating with all those ecosystems, and working to support bringing in models more universally while keeping up the best performance when possible, in a more push-button way. [00:41:15]Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service that basically focuses on optimizing model runtimes and acceleration and compilation. What has been the evolution there? OctoML started as kind of a traditional MLOps tool, where people were building their own models and you helped them on that side, and it seems like now most of the market is shifting to starting from pre-trained generative models. What has that experience been for you, how have you seen the market evolve, and how did you decide to release OctoAI? [00:41:52]Tianqi: One thing that we found is that, on one hand, it's really easy to go and get something up and running, right? But when you start to consider all the possible availability and scalability issues, and even integration issues, it becomes kind of interesting and complicated. So we really want to make sure to help people get that part easy, right? And now, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help with. We're also building on top of the technology we built to enable things like portability across hardware. You won't have to worry about the specific details, right? Just focus on getting the model out; we'll work on the infrastructure and other things that help on the other end. [00:42:45]Alessio: And when it comes to getting optimization in the runtime — we run an early-adopters community, and I see that most enterprises' issue is how to actually run these models. Do you see that as one of the big bottlenecks now?
I think a few years ago it was like, well, we don't have a lot of machine learning talent; we cannot develop our own models. Versus now it's like, there are these great models you can use, but I don't know how to run them efficiently. [00:43:12]Tianqi: That depends on what you mean by running, right? On one hand, it's easy to download MLC and run it on a laptop. But then there are also different decisions: what if you are trying to serve larger user requests? What if the requests change? What if the availability of hardware changes? Right now it's really hard to get the latest NVIDIA hardware, unfortunately, because everybody's trying to build things using the hardware that's out there. So I think when the definition of "run" changes, there are a lot more questions around things. And in a lot of cases, it's not only about running models; it's also about being able to solve the problems around them. How do you manage your model locations, and how do you get your model close to your execution environment more efficiently? So there are definitely a lot of engineering challenges out there that we hope to alleviate, yeah. And also, if you think about our future: given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include mechanisms for cutting down costs and bringing things to the edge and the cloud in a more natural way. So I feel like we're still at a very early stage, but it's already good to see a lot of interesting progress. [00:44:35]Alessio: Yeah, that's awesome. I would love — I don't know how much we're going to go in depth into it — but what does it take to actually abstract all of this from the end user? They don't need to know what GPUs you run or what cloud you're running them on; you take all of that away.
What was that like as an engineering challenge? [00:44:51]Tianqi: So I think there are engineering challenges on several fronts. First of all, you need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, not too surprisingly, that most of the latest libraries work well on the latest GPU. But there are other GPUs out there in the cloud as well. So certainly, having the know-how and being able to do model optimization is one thing. There is also the infrastructure for scaling things up and locating models. And in a lot of cases, we do find that on typical models, it also requires vertical iteration. So it's not about building a silver bullet that is going to solve all the problems. It's more that we're building a product, we work with the users, and we find interesting opportunities at a certain point; our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]Swyx: Awesome. [00:45:46]Alessio: We can jump into the lightning round unless, I don't know, Sean, you have more questions, or TQ, you have more stuff you wanted to talk about that we didn't get a chance to [00:45:54]Swyx: touch on. [00:45:54]Alessio: Yeah, we have talked a lot. [00:45:55]Swyx: So, yeah. We always like to ask: do you have commentary on other parts of AI and ML that are interesting to you? [00:46:03]Tianqi: So right now, I think one thing that we are really pushing hard for is this question of how far we can bring open source, right? I'm kind of a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that you just have a few big players, you just talk to those bigger language models, and they can do everything, right?
On the other hand, one of the things that we in academia are really excited about and pushing for — that's one reason why I'm pushing for MLC — is: can we build something where you have different models? You have personal models that know the best movies you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And have a wide ecosystem of AI agents that helps each person while still being able to do things like personalization. Some of them can run locally, some of them, of course, run on the cloud — and how do they interact with each other? So I think it's a very exciting time where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]Swyx: One more thing, which is something I'm also pursuing — and this kind of goes back into predictions, but also back into your history — do you have any idea, or are you looking out for anything, post-transformers as far as architecture is concerned? [00:47:32]Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are state space models — some of our colleagues, like Albert Gu, worked on the HiPPO models, right? And then there is an open source version called RWKV. It's a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see one of those models. [00:48:00]Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing; it's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]Tianqi: I do love to see this model space continue growing, right? And I feel like in a lot of cases, it's just that the attention mechanism is getting changed in some sense.
So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we are trying to push on is productivity for machine learning engineering, so that as soon as some of these models come out, we will be able to, you know, empower them on those environments that are out there. [00:48:43]Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about it. Okay. Lightning round, as always, fun. I'll take the first one. Acceleration: what has already happened in AI that you thought would take much longer? [00:48:59]Tianqi: The emergence of conversational chatbot ability is something that kind of surprised me before it came out. This is one piece that I originally thought would take much longer, but yeah, [00:49:11]Swyx: it happened. And it's funny because the original Eliza chatbot goes all the way back in time, right? And then it just suddenly came back again. Yeah. [00:49:21]Tianqi: It's always interesting to think about, but with a kind of different technology [00:49:25]Swyx: in some sense. [00:49:25]Alessio: What about the most interesting unsolved question in AI? [00:49:31]Swyx: That's a hard one, right? [00:49:32]Tianqi: So I can tell you what I'm excited about. I have always been excited about this idea of continuous learning and lifelong learning in some sense: how AI continues to evolve with the knowledge that is out there. It seems that we're getting much closer with all those recent technologies. So being able to develop systems that support this, and to think about how AI continues to evolve, is something that I'm really excited about. [00:50:01]Swyx: So specifically, just to double click on this, are you talking about continuous training?
That's like a training thing. [00:50:06]Tianqi: I feel like, you know, training and adaptation are all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context continuously curated and fed into the models. So I think all these things are interesting and relevant here. [00:50:29]Swyx: Yeah. I think this is something that people are really asking about. You know, right now we have moved a lot into the pre-training phase and off-the-shelf model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be takeaways. What's one message that you want every listener, every person, to remember today? [00:50:54]Tianqi: I think it's getting more obvious now, but one of the things that I always want to mention in my talks is that when you're thinking about AI applications, originally people thought a lot more about algorithms, right? Algorithms and models are still very important. But usually when you build AI applications, it takes, you know, both the algorithm side, the system optimizations, and the data curation, right? So it takes a connection of so many facets to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]Tianqi: Thank you for having me. [00:51:47]Swyx: Yeah. [00:51:47]Alessio: Thanks for coming on, TQ. [00:51:49]Swyx: Have a good one. [00:51:49] Get full access to Latent Space at www.latent.space/subscribe
Ben Bajarin and Jay Goldberg discuss the new competitive dynamics emerging in semiconductors thanks to AI. They also highlight a few takeaways from AMD's recent AI and Datacenter Day, which they attended.
The EU Parliament votes for the AI Act, AMD's new AI chips, Bard learns to code, tensions between Microsoft and OpenAI, and Google SGE put to the test. https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/
Without question, Star Wars Jedi: Survivor has crash-landed technically on PC: even high-end CPUs currently can't keep fast graphics cards fed, and depending on the scene the FPS drop sharply and the game stutters. At least the game is visually impressive, as Jan found out on the Asus ROG Strix Scar 17 with AMD's Dragon Range CPU, after which he and Fabian also discuss the review of the 16-core Ryzen 9 7945HX. Also back on the agenda: the (far too) late end of support for a disappointing 170-euro mouse from Roccat, and to close, another helping from the gaming graphics card rumor mill.
This week we talk about the renaissance of handheld game consoles, which seems to be in full swing: Steam Deck, Switch, and the new Asus Ally. Also the best mid-range phones of 2023 and AMD's worst take on CPU naming. All this and much more in this week's episode of TechBubbel. 00:00:00 – Welcome to TechBubbel 00:02:22 – Thanks to our patrons! 00:05:53 – What is the future of handheld game consoles? 00:28:26 – The best mid-range phone right now 00:43:58 – Nvidia turns 30 00:53:56 – Facepalm of the week 00:58:17 – Bubble of the week 01:02:57 – Support us at Patreon.com/techbubbel! Executive producers: Oskar Eriksson Joa War Mathias Alexandersson Mattias Ctrl Enqvist Emil Råsmark Rikner Oscar Wahlberg Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Daniel Timm Mats Jidaker
In this week's episode, we're so excited to bring Lynn Geautraux to you! Lynn is also going to be a presenter at the symposium. He self-published his book, Comprehensive Cane Tip and AMD Modifications, Design and Refurbishing, in which he shares his knowledge and many years of experience tinkering with AMDs, which we can learn from and even hear about in person during his upcoming symposium presentation. If that doesn't get you excited already, come join us on the podcast and listen to what he has to share! Links: Allied Independence, Website | Instagram | Facebook Comprehensive Cane Tip and AMD Modifications, Design and Refurbishing, Book
CES 2023 is over and brought numerous new product announcements. Editors Jan and Fabian discuss them and offer a glimpse behind the scenes.
This week we talk about CES 2023 and the biggest news, including MagSafe coming to everyone, Nvidia's bad launch that we didn't think could get any worse, new processors from Intel and AMD, and new high-brightness OLED TVs. All this and much more in this week's episode of TechBubbel. 00:02:39 – Thanks to our patrons 00:04:45 – Intel brings 24 cores to laptops 00:15:32 – Nvidia is the worst 00:34:55 – AMD's gaming CPUs could win 2023 00:53:16 – The biggest OLED news ever 01:02:27 – Facepalm of the week 01:04:23 – Bubble of the week Executive producer: Mattias Ctrl Enqvist Mathias Alexandersson Joa War Oskar Eriksson Emil Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Mats Jidaker Daniel Timm
A wild week. It started with the reviews and the retail launch of AMD's Radeon RX 7900 XT(X): drivers not finished, performance numbers all over the place, power consumption far too high, hardly any stock at retailers. Trump's big announcement was... collectible-card NFTs! And Elno. Elno, Elno, Elno. First he banned numerous journalists, ostensibly for doxxing, then he prohibited posting links to other social media networks. Of course, he reversed that not even two hours (!) after our recording on Sunday evening, 2022-12-18, and then held a Twitter poll asking the "confidence question" of whether he should step down as CEO (57% voted "yes"). We didn't record an update on it; at some point, enough is enough. It could be outdated again by the time this episode goes online anyway. Enjoy episode 131! Speakers: Meep, Michael Kister. Visit us on Discord https://discord.gg/SneNarVCBM on Twitter https://twitter.com/technikquatsch on YouTube https://www.youtube.com/channel/UCm7FRJku8ZzrZkmeY79j0WQ 00:00:49 Pre-Christmas plans and Avatar; Tasting History with Max Miller: The true story of the First Thanksgiving https://www.youtube.com/watch?v=ixTkzBuD-cw 00:08:20 Muskerade https://www.businessinsider.de/wirtschaft/twitter-hat-keine-chance-elon-musk-war-schon-immer-ein-blender-jetzt-fliegt-er-auf-a/ https://twitter.com/JanAlbrecht/status/1604589609303900160 https://www.nbcnews.com/tech/social-media/twitter-suspends-journalists-covering-elon-musk-company-rcna62032 https://www.zdf.de/nachrichten/digitales/twitter-sperre-journalisten-musk-julian-jaursch-100.html#xtor=CS5-62 https://twitter.com/az_rww/status/1604008769397981185 Lambrecht, Musk, and an aristocrat: awarding the Golden Vollpfosten | heute-show from 16.12.2022 https://www.youtube.com/watch?v=h8AUBxaxHtQ#t=7m56s 00:23:24 Trump NFTs https://collecttrumpcards.com/ 00:27:18 Steam Deck, among other things, landed in Japan; further plans from Valve https://www.theverge.com/23499215/valve-steam-deck-interview-late-2022 00:30:26 Botched launch for the AMD Radeon RX 7900 XT(X) https://www.computerbase.de/2022-12/amd-radeon-rx-7900-xtx-xt-review-test/ Gamer's Nexus: AMD Radeon RX 7900 XTX Review & GPU Benchmarks https://www.youtube.com/watch?v=We71eXwKODw Digital Foundry: Radeon RX 7900 XTX/ RX 7900 XT vs RTX 4080 Review https://www.youtube.com/watch?v=8RN9J6cE08c Hardware Unboxed: RTX 4080 Killer? Radeon RX 7900 XTX Review & Benchmarks https://www.youtube.com/watch?v=4UFiG7CwpHk 00:41:20 A giant leap for fusion research, a small one for mankind https://bigthink.com/the-future/fusion-power-nif-hype-lose-energy/ https://www.spektrum.de/news/ist-in-der-fusionsforschung-ein-durchbruch-gelungen/2087187 00:59:15 Caution with science news, or: better sex through chocolate 01:10:45 Intel wants more money for the planned fab near Magdeburg https://www.wiwo.de/technologie/digitale-welt/halbleiterindustrie-intel-will-mehr-geld-baustart-von-werk-in-magdeburg-koennte-sich-verschieben/28874806.html 01:19:25 Satellite internet for data centers https://www.handelsblatt.com/technik/it-internet/satelliten-internet-rivale-fuer-starlink-europaeische-satelliten-fuer-neues-hochleistungsnetz-gestartet/28874130.html 01:28:56 Elno interim update (yes, but not the latest) 01:34:30 M&M tired
We talk about, among other things, AMD's new cards, the ZA/UM mess, PSVR2, Metroid Prime 4, Jedi: Survivor, Gotham Knights, Case of the Golden Idol, Signalis, Animal Crossing: New Horizons, Bayonetta 3 and God of War: Ragnarök. Support us on Patreon! For 50 SEK a month you get access to the podcast uncut, right after it's recorded, an exclusive monthly bonus episode called KB+(PLUS), plus access to all previous exclusive content and everything released over the Christmas/New Year and summer breaks. For 100 SEK a month you get everything the 50 SEK patrons get, but you ALSO get to be part of the podcast through the THRILLING "Audio Log" segment, where we play a short voice message from our 100 SEK patrons and respond to, answer, or judge it. At this tier you also get first dibs on any codes & freebies we occasionally offer! www.patreon.com/kontrollbehov Buy our merch at Podstore.se! https://www.podstore.se/podstore/kontrollbehov/ Visit our YouTube channel and subscribe: https://www.youtube.com/channel/UClQ2sTbiCcR0dqNFHwcTB0g Join the group Kontrollbehov - Eftersnack on Facebook: https://www.facebook.com/groups/1104625369694949/?ref=bookmarks We're of course also on Discord! https://discord.gg/848F6TWXDY Get in touch: kontrollbehovpodcast@gmail.com
Today we talk about the insane prices of NVIDIA's new GPU lineup, AMD's hot CPUs, Intel Arc, Google Stadia and Portal RTX. Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
Today we talk about AMD's announcement of the longevity of AM5, Intel revealing A770 performance, and Ethereum's proof of work. Gaming news covers Overwatch ditching loot boxes, no split-screen Halo, and Valve games in development. https://www.youtube.com/watch?v=p0yjlORHYYg Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
It's Episode 70 of the Zenspath 4 Button Podcast! Join Jeremy (@Zenspath), Rachel (@Out_Racheous), Kelly (@MississippiSci) & Joie (@fromforestgreen) as we discuss the recent Nintendo Direct Mini, the Xenoblade 3 CE debacle, AMD's new game dev tools, John Williams finishing up, Kojima doing good, Kotick doing bad, & more!
This week we talk about the best and hottest from the world's biggest computer trade show. How hot will AMD Zen 4 actually run, why is Corsair making a laptop, and which announcements never materialized at this year's show? All this and much more in this week's episode of TechBubbel. 00:04:04 – AMD's 5.5 GHz flagship 00:28:04 – Corsair's ugly laptop 00:32:08 – A 500 Hz gaming monitor 00:37:57 – Nvidia, Qualcomm and Intel 00:46:50 – We sum up Computex 2022 00:48:52 – Facepalm of the week 00:51:42 – Bubble of the week Executive producer: Mattias Ctrl Enqvist Joa War Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Oskar Eriksson Mats Jidaker Daniel Timm Mathias Alexandersson Emil Råsmark Rikner
In the regular show we talk about AMD's announcements at Computex, mainly the much-anticipated 7000 series CPUs, which will debut in the fall with the new AM5 motherboards. Also: AMD driver updates doing great for DX11, Steam Deck support on the rise, Star Citizen gouging, and Duke Nukem Forever. Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
This week we talk about how good Meta's plans for VR and AR hardware really are and what the future of XR looks like. Also AMD's enormous growth, and an unusually fast roadmap with Zen 4 in laptops. All this and much more in this week's episode of TechBubbel. 00:03:08 – Thanks to our patrons 00:07:30 – Are AMD's successes under threat? 00:32:36 – Has Meta bitten off more VR than it can chew? 00:55:36 – Facepalm of the week 00:59:04 – Bubble of the week Executive producer: Mattias Ctrl Enqvist Joa War Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Oskar Eriksson Mats Jidaker Daniel Timm Mathias Alexandersson Emil Råsmark Rikner
Nintendo delays Advance Wars Re-Boot Camp indefinitely due to the war in Ukraine. Square Enix did a bait and switch with Chocobo GP. A long stress test reveals nothing to fear with Switch OLED burn-in. Steam released Windows drivers for the Steam Deck; should you use them? Someone installed Windows 11 on a Surface Duo. Google hints at Windows games coming to Stadia... but why? Reports show 4 new AMD CPUs coming. AMD's new high-end CPU may not be BLW. Intel GPUs may be coming soon, between May and June. Nvidia RTX 3090 Ti at the end of the month? Android to introduce app archiving. Intel wants the end of DDR4. Android wants tablets to be the future. Masturbation pods as a work perk? --- Send in a voice message: https://anchor.fm/eagleeyesontech/message
Season 3 of The Panel Clubhouse kicks off with a discussion about the now-controversial Air Max Day (month). @rain_nelson_445, @blcklistd, @millzgp and @vanwilljamz are joined by some of the UK's more prominent sneaker figures to explore the energy, motivations and releases of past and future AMDs. Head over to our socials to get involved with the conversation and have your say; all are welcome! Make sure to follow us on your podcast app, on Clubhouse and on Instagram www.instagram.com/thepanel.online. Finally, remember to use the hashtag #thepanel on your sneaker pics! Stay safe, stay blessed. The Panel
Daniel Buitrago, Brandon Fifield & Jack Lau pin the throttle with Nick Olzenak of AMDS & PWS Books Beard combs & Beardos, Wild Game Meat party, Drop Anchor get boarded, Sea Trials with the Commish, Future boat design, New Zealand's marine boat innovation, Coastal Craft to Kingfisher, Tags on Friday, buying a Cali boat, I need that boat for biz, History of AMDS and commercial drive, Snow check spring fever, SkiDoo time, sleds for kids, muff pots and cooker cans, COOP sled option, What's up with Arctic Cat, everyone needs a tractor, treasure hunters, carrying on a legacy, not getting blown off the hook, winter boating and prep, sea lions hunting B Ranger Danger, Tastes so good smallest deer story, sketchy winter boating story www.alaskawildproject.com https://www.youtube.com/channel/UCbYEEV6swi2yZWWuFop73LQ https://www.instagram.com/alaskawildproject/
While the whole world is talking about the chip shortage, Intel has laid out extremely ambitious plans for no fewer than four new processor generations for desktop PCs and notebooks through 2025. Despite years of delays in moving to its 10 nm process, the US company is optimistic it can bring a new process generation into mass production every year starting next year. With this, Intel also aims to counter its steadily gaining competitor AMD, which recently overtook Intel in market capitalization despite generating only a quarter of the revenue. At the same time, Apple is set to complete its departure from Intel processors this year. What lies ahead on the CPU market in light of all this is the topic of a new #heiseshow. What has Intel announced for the coming three years? Where does the company stand today, and how realistic is its ambitious CPU roadmap through 2025? What is AMD planning, and how serious is the competition for Intel really? How does AMD intend to keep closing Intel's still-sizeable lead? How do AMD's and Intel's technical approaches differ, given that process node sizes are only one aspect? What else do the two companies do differently, and what explains the contrasting picture they present? And how does the wider competition look; what does Apple's switch to its own CPU technology mean? Martin Holland (@fingolas) discusses these and many more questions, including from the audience, with c't editor Carsten Spille (@carstenspille) and Mark Mantel of heise online in a new episode of the #heiseshow. === Ad / sponsor notice === Grab the exclusive deal + gift for NordVPN's birthday: nordvpn.com/heiseshow Now with the risk-free 30-day money-back guarantee. === End of ad / sponsor notice ===
John and Laura Wells join Adrian and Dave to discuss the new Association for Metal Detecting Sport, a new UK-based organisation representing metal-detecting hobbyists.
In this episode Aubrey shares her passion for dance and its therapeutic effect for those who experience mental health issues. She also shares how she fits dance into her marriage. Enjoy!
In this episode of the Cudos Cast, we chat about the disruptive nature of blockchain and AMD's strategy for blockchain. We dive into NFTs and how they can be integrated into an ecosystem. We then shift the focus to Eth 2.0 and its move to full proof of stake, the increased demand for hardware in the market, and balancing the demand and politics between miners and gamers. Show Notes: [00:04:38] Blockchain as a disruptive factor in the industry [00:10:40] AMD's strategy on blockchain [00:18:12] Other applications that can boost AMD's value to the gaming base [00:20:24] Ethereum 2.0 as a threat? [00:24:55] Balancing the political landscape between miners and gamers [00:31:55] Earning passive income from hardware Links: Website: https://www.cudoventures.com/ Twitter: https://twitter.com/cudoventures?lang=en Instagram: https://www.instagram.com/cudoscast/ AMD: https://www.amd.com/en Algorand: https://www.algorand.com/
Only recently Nvidia unveiled a new graphics card monster; now AMD follows suit and even goes one better. The new model also employs a very special technology.
In the show we talk about Lisa Su and her statements regarding the continued chip shortage, RTX 3060s hitting the used market after China's crypto crackdown, AMD's upcoming RX 6600 XT release, a DLSS performance 'hack' that shows promise, and more gaming news. https://blog.feedspot.com/computer_hardware_podcasts/ Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
In the show we talk about Nic's new acquisition (a 3080 Ti), the lacklustre rollout of AMD's new FidelityFX Super Resolution, Windows 11 and what we hope to see in it, and the performance improvements of the AMD 6000 cards through driver updates. As usual we touch on a myriad of different topics, including direct storage and... F1. Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
Hello everyone, today, as announced, we're releasing on a Tuesday for the first time. Why and wherefore, you'll find out in the episode. As the title says, we talk about Windows 11, photovoltaics, AMD's latest invention, and much more along those lines. Have a listen and leave a follow. See you Sunday! Yours, Flo and Sven
This week we talk about what's happening in the PC world and the biggest news from the digital Computex show. Intel and Nvidia delivered, but AMD delivered the most. So did AMD win the show? All this and much more in this week's episode of TechBubbel. 00:04:05 – Thanks to TechBubbel's patrons! 00:07:03 – Nvidia RTX 3070 Ti and 3080 Ti 00:23:26 – Intel with 5G and a gaming NUC 00:36:18 – AMD's big news! 01:00:45 – Facepalm of the week 01:02:15 – Bubble of the week Executive producers: Mattias Ctrl Enqvist Fai Yu Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Daniel Timm Felix Möller Oskar Eriksson Gabriel Paues
This week we talk about what AMD's biggest wins really are as it takes processor market share from Intel. The answer is not as obvious as you might think. And what have we learned about PC game stores from the Epic vs. Apple dispute? All this and much more in this week's episode of TechBubbel. 00:01:54 – How well is AMD really doing? 00:32:24 – How the Epic Games Store buys its way forward 00:49:11 – Tips of the week 00:52:44 – Facepalm of the week 00:56:14 – Bubble of the week Executive producer: Mattias Ctrl Enqvist Thanks to TechBubbel's producers who contribute at Patreon.com/techbubbel: Kim Karlsson Daniel Timm Felix Möller Oskar Eriksson Gabriel Paues
In this week's episode, @vanwilljamz, @rain_nelson_445, @millzgp and @blcklistd host Part 2 of our Air Max Day discussions with participants from both sides of the Atlantic. Questions up for debate this week: - Bacon AM90 - CLOT Air Max 1 pt. 2 - Air Max Day Air Max 1 scratch 'n' sniff: are we fed up with this way of hunting for kicks, and what else could they do? - Can we as a community show Nike our power and change the date of Air Max Day? - Where does this AMD sit in comparison to other years? - What are your thoughts surrounding the AMD event? Should we anticipate any extra surprises? And for future AMDs, do we want the month filled with retro classics or with new Air Max innovation (or models)? Join in the discussion about trainers/sneakers every Wednesday at 8PM BST on Clubhouse and follow us on www.instagram.com/thepanel.online.
Today we discuss many different aspects of the latest GPU launch from NVIDIA, the RTX 3060. We rehash a few points, including the mining nerf (which only affects Ethereum), the lack of a Founders Edition (or MSRP card), and the uninspiring performance. Next we discuss some of the things we expect to see from AMD's upcoming event, "Where Gaming Begins: Ep. 3": a DLSS competitor perhaps, as well as the 6700 XT. Finally we discuss the latest benchmark leaks of Intel Rocket Lake. A little extra talk is saved at the end for Diablo 2: Resurrected and Blizzard's remaster train, along with EA's acquisition of Codemasters and the implications of such a giant's influence on a well-established developer (loot boxes, anyone?). Help Support Us and Get Protected With PIA. https://www.privateinternetaccess.com/pages/buy-vpn/Techill1 tilliterate@gmail.com
Appearing in this episode: Fredrik, Danny and Lotta. Here are the questions covered in today's New Year's episode! The biggest events of 2020 in the gaming world? Which games made the biggest impact on the gaming world? (Without getting into GOTY) What do we hope to see happen in 2021? The hardware market and how it was affected during the year! AMD's big comeback! The console generation says hello! The year, and future, of streaming channels! All this and much more in this year-in-review-style New Year's episode with our lovely Nördliv crew! All of us at Nördliv wish you a happy new year. Feel free to join our Discord, where we run polls on various topics and discuss everything nerdy. Sound interesting? Then click here. ★ Support this podcast on Patreon ★
After the big Assassin's Creed podcast, Tobi can finally return to the series with Valhalla! Kristina, Daniel and Nino join him. As expected, they don't agree on everything... The hardware segment covers AMD's new graphics card generation, Big Navi. -------------------------------------------------------------------------------------------------------------------------------------------- (00:00) - Intro (01:35) - Hardware: Big Navi (31:20) - Short News (32:40) - Assassin's Creed: Valhalla -------------------------------------------------------------------------------------------------------------------------------------------- PCGC Podcast Discord server: https://discord.gg/WJ9mH76
9:50 Starlink 25:06 Benchmarks confirm AMD's new dominance 46:14 meep's retro PC 1:12:01 Apple's new ARM CPU M1 1:48:12 Dom and OK COOL! Now there are three of us, and we talk about the internet, CPUs, and somehow about everything after all. Plus lots of support for Dom and the OK COOL podcast! Speakers: meep, Mohammed Ali Dad, Michael Kister Shownotes: https://www.golem.de/news/satelliteninternet-starlink-will-noch-in-diesem-jahr-in-deutschland-starten-2011-152015.html https://www.computerbase.de/2020-11/amd-ryzen-5000-test/ https://www.pcgameshardware.de/Radeon-RX-6800-XT-Grafikkarte-276951/News/Smart-Access-Memory-fuer-Ryzen-5000-soll-bis-11-Prozent-mehr-Leistung-bringen-1360977/ https://www.ebay-kleinanzeigen.de/s-anzeige/3dfx-voodoo-2-sli-bruecke-neu/1524448895-225-2898 https://www.golem.de/news/apple-silicon-was-der-m1-chip-kann-und-bedeutet-2011-152029-3.html
In today's episode we talk with CPU expert Dave about the new Zen generation, introduced six days ago with the Ryzen 5000 chips. The new CPUs score with better per-core clocks and seem to be overtaking Intel as the "best performer in video games." Dave gives us some insight into why these new chips are so good.
This week we talk about AMD's new graphics cards, which could be a dream for some and a poor choice for many. We also dive into the world of self-driving cars: the promises have been many, but how far have we actually come? This and much more in this week's episode of TechBubbel. 00:07:22 – AMD RX 6000 surprises, too 00:40:34 – Self-driving cars in 2020 01:14:24 – Tip of the week 01:36:10 – Facepalm of the week 01:36:53 – Bubble of the week
Today everything revolves around Pikmin 3 Deluxe, the GPU comparison between Nvidia's 3000 series and AMD's 6000 series, and delivery services that would be better off fed to the dog. Your nerdkeller.eu team. Homepage: www.nerdkeller.eu Contact: info@nerdkeller.eu We welcome comments, constructive criticism, and your subscription!
Among other things, we cover Cyberpunk's latest delay, Nintendo's latest Direct, AMD's new graphics cards, face plate-gate, Katana Zero, the Age of Calamity demo, Ghostrunner, Control in the cloud, and Watch Dogs Legion --- Support us on Patreon! For $5 a month you get Kontrollbehov+, 4 extra podcasts a month with a full extra hour of content in total! For $10 a month you also get access to the podcast, uncut, right after it has been recorded! www.patreon.com/kontrollbehov --- Visit our YouTube channel and subscribe: https://www.youtube.com/channel/UClQ2sTbiCcR0dqNFHwcTB0g --- Join the group Kontrollbehov - Eftersnack on Facebook: https://www.facebook.com/groups/1104625369694949/?ref=bookmarks --- Dragon Slayer by 魔界Symphony | https://soundcloud.com/makai-symphony Music promoted by https://www.free-stock-music.com Creative Commons Attribution-ShareAlike 3.0 Unported https://creativecommons.org/licenses/by-sa/3.0/deed.en_US --- Get in touch: kontrollbehovpodcast@gmail.com
The Trio Infernale is reunited. Phil is back from paternity leave and has plenty to report. From the Pixel 5 review to the new Xbox Series S/X to the open-world hacker adventure «Watch Dogs Legion», all fronts are covered. Meanwhile, Simon raves about the HBO series «I May Destroy You» and Luca keeps us up to date with big-screen news. The main topic, however, is AMD's new graphics cards, which could give Nvidia a real scare. Topics: [00:03:17] AMD RX 6000 [00:22:30] RTX 3070 reviews [00:23:11] HDMI 2.1 bug in AV receivers [00:33:24] «Cyberpunk 2077» delayed [00:40:01] «Control» and «Hitman 3» via cloud on Switch [00:41:06] Netflix is working on a live-action series based on «Assassin's Creed» [00:47:20] «James Bond» film: no release via streaming services [00:55:10] I May Destroy You [01:02:26] Ted Lasso [01:08:24] On the Rocks [01:09:25] Pixel 5 [01:17:00] Hands-on: Xbox Series S [01:21:20] Watch Dogs Legion. You can find more about the editors and the digitec podcast at digitec.ch. You can also follow us directly by clicking «Follow author» at the end of one of our articles. Philipp Rüegg on digitec or Twitter as laz0rbrain; Simon Balissat on digitec or Twitter as en_grave; Luca Fontana on digitec or Twitter as LFonta88. E-mail, website, Discord, YouTube. Music by Claudio Beck
This week we discuss the return of Supernatural! The Boys finale! Why Johnny's internet is so bad! The baseball playoffs burning as the Astros tear through them again. Jurassic World pushed! Schitt's Creek in a first-watch, almost-entire binge! New AMD chips!
The Big Bells contest results: we announce the prize winners. Radio Delta 171 is having second thoughts about longwave. NASA has been flying a Boeing plane over the VOA Greenville transmitter site to understand electromagnetic interference. DJ Wolfman Jack wants recordings of his early shows from Mexico. Blue Danube Radio is cutting back; Wolf Harranth reports on its origins as the Blue Danube Network. On station labelling systems, we revisit the IDLogic idea from Pierre Schwab and a competing system called AMDS developed by Deutsche Welle. Also, do you remember when stations were thinking of adopting single sideband (SSB) in order to save bandwidth on the shortwave dial? A few years later it was dead. Mike Bird delivers the propagation forecast this week. Lovely signoff jingle from Jim Cutler. (Diana Janssen's partner is a lawyer.)
Our colleagues have tested AMD's new CPUs as well as the new Ubuntu. We also show how to work with a webcam and a green screen. AMD has introduced new processors again. The chips sit more in the mid-range segment and keep chipping away at Intel's market power. AMD has not only introduced new Ryzen 3000 models with the Zen 2 architecture, but also reissued an older chip. Things are moving in the mobile sector as well, with the first notebooks featuring the Ryzen 4000U. Now even Intel's previously secure domain of mobile processors is under threat. Christian Hirsch explains how the processors perform and what Intel has to counter AMD. (from 1:37) Canonical has released a new Ubuntu version. Ubuntu 20.04 is again a long-term support release, succeeding 18.04. There are no revolutionary upheavals, but a few interesting new details on the desktop. Some things in Ubuntu have improved, such as scaling for high-resolution monitors; others, such as Snaps and Flatpaks, create new problems. Ubuntu wasn't the only busy project, though: the Fedora developers have also finished a new release and improved many details. Keywan Tonekaboni knows all about the new features and the small remaining quirks of both distros. (from 15:33) Video conferencing has surged in popularity, but not everyone is happy with it. If you don't want everyone to see your private study, you can replace your office with any background you like using the free software OBS Studio (Open Broadcaster Software), a cheap green screen (or, in a pinch, a blue garbage bag), and a few simple steps. This works on both Linux and Windows. Our pro streamer Liane Dubowy explains exactly how it's done. You can also see how well the setup works for gaming at https://www.youtube.com/ctzockt.
(from 37:01) Promotional mention on the occasion of c't Desinfec't: Want to take the fight to a trojan? Then grab the new c't Desinfec't with our special offer and go virus hunting: until June 4, try the new c't 12/2020 including Desinfec't plus 5 further issues, and enjoy 66 nerdy c't stickers and another gift of your choice on top. Just go to www.ct.de/virenjagd and order!
This episode covers AMD's new notebook processors, the Astro Slide, Google apps in Huawei's AppGallery, e.GO, Sailfish OS 3.3, and much more. Topics: AMD now outclasses Intel in notebook processors; Astro Slide, a new slider-keyboard smartphone; Google apps in Huawei's AppGallery; e.GO in trouble; Dud of the week: Telekom's corona app; Sailfish of the week: version 3.3. As always, enjoy listening ;)
What kind of Adaptive Mobility Device (AMD) does my student need? How will I know when they are ready for a different type, or to transition to a long cane? In this episode, we break down what AMDs are, the types of AMDs, and how to know which type is right for your students. Find the rest of the show notes on our Allied Independence blog page.
Oh yes, here we are again. This is Metercast 151. Wondering what it was about? It was about the thoroughly fantastic StudioLink. This great program is what makes it possible for us to broadcast and record the Metercast at its current quality. Then there was the follow-up on Martin's HDMI dummy and his loud 16" MacBook Pro. For further topics, please see the chapters. As always, we wish you good entertainment. 00:00:00 #met151 00:06:22 Firefly T-shirt 00:21:29 HDMI dummy report 00:22:53 StudioLink 00:31:20 16" MacBook Pro: heat and fan noise 00:34:29 Wind and power outage = no internet = scandal 00:38:30 NAS 00:44:05 Microsoft's new German data centers for Office 365 00:52:00 Coronavirus = no phones 01:15:02 AMD's 64 cores - THREADRIPPER 3990X 01:20:07 SETI 01:24:00 FIDO sticks 01:32:52 New rhythm
After sometimes fierce criticism of the draft law against hate crime, Federal Justice Minister Christine Lambrecht (SPD) is backing down somewhat: the draft is to be amended with a clarification that passwords must continue to be stored in encrypted form, the minister assured ZDF. It is also to be made clear that handing over passwords will only be permitted for prosecuting the most serious crimes, such as child abuse, murder, or terrorism. The dispute over Germany's receipt mandate continues. In theory, no receipt would need to be printed at all: the law expressly allows the new receipts to be transmitted electronically. The legal situation is technology-neutral: whatever the customer can receive is permitted, as long as all relevant information, including the cryptographic signature, is transmitted legibly. Several startups are already working on digital solutions, but interest is low. Following Telekom and Vodafone, Telefónica is now also upgrading its prepaid offering. The "O2 My Prepaid" tariffs get more data volume at no extra charge, and one tariff even becomes significantly cheaper: the largest tariff, My Prepaid L, goes from 5 GB to 7.5 GB of data, while the price drops by 5 euros. Chipmaker AMD passed the 2-billion-US-dollar mark for the first time in the fourth quarter of 2019, taking in around 2.13 billion US dollars, of which 170 million remained as profit. AMD's improved figures come mainly from the Computing and Graphics segment, which covers all consumer products such as Ryzen processors and Radeon graphics cards. Not only the data protection and PR disaster at the Buchbinder group and the million-euro fines for 1&1 and Deutsche Wohnen show it: the grace period for implementing the GDPR is over.
The new edition of the special issue "c't DSGVO 2020" aims to protect those responsible from expensive fines or damage claims with practical guides, checklists, and sample documents. These and other current news items can be found in detail at heise.de
In the current Uplink episode we discuss how to spot fakes on the net, what the AMD Ryzen 9 3950X is good for, and how Linux runs on our "optimal PCs". ///////////////////////////// On the internet, people lie through their teeth: not only in the form of disinformation articles, but also with forged images and videos. Jo Bager explains how to spot such fakes and, when in doubt, verify them. You don't have to resort to lengthy technical analyses such as comparing camera-sensor noise patterns; you can start with some very simple checks. Christian Hirsch has put the new 16-core CPU AMD Ryzen 9 3950X through its paces. In the Uplink discussion he reports what the expensive AMD processor does better than its current Intel rivals, and above all whether the hefty price of 820 euros is justified. He also explains which applications benefit from 16 cores and which do not. (Spoiler: if you mainly want to game, you don't need 16 cores.) c't Linux expert Thorsten Leemhuis has installed Linux -- specifically, on the current builds of the "optimal PC" that c't designs every year. The Linux tests yielded interesting findings, for example that a mostly unused USB-C controller on current Nvidia RTX graphics cards causes problems. Some of the configurations also draw too much power under Linux or produce errors with NVMe SSDs -- fortunately none that lead to data loss. Featuring: Jo Bager, Christian Hirsch, Thorsten Leemhuis, Jan-Keno Janssen. Background on the demonstration mentioned at the end of the episode: https://www.ndr.de/nachrichten/niedersachsen/hannover_weser-leinegebiet/Verwaltungsgericht-Hannover-erlaubt-NPD-Demo,npddemo146.html c't 25/2019 is available at newsstands, in the browser, and in the c't app for iOS and Android.
In this milestone episode, we put host Hugh Duffy in the hot seat! Over the last 50 episodes we’ve covered a number of niches, from cannabis to craft beer to divorce to fitness, and talked with the accounting profession’s biggest influencers and thought leaders. Tune into this special show to hear Hugh share his highlights and most memorable takeaways, what surprised him the most, and what he’s learned. Get ready for a trip down memory lane as we talk about some of our guests and the incredible insight they’ve brought to AMDS. And you’ll also learn more about Hugh! Join us!
In this episode, JC talks about the recent volatility driven by last week's Fed meeting, the new China tariff threats, and the inverting yield curve. In addition, he talks about a couple of the earnings reports we have seen over the past couple of weeks, including Advanced Micro Devices (AMD), Apple (AAPL), and Beyond Meat (BYND). He also talks about AMD's new chip development, which sent the stock soaring this week. www.ChamesEntertainment.com
Intellectual property is a company's most valuable asset, which is why it should always be protected. Otherwise, employees come in, learn all your systems and processes, and then leave. The next thing you realize, there's a competitor down the road being run by that former employee. Yet many companies don't do a good job of securing their intellectual property, because they don't understand this. Art Nutter, the founder, chairman, and CEO of PatentBooks and a company called TAEUS, delves deep into the subject of intellectual property, bringing to light the significance of patents. He also shares how PatentBooks and TAEUS were created and what they are all about. How To Protect Your Intellectual Property with Art Nutter: We have Art Nutter. He's the Founder, Chairman and CEO of PatentBooks and TAEUS (https://taeus.com/). People will go, “PatentBooks, I got. What is TAEUS?” People think it's some Greek god of knowledge. I was going to invent that, but it's an acronym for Take Apart Everything Under the Sun. That company was one that I started back in 1992, when the United States and big companies were thinking that foreign competition was copying their intellectual property. I approached AT&T one day and said, “You acquired NCR. Do you know what patents you acquired in that acquisition?” They said, “No.” I said, “I can tell you.” We proceeded to do that and turned AT&T, and subsequently IBM, into patent-licensing powerhouses. IBM went from $100 million in licensing income to $2 billion over the space of ten years. I think about that and say, “You started TAEUS.” Yes, I did. Out of the garage with frequent-flyer miles, a laptop computer, a laser printer, and that was it. Did you have experience in patents, or in taking things apart? Yes, I've always taken things apart. We grew up on a farm. If something broke, we had to figure out how to fix it.
I’ve had an inherent interest and curiosity in doing those things. However, about the last job I had as an employee before I started TAEUS: I tended to get fired from most companies because I was usually the guy making too much money. That was my crime. I doubled the sales of this particular company in the space of nine months. This company was reverse engineering semiconductor memory chips. I discovered that there was a marketplace very interested in this reverse-engineering data: the intellectual property marketplace, because, like I said, in the late ‘80s and early ‘90s US companies were thinking, “We’re getting killed by foreign competition. They must be copying our stuff, our ideas and our inventions.” In fact, they were, and this was an easy way for them to document other companies’ copying of their circuits. They literally would reverse engineer the circuits and then duplicate them exactly. It was government-funded in Japan, Korea, Taiwan and so forth. They wanted to establish semiconductor industries for themselves, and as we know from history, US semiconductor companies are now few and far between. You have the Intels, the AMDs and in-house semiconductor capabilities, but nowhere near the volume of semiconductor companies that existed here in the 1980s. Those businesses are now done overseas in Asia. “Learn all the time because you're kidding yourself anytime you think you know everything; you'll be blindsided.” I’m thinking about how you started out with your laptop in your garage and got called to AT&T. Walk us...
Our geologic survey of Planet CES is done for another year, cadets, and this week Charles and Mike issue their reports on what was really big and surprisingly small, the cool stuff and the "huh?" moments, and go into some detail on the big themes this year, from Thunderbolt 3 and USB-C to HomeKit stuff and of course various security solutions. They also welcome back one of the most beloved "spurious use of Bluetooth" devices we covered from back in the MacNN days, the Numi luxury toilet -- now "flush" with new features. Never fear, however: there is a lot of tech news outside of CES to cover, and of course we get to that as well. From pointless Congressional investigations into Apple's battery-replacement program to where (and where not) to get your battery replaced; from a new vote coming up to save net neutrality to the FBI's desire to make us all less secure for their own personal convenience; from investors that fear "for the children" but were apparently unaware that Apple has parental controls to Microsoft's problematic but 100 percent effective fix for the Spectre and Meltdown threats to AMD-based PCs (and AMD's incredibly two-faced responses to this security crisis); from the Jimmy Iovine rumor to our pronouncement that AirPort is on life support to the dramatic turn in the Apple vs VoIP-Pal case, the Space Javelin crew are ever-vigilant for tech news and shenanigans not involving the power or flood levels of a certain convention center in Las Vegas. All this and the products from CES we'd actually consider buying for ourselves plus a whole lot more -- including a shout-out to the new cadets just joining the ship (no hazing from the senior staff, you guys), so buckle up for insight, analysis, head-scratching oddities and other silliness this week.
Following the August 30, 2017, Iowa Board of Pharmacy meeting, episode #9 discusses several rules that the board has noticed for intended action relating to the prescriber-patient relationship, storage of prescription files, and AMDS requirements.
With special guest and returning host Carl Minor in the house, we talk about the possibility of Will Smith in Aladdin, Avatar coming back to the big screen, the Radeon Pro Duo forging AMD's comeback, the NES Classic cancellation, StarCraft for free, an Uber of a mess, and ad blockers in Chrome. Next we talk games, including Warhammer 40k: Dawn of War 3, DuckTales Remastered, Unreal Tournament, and Carl's favorite, Street Fighter V! Finally we talk a bit of TV with Cosplay Melee and Better Call Saul.
Today the podcast from Nerdistan gets hands-on: we talk hardware. Christian Hirsch gives us the full rundown on AMD's new CPU, Ryzen. We look at what the "Intel killer" delivers. Anyone buying a new processor will probably also want to upgrade their graphics card. Martin Fischer gives us an overview of the current alternatives, up to the brand-new performance monster GeForce GTX 1080 Ti. The card comes straight from the mail into the show. We also have Nintendo's new console, the Switch. Nintendo fan Dennis Schirrmacher is particularly taken with the launch title The Legend of Zelda. Martin is bothered by the transformer console's lacking graphics performance, though that does nothing to diminish the adventure's gripping atmosphere. Featuring: Martin Fischer, Dennis Schirrmacher, Fabian Scherschel and Christian Hirsch. c't 5/17 is available at newsstands, in the heise shop, and digitally in the c't app for iOS and Android. All earlier episodes of our podcast can be found at www.ct.de/uplink.
Participating this time: Maxx, Robb, Carl and Danny. In our hundred-and-eighth episode we cover the following! GAMES IN FOCUS (Time: 0h 03m 17s) In this week's episode, Carl tells us how a free-to-play game like Hearthstone is kept interesting, and Robb shares his experiences with Black Mesa. NEWS (Time: 0h 20m 57s) This week's news: the Nintendo Switch's battery life is revealed, Carl explains AMD's new CPU, and a streamer died after playing World of Tanks for 22 hours. DISCUSSION OF THE WEEK (Time: 0h 40m 27s) Best sound design and best soundtrack in games? Best game composer? OTHER NERD TOPICS (Time: 1h 18m 53s) Danny tells us how much he liked the latest Resident Evil film, and Robb has seen Passengers. LISTENER MAIL (Time: 1h 42m 08s) We take a couple of questions: a recommendation for third-person action games, and how For Honor looks one week after its launch. info@nordlivpodcast.se★ Support this podcast on Patreon ★
CES has come and gone; hear Chris's take on the coverage, including his long-winded opinion about the Nintendo Switch. He also goes into AMD's new processor chip.
Everyone would like more time at work, at least now and then. In the latest episode of c't uplink, Dorothee Wiegand explains which tools and programs can deliver it. There are not only various concepts for working more effectively, but also very practical tools that let small and large teams organize themselves. In the end, though, everyone still has to decide for themselves what will really help. Jürgen Schmidt then warns of the next looming threat: with crypto-trojans for PCs already one of the big topics of the year, data on smartphones is now also being encrypted and supposedly released only for ransom. This malware is likely to spread further in the coming weeks. Finally, Martin Fischer sums up the problems that followed the launch of AMD's new Radeon RX 480 graphics card, and we discuss whether they alone are reason enough not to buy one. c't 15/16 is available at newsstands, in the heise Shop and digitally in the c't app for iOS and Android. All earlier episodes of our podcast are at www.ct.de/uplink
Participating this time: Fredrik, Danny and Calle. In our fifty-seventh episode we cover the following! GAMES IN FOCUS Calle shares an anecdote from Squad, and Fredrik talks Diablo 3. - NEW SEGMENT! - THE CHALLENGE! We challenge each other to play a particular game! Every week someone is randomly challenged! This week Danny challenges someone! Listen on to find out what this means :D Gaming news discussed: this time we talk about Starbreeze's VR arcade, AMD's GPUOpen initiative, and EA skipping E3. Discussion of the week: what gives a game cult status? Other nerd topics: we discuss the two films "The Forest" and "Jag är Ingrid". info@nordlivpodcast.se★ Support this podcast on Patreon ★
This week with Max, Robb, Danny, Fredrik and Lotta. In our eleventh episode we go through what we've been playing, games such as Evolve, Fist of Jesus, Escape Dead Island and much more! We also talk about news like AMD's new graphics cards and Total War: Warhammer. Among the other nerd topics, this time we get an 11-year-old's perspective as one of the podcast's daughters takes on the anime "Full Metal Alchemist: Brotherhood". But above all, there's talk of the "Game of the Week", which this week was "FlatOut: Ultimate Carnage". info@nordlivpodcast.se★ Support this podcast on Patreon ★
This show sees us talking about NZ vs Aus broadband, Flash moving away from the mobile space, the iPhone 4S on both Telecom and Vodafone, Siri, Blu-ray players, Steam, the new Channel Partner podcast, the Kindle Fire launch and AMD's new 16-core CPU. Running time: 0:57:02