POPULARITY
Attention please: do not, under any circumstances, buy the RTX 5060 Ti with 8GB of VRAM! Thank you. If that is the only thing you take away from this episode, we are happy. Of course there are more great topics, such as our overview of the reviews of the 5060 Ti with 16GB, which at 450 euros is somehow okay in the current market, even if not actually good. There are also rumors about the AMD Radeon 9070 GRE, which is expected to slot in slightly below the 9070 (non-XT), and below that in turn the 9060 (XT), which is due to launch at the end of May. The CVE (Common Vulnerabilities and Exposures) vulnerability database was acutely facing shutdown, which was averted at short notice by a contract extension, at least for another year. In response, the EU promptly brought its own alternative, EUVD, online, an important step toward independence from the USA. Over there, pre-orders for the Nintendo Switch 2 are now set to start on April 24; the console itself and the bundle with Mario Kart World stay at 450 and 500 dollars respectively, but the accessories are getting somewhat more expensive. Enjoy episode 253! Speakers: Meep, Michael Kister, Mohammed Ali Dad. Audio production: Michael Kister. Video production: Mohammed Ali Dad. Title image: Image sources: Recording date: 18.04.2025. Visit us on Discord https://discord.gg/SneNarVCBM, on Bluesky https://bsky.app/profile/technikquatsch.de, on TikTok https://www.tiktok.com/@technikquatsch, on Youtube https://www.youtube.com/@technikquatsch, on Instagram https://www.instagram.com/technikquatsch, on Twitch https://www.twitch.tv/technikquatsch. RSS feed https://technikquatsch.de/feed/podcast/ Spotify https://open.spotify.com/show/62ZVb7ZvmdtXqqNmnZLF5u Apple Podcasts https://podcasts.apple.com/de/podcast/technikquatsch/id1510030975
00:00:00 Topics: Nvidia RTX 5060 Ti with 16GB in the reviews, the 8GB version withheld by Nvidia; impending shutdown of the CVE vulnerability database averted at short notice, European alternative EUVD now online; rumors about the AMD Radeon RX 9070 GRE, release of the 9060 (XT) likely at Computex at the end of May; Switch 2 pre-orders in the USA from April 24, accessories more expensive due to tariffs 00:01:37 Mike builds a PC and gets new summer tires 00:06:16 Plans for Stay Forever Con Süd in Karlsruhe; Mo vacations in Rome 00:10:21 Mike now has a car battery charger, a battery tester, and a power-bank jump starter, and soon a new car battery 00:13:07 Meep's problems with a payment app for refueling and her encounter with the police; problematic app https://www.ryd.one/de-de/ryd-pay/ ADAC app https://www.adac.de/services/apps/drive/ 00:26:26 Nvidia RTX 5060 Ti with 16GB in the tests; do not buy the 5060 Ti with 8GB under any circumstances! https://www.computerbase.de/artikel/grafikkarten/nvidia-geforce-rtx-5060-ti-16-gb-test.92119/ Gamers Nexus: More Marketing BS: NVIDIA GeForce RTX 5060 Ti Review & Benchmarks vs GTX 1060, 4060 Ti, & More https://www.youtube.com/watch?v=Cskegn1-D7s Hardware Unboxed: GeForce RTX 5060 Ti 16GB Review & Benchmarks - Not Great, Not Terrible https://www.youtube.com/watch?v=B6qZwJsp5X4 Hardware Unboxed: RTX 5060 Ti 8GB - Instantly Obsolete, Nvidia Screws Gamers https://www.youtube.com/watch?v=AdZoa6Gzl6s 00:41:22 Rumors about the Radeon 9070 GRE; 9060 (XT) likely at Computex at the end of May https://www.computerbase.de/news/grafikkarten/great-radeon-edition-amd-soll-noch-vor-rx-9060-xt-eine-rx-9070-gre-vorbereiten.92041/ https://www.computerbase.de/news/grafikkarten/radeon-rx-9060-xt-mehr-details-zum-gegner-der-nvidia-geforce-rtx-5060-ti.92190/ 00:45:58 Nvidia driver 576.02 with a huge list of fixes; update: temperature monitoring faulty after sleep or standby https://videocardz.com/newz/nvidia-releases-massive-gpu-driver-update-addressing-stability-and-black-screen-issues https://www.computerbase.de/news/grafikkarten/geforce-rtx-zu-warm-kalt-neuer-treiber-hat-einen-bug-bei-der-temperatur-ueberwachung.92262/ 00:48:14 Vulnerability database CVE (Common Vulnerabilities and E...
All roads lead to MacBook Air. johnny@geektherapyradio.com
07/02/25 - iPhone with 16GB, Apple and Yahoo, iPhone on Verizon, AppleCare getting more expensive, Apple Glasses, Apple sells parts, M5 on the way, iPhone SE 4 next week, Apple tariffs in China, malware apps removed, Apple robot, https://www.doctorapple.com.br
This show has been flagged as Clean by the host. What Tech would I spend my £2000 on: This episode took inspiration from episode 134 of the Linux Lads podcast. Pilet £295 + Pi 5 (16GB) £114.90 = £409.90; Juno Tab 3 £631.75; FairPhone 5 £599; Donations (£89.83 each): Mastodon.me.uk, Open Rights Group, archive.org https://archive.org/donate, HPR Hosting. Provide feedback on this episode.
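For reference, a quick sketch of the arithmetic behind the budget above, assuming the leftover from the £2000 is split evenly across the four listed recipients (the figures are copied from the show notes; the split is my reading of them):

```python
# Rough sanity check of the £2000 budget quoted above (floating point, so values are approximate).
budget = 2000.00
hardware = 295.00 + 114.90 + 631.75 + 599.00   # Pilet + Pi 5 (16GB) + Juno Tab 3 + FairPhone 5 ≈ £1640.65
donations = budget - hardware                   # ≈ £359.35 left over for donations
per_recipient = donations / 4                   # ≈ £89.84 across the four recipients (quoted as £89.83)
```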
video: https://youtu.be/GcjI5fNAsbI This week we are going to discuss the latest Linux kernel and all its new features…we're also going to talk about how you can win a prize from Linus himself! Welcome to Destination Linux, where we discuss the latest news, hot topics, gaming, mobile, and all things Open Source & Linux. We will also be discussing Raspberry Pi's latest hardware release and some pretty gnarly phishing scams. Now let's get this show on the road toward Destination Linux! Forum Discussion Thread (https://destinationlinux.net/forum) Download as MP3 (https://aphid.fireside.fm/d/1437767933/32f28071-0b08-4ea1-afcc-37af75bd83d6/4e858fd3-1f21-4a86-889a-7525772df672.mp3) Support the show by becoming a patron at tuxdigital.com/membership (https://tuxdigital.com/membership) or get some swag at tuxdigital.com/store (https://tuxdigital.com/store) Hosted by: Ryan (DasGeek) = dasgeek.net (https://dasgeek.net) Jill Bryant = jilllinuxgirl.com (https://jilllinuxgirl.com) Michael Tunnell = michaeltunnell.com (https://michaeltunnell.com) Chapters: 00:00 Intro 01:46 Community Feedback 05:51 Sandfly Security 08:26 Kernel Magic: What's New in Linux 6.13 18:31 AMD's Open-Source Boost for Wayland 24:28 Sweet Sixteen: Raspberry Pi 5 Gets 16GB 29:13 Text Scams Beware: Bypassing Tricks 37:40 Gaming: Slay the Princess 42:24 Software Spotlight: Open-ish SIEM software 46:39 Tip: NMAP 51:48 Support the Show 54:07 Outro Links: Community Feedback https://destinationlinux.net/comments (https://destinationlinux.net/comments) https://destinationlinux.net/forum (https://destinationlinux.net/forum) Sandfly Security https://destinationlinux.net/sandfly (https://destinationlinux.net/sandfly) Kernel Magic: What's New in Linux 6.13 https://kernelnewbies.org/Linux_6.13 (https://kernelnewbies.org/Linux_6.13) AMD's Open-Source Boost for Wayland https://gitlab.com/acs-wayland/weston/-/wikis/home/ (https://gitlab.com/acs-wayland/weston/-/wikis/home/) https://www.phoronix.com/news/AMD-AMDGPU-Composition-Stack (https://www.phoronix.com/news/AMD-AMDGPU-Composition-Stack) Sweet Sixteen: Raspberry Pi 5 Gets 16GB https://www.raspberrypi.com/news/16gb-raspberry-pi-5-on-sale-now-at-120/ (https://www.raspberrypi.com/news/16gb-raspberry-pi-5-on-sale-now-at-120/) Text Scams Beware: Bypassing Tricks https://www.msn.com/en-us/news/technology/hackers-have-devised-a-simple-text-scam-to-bypass-apple-s-iphone-protections/ar-BB1ropei (https://www.msn.com/en-us/news/technology/hackers-have-devised-a-simple-text-scam-to-bypass-apple-s-iphone-protections/ar-BB1ropei) Gaming: Slay the Princess https://store.steampowered.com/app/1989270/Slay_the_Princess__The_Pristine_Cut/ (https://store.steampowered.com/app/1989270/Slay_the_Princess__The_Pristine_Cut/) Software Spotlight: Open-ish SIEM software https://graylog.org/products/source-available/ (https://graylog.org/products/source-available/) Tip: NMAP https://github.com/nmap/nmap (https://github.com/nmap/nmap) Support the Show https://tuxdigital.com/membership (https://tuxdigital.com/membership) https://store.tuxdigital.com/ (https://store.tuxdigital.com/)
Topics: The big event: the Samsung S25 series; Samsung Galaxy S25 Edge in April; the Galaxy S25 is boring and expensive but will be a success; Galaxy S25 Ultra with 16GB only in Asia; Samsung makes Europe second class; Emacs on Android: app in the F-Droid.org store; Google and the Pixel 4a update; what is going on at Google?; Xiaomi 15 Ultra is coming worldwide; and much more. App of the week: Infinity Nikki. Community: The Netcasts | follow along at: @thenetcasts@mastodon.africa
This week's Electromaker Show is now available on YouTube and everywhere you get your podcasts! Welcome to the Electromaker Show episode 166! This week we reflect on our huge Red Pitaya EduPack giveaway, ask who will buy the new Pi5 16GB variant, and look at a quite specialized ESP32 C61 devkit! Tune in for the latest maker, tech, DIY, IoT, embedded, and crowdfunding news stories from the week. Watch the show! We publish a new show every week. Subscribe here: https://www.youtube.com/channel/UCiMO2NHYWNiVTzyGsPYn4DA?sub_confirmation=1 We stock the latest products from Adafruit, Seeed Studio, Pimoroni, Sparkfun, and many more! Browse our shop: https://www.electromaker.io/shop Join us on Discord! https://discord.com/invite/w8d7mkCkxj​ Follow us on Twitter: https://twitter.com/ElectromakerIO Like us on Facebook: https://www.facebook.com/electromaker.io/ Follow us on Instagram: https://www.instagram.com/electromaker_io/ Featured in this show: Raspberry Pi 5 16GB released! Embedded World 2025 is coming up! ESP32 C61 Devkit is Here Red Pitaya Electromaker Educator Red Pitaya Prize Winner Announced! Arduino Portenta ProtoKit
In this episode we paid tribute to Steve Langasek, one of the people who contributed the most to the Ubuntu and Debian communities; we talked a lot about our experience with Ubuntu Touch, its latest convergence developments based on Noble Numbat (very promising!) built by its generous community, and the future of phones running that OS; we finalized the last details for PODES and ECTL, and we also discussed whether it is worth spending a pile of money on the new Raspberry Pi 5 with 16GB.
The first Patch Tuesday of 2025 brings temporary but sweet relief. There were no preview updates last month, so this month is just security/bug fixes. Plus, will Microsoft back down from its Windows 10 EOL line in the sand? It appears not. Finally, got New Year's resolutions? Paul's got a better idea. Maybe... Windows New Canary build with nothing in it Microsoft will not support Office on Windows 10 after October EOL date Microsoft is auto-installing the new Outlook in Windows 10 too - Don't let the door hit you on the way out Parallels Desktop for Mac now supports running x86 Windows VMs. Very slowly AI, Microsoft 365 New business models for AI emerge in 2025. Pay as you go vs. pay or no pay Microsoft has shifted its business model strategy over two years 15 months of Copilot in Windows: Madness Microsoft announces pay-as-you-go AI agents Google has a different (better) take with "the best of Google AI" in Workspace What to expect: Price hikes on subscription services to pay for this AI Massive reorg of Microsoft's engineering groups is all about AI - There are PM-level layoffs happening now all over Microsoft OpenAI adds tasks in beta to ChatGPT Microsoft Excel in Windows will soon support dark mode. Wait, what? Hardware Surface teases a big announcement on January 30 AMD surges on incredible new x86 chips and Intel's epic fail Former Surface design lead Ralf Groene joins Panos at Amazon. Why? Arm Holdings plans massive licensing price hikes. Everyone needs to settle the F down There's a 16 GB Raspberry Pi 5 now. But at this price, a low-end NUC is the better choice for most Dev .NET 9.0.1 arrives - and with it, the fix for the app-crashing WPF/Windows 11 theming bug Paul tried GitHub Copilot in Visual Studio. It is MAGIC Xbox Microsoft is clearly planning something big for gaming handhelds. Windows or Xbox? Or both? Happy New Year, Xbox fans! Microsoft to bring more Xbox exclusives to PS, Switch Xbox to host a Developer Direct event next week Microsoft introduces new Xbox repair options The Nintendo Switch 2 is leaking all over the place, and there were prototypes at CES Tips and Picks Tip of the week: Think about making micro changes each month or quarter instead of huge, sweeping changes once a year App pick of the week: Start11 v2.5 RunAs Radio this week: DevOpsDocs with Mattias Karlsson Brown liquor pick of the week: Buchanan's Deluxe 12 Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Sponsor: cachefly.com/twit
This week we talk browsers, with coverage of the Servo updates and the new Supporters of Chromium group in the Linux Foundation. The Raspberry Pi 5 now has a 16GB model, and not everyone is happy about it. KDE Plasma 6.3 has a public beta, Flatpak has released version 1.16, and Mint is on the cusp of releasing version 22.1. For tips we have kshift for quick or automated KDE re-theming, php -S for local php site testing, a quick tar howto, and pipewire-pulse for more pipewire and pulse audio fun. You can find the show notes at https://bit.ly/3BUzLqV Enjoy! Host: Jonathan Bennett Co-Hosts: Rob Campbell, Ken McDonald, and Jeff Massie Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
True Cheating Stories 2023 - Best of Reddit NSFW Cheating Stories 2023
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver.Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: “efficient models”, “retentive networks”, “subquadratic attention” or “linear attention” but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture:So, for lack of a better term, we decided to call this segment “the State of Post-Transformers” and fortunately everyone rolled with it.We are fortunate to have two powerful friends of the pod to give us an update here:* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year* Recursal AI: with CEO Eugene Cheah who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers. We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternativesFull Talk on YoutubePlease like and subscribe!LinksAll the models and papers they picked:* Earlier Cited Work* Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention* Hungry hungry hippos: Towards language modeling with state space models* Hyena hierarchy: Towards larger convolutional language models* Mamba: Linear-Time Sequence Modeling with Selective State Spaces* S4: Efficiently Modeling Long Sequences with Structured State Spaces* Just Read Twice (Arora et al)* Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. 
However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. * To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.* Jamba: A 52B Hybrid Transformer-Mamba Language Model* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. * This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. 
Core designs include: * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. * (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. * As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. * RWKV: Reinventing RNNs for the Transformer Era* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability. * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference. * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.* LoLCATs: On Low-Rank Linearizing of Large Language Models* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs. * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute. * We base these steps on two findings. * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA). * LoLCATs significantly improves linearizing quality, training efficiency, and scalability. 
We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU. * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens. * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work). * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.Timestamps* [00:02:27] Intros* [00:03:16] Why Scale Context Lengths? or work on Efficient Models* [00:06:07] The Story of SSMs* [00:09:33] Idea 1: Approximation -> Principled Modeling* [00:12:14] Idea 3: Selection* [00:15:07] Just Read Twice* [00:16:51] Idea 4: Test Time Compute* [00:17:32] Idea 2: Hardware & Kernel Support* [00:19:49] RWKV vs SSMs* [00:24:24] RWKV Arch* [00:26:15] QWRKWv6 launch* [00:30:00] What's next* [00:33:21] Hot Takes - does anyone really need long context?Transcript[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks Our next keynote covers the State of Transformers alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Chia of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Veepal Vedprakash introducing them.[00:00:49] AI Charlie: And CTO CE Zhang joining us to talk about how they are building together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from Red Pajama V2, Flash Attention 3, Mamba 2, Mixture of Agents.[00:01:15] AI Charlie: Based, Sequoia, Evo, Dragonfly, Danfoo's Thunder Kittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1. 5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on device, end Energy Usage Sensitive Windows Copilot Use Cases and has launched the first updates on RWKV v6, codenamed Finch and Goldfinch.[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRdata UKv6, a QEN32B model [00:02:00] modified with RDWKV linear attention layers. Eugene has also written the most single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.[00:02:27] Intros[00:02:27] Dan Fu: Yeah, so thanks so much for having us. 
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean Having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI 01 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit Interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little Dolly 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, But for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. 
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
Actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators that not only were you sure, you're not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called states based model. So here the seminal work is, is one about our work queue in 2022.[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.[00:09:33] Idea 1: Approximation -> Principled Modeling[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example a transformer like Next Token Prediction Architecture. So some of those early states-based model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principle theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality and. When this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like long range arena, some long sequence evaluation benchmarks, There was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that What's so influential about these states based models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't paralyze as well as detention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. 
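Since the reordering trick Dan walked through a moment ago comes up again throughout the talk, here is a minimal sketch of it: standard attention versus a linear-attention variant that drops the softmax, applies the elu+1 feature map from the "Transformers are RNNs" paper cited in the list above, and reassociates the matrix products so no N x N matrix is ever built. Shown in its non-causal form for brevity; causal generation needs a running (prefix-sum) version of the same idea.

```python
import torch

def softmax_attention(q, k, v):
    # Standard attention: materializes an (N x N) score matrix, so compute and memory grow as O(N^2).
    scores = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
    return scores @ v

def linear_attention(q, k, v):
    # Drop the softmax, apply a feature map (elu + 1), and reassociate the products:
    # phi(Q) (phi(K)^T V) never builds an N x N matrix, so compute grows as O(N * d^2) instead of O(N^2 * d).
    phi = lambda t: torch.nn.functional.elu(t) + 1
    q, k = phi(q), phi(k)
    kv = k.transpose(-1, -2) @ v                                     # (d, d) summary of keys and values
    normalizer = q @ k.sum(dim=-2, keepdim=True).transpose(-1, -2)   # (N, 1)
    return (q @ kv) / (normalizer + 1e-6)

q, k, v = (torch.randn(1024, 64) for _ in range(3))  # sequence length 1024, head dim 64
out_quadratic, out_linear = softmax_attention(q, k, v), linear_attention(q, k, v)
```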
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like flash fft conf, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, We are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because languages, It's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these hyena models were One way you can do this is by just adding some simple element wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
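To make the gating and selection idea just described a bit more concrete, here is a toy sketch of an element-wise gated recurrence whose decay and input gate depend on the current token, which is the "selection" notion behind H3, Hyena gating, and Mamba. This is an illustration only: it is not the actual H3 or Mamba parameterization, and real implementations use principled SSM discretizations plus hardware-aware parallel scans rather than this naive Python loop.

```python
import torch
import torch.nn as nn

class ToySelectiveSSM(nn.Module):
    """Toy selective (gated) state-space layer -- a sketch of the idea, not a real SSM implementation."""
    def __init__(self, d_model=64, d_state=16):
        super().__init__()
        self.inp = nn.Linear(d_model, d_state)
        self.decay = nn.Linear(d_model, d_state)   # data-dependent "A"-like decay
        self.gate = nn.Linear(d_model, d_state)    # data-dependent input gate ("B"-like)
        self.out = nn.Linear(d_state, d_model)

    def forward(self, x):                          # x: (seq_len, d_model)
        h = x.new_zeros(self.inp.out_features)
        ys = []
        for x_t in x:                              # sequential recurrence: O(N) in sequence length
            a = torch.sigmoid(self.decay(x_t))     # how much of the old state to keep, per channel
            b = torch.sigmoid(self.gate(x_t))      # how much of the new token to write, per channel
            h = a * h + b * self.inp(x_t)          # element-wise gated state update
            ys.append(self.out(h))
        return torch.stack(ys)

y = ToySelectiveSSM()(torch.randn(128, 64))
```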
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. 
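The Just Read Twice trick Dan describes is simple enough to sketch directly: repeat the document so a fixed-state recurrent model gets a second pass over it once the question is known. The function name and prompt wording below are illustrative, not the paper's exact template.

```python
def just_read_twice_prompt(document: str, question: str, repeats: int = 2) -> str:
    """Build a JRT-style prompt: repeat the context so a fixed-state recurrent model
    sees the document again after it already knows what the question is asking for."""
    block = f"Document:\n{document}\n\nQuestion: {question}\n\n"
    return block * repeats + "Answer:"

prompt = just_read_twice_prompt("<long article text>", "What year was the bridge built?")
```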
So, so the four key ideas are instead of just doing a simple linear attention approximation, instead take ideas that we know from other fields like signal processing, do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want. Hardware and kernel support from day one. So, so even if your model is theoretically more efficient if somebody goes and runs it and it's two times slower one of the things that, that we've learned is that if, if you're in that situation, it's, it's just gonna be dead on arrival.[00:17:49] Dan Fu: So you want to be designing your architectures one of the key, key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state and, and really focus on that as a key decider of quality. And finally, I think one of the, the, the emerging new, new things for, for this line of work and something that's quite interesting is, What are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to relative to what you might do for a standard transformer? I'll briefly end this section. So I've labeled this slide where we are yesterday because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of the, these efficient alternative models were so AI2 trained this hybrid MOE called Jamba.[00:18:40] Dan Fu: That, that, that seems, that is currently the state of the art for these non transformer architectures. There's this NVIDIA and MIT put out this new diffusion model called SANA recently that one of their key key observations is that you can take a standard diffusion transformer diffusion model, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger much larger images, much, much Much larger sequences more efficiently.[00:19:07] Dan Fu: And and one thing that I don't think anybody would have called when a few years ago is that one of those gated SSM, gated states based models ended up on the cover of Science because a great group of folks went and trained some DNA models. So that's Michael Polley, Eric Yuen from from Stanford and the Arc Institute.[00:19:26] Dan Fu: So it's, we're really at an exciting time in 2024 where these non transformer, post transformer architectures are showing promise across a wide range. Across a wide range of, of modalities, of applications, and, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. Yeah. So, I think one common questions that we tend to get asked, right, is what's the difference between [00:20:00] RWKV and state space? So I think one of the key things to really understand, right the difference between the two groups, right, is that we are actually more like an open source, random internet meets academia kind of situation.[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we, we basically look at RNNs and linear intention when intention is all you need came out, and then we decided to like, hey there is a quadratic scaling problem. Why don't we try fixing that instead? 
So, so, so we end up developing our own branch, but we end up sharing ideas back and forth.[00:20:30] Eugene Cheah: So, and, and we do all this actively in Discord, GitHub, etc. This was so bad for a few years, right, that basically, the average group's H index was so close to zero, right, Illuter. ai actually came in and helped us write our first paper. Great, now our H index is now three, apparently. So, so, so, but, but the thing is, like, a lot of these experiments led to results, and, and, essentially, essentially, we we took the same ideas from linear attention, [00:21:00] and we built on it.[00:21:01] Eugene Cheah: So, to take a step back into, like, how does RWKB handle its own attention mechanic and achieve the same goals of, like, O and compute, respectively, and in focus of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute, that's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train to even 200 languages to cover all languages in the world. But at the same time, we work on this architecture, To lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKB break the dependency of LSTM token flow? Because I think to understand architecture, right, it's probably easier to understand it from the RNN lens.[00:21:46] Eugene Cheah: Because that's where we built on. We all, we all state space kind of like try to, try to start anew and took lessons from that and say, So there's a little bit of divergence there. And AKA, this our version of linear attention. So to take step back [00:22:00] all foundation models, be it transformers or non transformers at a very high level, right?[00:22:05] Eugene Cheah: Pumps in the token. I mean, text that things into embeddings and go through a lot of layers. Generate a lot of states where the QKV cache or be iron in states or RW KB states. And outputs and embedding, they are not the same thing. And we just take more layers and more embeddings. And somehow that magically works.[00:22:23] Eugene Cheah: So, if you, if you remember your ancient RNN lessons which we, which we, which we we call best learning these days the general idea is that you have the embedding information flowing all the way up, and when, and you take that information and you flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: So, this is how it generally works. Kapati is quoted saying that RNNs are actually unreasonably effective. The problem is this is not scalable. To start doing work on the second token, you need to wait for the first token. And then you need to, and likewise for the third token and fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So, so, so, you [00:23:00] can have a H100 and you can't even use 1 percent of it. So, so that's kind of why RNNs didn't really take off in the direction that we wanted, like, billions of parameters when it comes to training. So, what did RDAP KV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for RNN. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then we were like, hey, then no one cared because the loss was crap, but how do we improve that? 
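As a tiny illustration of the dependency Eugene is describing, here is a plain RNN cell unrolled the classic way: step t cannot start until step t-1 has produced its hidden state, so the sequence dimension is processed one token at a time and the GPU sits mostly idle. This is a generic RNN, not RWKV's actual formulation.

```python
import torch
import torch.nn as nn

rnn_cell = nn.RNNCell(input_size=64, hidden_size=64)
x = torch.randn(1024, 1, 64)          # (seq_len, batch, features)
h = torch.zeros(1, 64)

# The bottleneck: each iteration needs the previous hidden state, so nothing here parallelizes
# across the sequence the way attention does.
for x_t in x:
    h = rnn_cell(x_t, h)
```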
And that's essentially where we move forward, because if you see this kind of flow, right, you can actually get your GPU saturated quickly, where it essentially cascades respectively.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. So it's like, once you get your first layer, your token to be computed finish. You start to cascade your compute all the way until you are, Hey, I'm using 100 percent of the GPU. So we, we worked on it, and we started going along the principle of that as long as we keep this general architecture [00:24:00] where, where we can cascade and, and be highly efficient with our architecture, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, you ask us, if you ask me to explain some things in the paper, right, officially in the paper, I'll say we had this idea and we wrote it this way. The reality is someone came with a code, we tested it, it worked, and then we rationalized later. So, so the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: The idea behind rwkbr is that we generally have two major blocks that we do.[00:24:30] Eugene Cheah: We call time mix and channel mix. And time mix generally handles handles long term memory states, where essentially, where essentially where we apply the matrix multiplication and Cilu activation functions into processing an input embedding and an output embedding. I'm oversimplifying it because this, This calculation changed every version and we have, like, version 7 right now.[00:24:50] Eugene Cheah: ChannelMix is similar to Base in the sense that it does shorter term attention, where it just looks at the sister token, or the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers itself, because, like, we do have three papers on this.[00:25:09] Eugene Cheah: Basically, RWKB, RNN for the transformer, ERA, Ego and Pinch, RWKB, Matrix Value State. This is the updated version 5, version 6. And Goldfinch is our, is, is, is, is our hybrid model respectively. We are writing the paper already for V seven and which is, which is for R wk V seven. Called, named Goose, or architectures are named by Bird.[00:25:30] Eugene Cheah: And, I'm going to cover as well, qrwkb, and mama100k, and rwkb, and Where did that lead to? Great! Because we are all GPU poor and to be clear, like, most of this research is done, like, only on a handful H100s, which I had one Google researcher told me that was, like, his experiment budget for a single researcher.[00:25:48] Eugene Cheah: So, our entire organization has less compute than a single researcher in Google. So We, we, one of the things that we explored into was to how do we convert transformer models instead? Because [00:26:00] someone already paid that billion dollars, a million dollars onto training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And, and to, I believe, together AI worked on the lockets for, for the Lambda side of things, and, and we took some ideas from there as well, and we essentially did that for RWKB.[00:26:15] QWRKWv6 launch[00:26:15] Eugene Cheah: And that led to, Q RWKB6, which we just dropped today, a 32 bit instruct preview model, where we took the Quen 32 bit instruct model, freeze the feedforward layer, remove the QKB attention layer, and replace it with RWKB linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the rwkv channel mix layer, we only have the time mix layer. 
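Before the conversion details continue, a simplified sketch of the token shift Eugene just mentioned: each position sees a learned per-channel blend of its own embedding and the previous token's embedding before the time-mix / channel-mix projections. The real RWKV blocks add the WKV recurrence, receptance gating, and per-version changes not shown here.

```python
import torch
import torch.nn as nn

class TokenShiftMix(nn.Module):
    """Sketch of RWKV-style token shift (illustrative only; not the full time-mix or channel-mix block)."""
    def __init__(self, d_model=64):
        super().__init__()
        self.mix = nn.Parameter(torch.full((d_model,), 0.5))  # per-channel interpolation weight

    def forward(self, x):                                     # x: (seq_len, d_model)
        prev = torch.cat([torch.zeros_like(x[:1]), x[:-1]], dim=0)  # the token before each position
        return self.mix * x + (1 - self.mix) * prev

mixed = TokenShiftMix()(torch.randn(128, 64))
```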
But once we do that, we train the RWKV layers. What's important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.
[00:26:54] Eugene Cheah: The end result, surprisingly, and to be honest to the frustration of the RWKV [00:27:00] MoE team, which ended up releasing their model on the same day, was that with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original Qwen 32B model. In fact, the first run completely confused us. I was telling Daniel Goldstein, Smerky, who kind of leads most of our research coordination: when you pitched me this idea, you told me at best you'll get the same level of performance.
[00:27:26] Eugene Cheah: You didn't tell me the challenge score and the Winogrande score would shoot up. I don't know what's happening there, but it did. The MMLU score dropping, that was expected, because if you think about it, when we were training all the layers, we were essentially, like, Frankensteining this thing, and we did brain damage to the feedforward network too with the new RWKV layers.
[00:27:47] Eugene Cheah: But 76 percent, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because we are already now in the process of converting the 70B. This is actually an extremely compute-efficient way to test our attention mechanic.
[00:28:10] Eugene Cheah: It's like, it becomes a shortcut. We are already planning to do our version 7 and our hybrid architecture with it, because we don't need to train from scratch, and we get a really good model out of it. And the other thing that is uncomfortable to say, because we are doing this right now on the 70B, is that if this scales correctly to 128k context length, and I'm not even talking about a million, just 128k, the majority of enterprise workload today is just on 70B at under 32k context length.
[00:28:41] Eugene Cheah: That means if this works and the benchmarks match it, we can replace the vast majority of current AI workloads, unless you want super long context. And then, sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially [00:29:00] we are excited to just push this further.
[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think is going to be exclusive to RWKV. It will probably work for Mamba as well; I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids. Yeah, one of the weirdest things that I wanted to say outright, and I confirmed this with the Black Mamba team and the Jamba team, because we did the GoldFinch hybrid model, is that none of us understand why a hard hybrid, with a state-based model, be it RWKV or state space, and a transformer, performs better than the baseline of both.
[00:29:28] Eugene Cheah: It's like, when you train one, you expect a certain result, and when you replace part of it, you expect the same result. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both.
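A minimal PyTorch-style sketch of the staged freeze-then-unfreeze recipe described above (freeze the pretrained feedforward layers, train the new RWKV layers, then unfreeze and train everything together). The attribute names `blocks`, `mlp`, and `time_mix` are hypothetical stand-ins rather than the actual QRWKV6 training code, and the two-group optimizer is only one plausible reading of the "custom learning rate schedule".

```python
# Hedged PyTorch sketch of the conversion recipe: freeze feedforward, learn the new
# RWKV time-mix layers, then unfreeze everything and train jointly.
# `model.blocks[i].mlp` / `.time_mix` are hypothetical names for illustration only.
import torch

def stage_one_freeze_ffn(model):
    # Stage 1: keep the pretrained feedforward weights fixed so the freshly
    # inserted time-mix layers learn to stand in for the removed QKV attention.
    for block in model.blocks:
        for p in block.mlp.parameters():
            p.requires_grad = False
        for p in block.time_mix.parameters():
            p.requires_grad = True

def stage_two_joint(model, ffn_lr=1e-5, time_mix_lr=1e-4):
    # Stage 2: unfreeze everything and train all layers together, with a smaller
    # learning rate on the pretrained feedforward weights than on the new layers.
    for p in model.parameters():
        p.requires_grad = True
    return torch.optim.AdamW([
        {"params": [p for b in model.blocks for p in b.mlp.parameters()], "lr": ffn_lr},
        {"params": [p for b in model.blocks for p in b.time_mix.parameters()], "lr": time_mix_lr},
    ])
```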
And that's one area of exploration where, like, we only have four experiments across four teams, so a lot more needs to be done.
[00:29:51] Eugene Cheah: But these are the things that excite me, essentially, because that is where we can potentially move ahead. Which brings us to what comes next.
[00:30:00] What's next
[00:30:00] Dan Fu: So, this part is kind of just where we'll talk a little bit about stuff that we're excited about, and maybe have some wild speculation on what's coming next.
[00:30:12] Dan Fu: And, of course, this is also the part that will be more open to questions. So, a couple of things that I'm excited about: continued hardware-model co-design for these models. One of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.
[00:30:29] Dan Fu: And one of the things that we found frustrating is that every time we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land, like, writing these new efficient things. And if we decided to change one thing in PyTorch, like, one line of PyTorch code is a week of CUDA code at least.
[00:30:47] Dan Fu: So one of our goals with a library like ThunderKittens was to just break down what the key principles are, what the key hardware things are, what the key compute pieces are that you get from the hardware. So, for example, on [00:31:00] H100, everything really revolves around a warpgroup matrix multiply operation.
[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix-matrix multiply operations, like multiplying two 64 by 64 matrices, for example. And if you know that ahead of time, when you're designing your model, that probably gives you, you know, some information about how you set the state sizes and how you set the update function.
[00:31:27] Dan Fu: So with ThunderKittens we basically built a whole library just around this basic idea that your basic compute primitive should not be a float; it should be a matrix, and everything should just be matrix compute. And we've been using that to try to both re-implement some existing architectures and also start to design some new ones that are really designed with this core tensor core primitive in mind.
[00:31:44] Dan Fu: Another thing that at least I'm excited about is that over the last four or five years we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter, there's been a bunch of new next-generation models that are coming out.
[00:32:04] Dan Fu: So there are video generation models that can run in real time, that are driven by your mouse and your keyboard, that I'm told, if you play with them, only have a few seconds of memory. Can we take that model, can we give it a very long context length, so that you could actually maybe generate an entire game state at a time?
[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe use some of these new models, or some of these new video generation models that came out. So Sora came out, I don't know, two days ago now.
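To ground the "everything should be a matrix tile" point about ThunderKittens above, here is a hedged NumPy sketch of a matmul decomposed into 64 by 64 tile multiplies, which is roughly the granularity the H100 warpgroup matrix multiply favors. This is plain Python for illustration only, not ThunderKittens or CUDA code.

```python
# Hedged illustration of tile-based thinking: a large matmul expressed as 64x64 tile
# multiplies, the rough granularity discussed above (not actual ThunderKittens code).
import numpy as np

TILE = 64

def tiled_matmul(A, B):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M % TILE == 0 and N % TILE == 0 and K % TILE == 0
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, TILE):
        for j in range(0, N, TILE):
            for k in range(0, K, TILE):
                # Each inner step is itself a small matrix-matrix multiply, the
                # primitive a tensor-core-first design would build everything from.
                C[i:i+TILE, j:j+TILE] += A[i:i+TILE, k:k+TILE] @ B[k:k+TILE, j:j+TILE]
    return C

# Sanity check against NumPy's own matmul on a toy size.
A = np.random.rand(128, 192)
B = np.random.rand(192, 256)
assert np.allclose(tiled_matmul(A, B), A @ B)
```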
But with super long queue times and super long generation times.
[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today: you know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted it to remember everything that it's seen in the last week?
[00:33:06] Dan Fu: I mean, maybe you don't, for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.
[00:33:21] Hot Takes - does anyone really need long context?
[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our
[00:33:25] Dan Fu: hot takes.
[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG still relevant in the future of, like, state-based models?
[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have, I'll say I found it a little bit challenging to do research on, because we had this experience over and over again where you could have an embedding model of any quality, so you could have a really, really bad embedding model, or you could have a really, really [00:34:00] good one, by any measure of good.
[00:34:03] Dan Fu: And for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but...
[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are, like, extremely excited about the idea of RWKV or state space models potentially having infinite context.
[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as was previously covered, you need to test the model differently. So, think of it more along the lines of a human. Like, I don't remember what I ate for breakfast yesterday.
[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll make. And we humans are not quadratic transformers. If we were, let's say we increased our brain size for every second we lived, we would have exploded by the time we were 5 years old or something like that. And I think, basically, fundamentally for us, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc., our general idea is that instead of that expanding state and that increase in computational cost, what if we have a fixed state size?
[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big that limit is, is the question. Like, RWKV is running at 40 megabytes for its state. Its future version might run at 400 megabytes. That is, like, millions of tokens, if you're talking about the mathematical maximum possibility.
[00:35:29] Eugene Cheah: It's just that, I guess, we are all more inefficient about it, so maybe we hit 100,000. And that's kind of like the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because they will choose to forget things and choose to remember things.
And that's why I think there might be some element of RAG, but it may not be the same RAG.
[00:35:49] Eugene Cheah: It may be that the model learns things and goes, hmm, I can't remember that article, let me do a database search. Just like us humans: when we can't remember an article in the company, we do a search on Notion.
[00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are... so right now, one intuition about language models is that all those parameters are around just to store random facts about the world.
[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has, like, the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is that it'll just be a lot less factual about things that it knows or that it can do.
[00:36:32] Dan Fu: But that points to the fact that all those weights we're spending, all that SGD we're spending to train these models, is just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe, you know, has some sort of gradient descent in it. That would be quite interesting.
[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president, so that it doesn't get lost.
[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When 405B state space models exist, RAG exists, and no one does long context, who's throwing in 2 million token questions? Hot takes?
[00:37:24] Dan Fu: The "who's throwing in 2 million token questions" I think is a really good question. So actually, I was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't, you know, play both sides?
[00:37:40] Dan Fu: But I think, for both of us, the reason that we first got into this was just from the first-principles question of: there's this quadratic thing, clearly intelligence doesn't need to be quadratic, what is going on, can we understand it better? You know, since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.
[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting a two million token prompt into these models. And, you know, if they are, maybe we can go, you know, design a better model to do that particular thing. Yeah, what do you think about that? So you've also been working on this. Do you think long context matters?
[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right? Raise your hand.
[00:38:28] Vibhu: Yeah, 2 million.
[00:38:29] Eugene Cheah: Oh, it's 2 million.
[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?
[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV.
I use it a lot.
[00:38:41] Eugene Cheah: So, some people have used it, and I think that's where my opinion starts to differ, because I think the big labs may have a bigger role in this. Because, like, even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that when we need to backprop [00:39:00] against the states, we actually need to maintain the states in between the tokens, across the whole token length.
[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training on 1 million, which is the same for transformers, actually. It just means we don't magically reduce the VRAM consumption at training time. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.
[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, I think o1-style reasoning might actually be pushing that direction downwards. In my opinion, and this is my partial hot take, let's say you have a super big model, and let's say you have a 70B model that may take double the tokens but gets the same result.
[00:39:51] Eugene Cheah: Strictly speaking, the 70B, and this goes for transformer or non-transformer, right, will take less resources than that 400B [00:40:00] model, even if it did double the amount of thinking. And if that's the case, and we are still all trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and efficient as possible,
[00:40:11] Eugene Cheah: with a very efficient architecture, which some folks happen to be working on, to just reason it out over larger and larger context.
[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?
[00:40:38] Dan Fu: Yeah, it's a great question. I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run it forever. You could just run it forever, and at the very least it won't, like, error out or crash.
[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, one place where the research on architectures probably ran faster than other research is actually the benchmarks for long context. So you turn it on forever, you want to do everything or watch everything: what is it that you actually wanted it to do? Can we actually build some benchmarks for that? Then measure what's happening, and then ask the question: can the models do it? Is there something else that they need? Yeah, I think if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long-context benchmarks out at the same time as we started pushing context length on all these models.
[00:41:41] Eugene Cheah: I will also say: the use case. So, like, I think we both agree that there's no infinite memory and the model needs to be able to learn and decide.
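As a back-of-envelope check of the tradeoff Eugene sketches above (a 70B model that "thinks" with twice the tokens versus a single pass of a 400B model), decode cost scales very roughly with parameters times tokens generated, if we ignore attention, KV-cache, and hardware details. That proportionality is an assumption for illustration, not a precise cost model.

```python
# Back-of-envelope: decode cost ~ parameters x tokens generated (ignores attention,
# KV cache, batching, and hardware details; a rough assumption for illustration).
params_small, params_big = 70e9, 400e9
tokens_small, tokens_big = 2_000, 1_000   # the smaller model "thinks" twice as long

cost_small = params_small * tokens_small  # ~1.4e14 parameter-tokens
cost_big = params_big * tokens_big        # ~4.0e14 parameter-tokens
print(f"70B with 2x the tokens is still ~{cost_big / cost_small:.1f}x cheaper")  # ~2.9x
```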
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanic, which is not based on token position, is that the model doesn't suddenly go crazy when you go past the [00:42:00] 8k training context length, or a million context length.
[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize; it just starts forgetting things. But some of these things are still there in latent memory; some of these things are still somewhat there. That's the whole point of why reading it twice works, things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers who use these architectures for time series data.
[00:42:26] Eugene Cheah: Weather modeling. So you are not asking what the weather was five days ago; you're asking what the weather will be tomorrow, based on an effectively infinite history, for as long as this Earth and the computer keep running. And they found that it is, like, better than existing transformer or existing architectures at modeling this weather data.
[00:42:47] Eugene Cheah: Controlled for the param size and stuff; I'm quite sure there are people with larger models. So there are things where, in this case, right, there are future applications, if your question is just what's next and not what was 10 years ago.
[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe
The monthly Q&A commenceth again, with emails and Discord Qs positively pouring in about the origin of the flange effect, why all the electrical outlets are upside down, gaming on an M4 Mac Mini and how Apple's move to a 16GB minimum affects their status as a family recommendation, the value of moving to the Bay Area for a computer science degree, the race to the bottom in electronics parts and accessories, Will's holiday board game recommendations, an impromptu ranking of charts, and more. Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
In this episode of Hands-On Tech, host Mikah Sargent tackles crucial questions about home networking, Mac mini security, and tech troubleshooting. Absorb this week's helpful tips, from analyzing Orbi mesh WiFi performance to explaining lock screen best practices and VHS video enhancement solutions. Susan asks if her Orbi AX6000 tri-band mesh WiFi 6 router is adequate for her 4,000 square foot home, given she's experiencing TV streaming delays and slower WiFi speeds compared to cellular data. DA, a new Mac Mini user transitioning from Windows, wants to know how to disable the password requirement when waking up their computer. Bruce is having issues with his Synology NAS repeatedly disconnecting from his new M4 Mac Mini and requiring password re-entry after sleep or logout, despite using AutoMounter. David asks if he should upgrade the RAM on his new HP Pavilion PC beyond 16GB for video editing, and seeks recommendations for software to improve VHS video quality. Brad inquires whether it's safe to keep his iPhone 16 Pro Max on a charger all day and night, including while using an Anker power bank. (Follow-up) John provides an update on his previous question about getting an iPhone hotspot to work with older WiFi devices, sharing what solutions worked for him. Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
Catch up on all of the Week 11 Fantasy Football action! Chris Welsh, Scott Bogman, and Deepak Chona (@SportMDAnalysis) break down key injuries and everything that stood out from every game. Timestamps (may be off due to ads): Intro - 0:00:00 | Notable Injuries - 0:01:33 | Darnell Mooney - 0:02:04 | CeeDee Lamb - 0:04:11 | Nico Collins - 0:07:33 | Isiah Pacheco - 0:09:16 | My Playbook - 0:12:37 | IND 28 @ NYJ 27 - 0:13:57 | BAL 16 @ PIT 18 - 0:18:22 | MIN 23 @ TEN 13 - 0:23:19 | Signed Josh Allen Helmet Giveaway - 0:26:49 | CLE 14 @ NO 35 - 0:27:50 | JAX 6 @ DET 52 - 0:35:16 | GB 20 @ CHI 19 - 0:38:51 | FantasyPros Trade Analyzer - 0:42:47 | LV 19 @ MIA 34 - 0:43:36 | LAR 28 @ NE 22 - 0:48:04 | FantasyPros Discord - 0:50:10 | ATL 6 @ DEN 38 - 0:51:46 | SEA 20 @ SF 17 - 0:56:02 | KC 21 @ BUF 30 - 1:01:34 | Outro - 1:05:12 Helpful Links: My Playbook - Don't miss out on the revolutionary fantasy football software that over 1 million teams have already synced with: My Playbook. It's packed with custom advice, rankings, and analysis tailored just for your team. Discover your optimal lineup, find advantageous trades, and stay ahead with the latest player news. Join the league of winners today at fantasypros.com/myplaybook and let's secure that championship! Join Us On Discord! - Join our FantasyPros Discord Community! Chat with other fans and get access to exclusive AMAs that wind up on our podcast feed. Come get your questions answered and BE ON THE SHOW at fantasypros.com/chat. Leave a Review - If you enjoy our show and find our insight to be valuable, we'd love to hear from you! Your reviews fuel our passion and help us tailor content specifically for YOU. Head to Apple Podcasts, Spotify, or wherever else you get your podcasts and leave an honest review. Let's make this show the ultimate destination for fantasy football enthusiasts like us. Thank you for watching and for showing your support - https://fantasypros.com/review/ BettingPros Podcast - For advice on the best picks and props across both the NFL and college football each and every week, check out the BettingPros Podcast at bettingpros.com/podcast, our BettingPros YouTube channel at youtube.com/bettingpros, or wherever you listen to podcasts. See omnystudio.com/listener for privacy information.
This week, we cover OpenCost's big incubation milestone, CNCF's graduation rules, and a flurry of tech acquisitions. Plus, some thoughts on teaching kids about passwords. Watch the YouTube Live Recording of Episode (https://www.youtube.com/watch?v=nWPR3HLPjfI) 493 (https://www.youtube.com/watch?v=nWPR3HLPjfI) Runner-up Titles Yes, No, Maybe Infinite Password Loop Bring your kids to work day: passwords. Password Talk Escaping characters Stone Cold Steve Austin Don't hire people with pets Eats AWS stuff natively. I compete on my ASCII character set.Stay in the sandbox Enron for cloud purchasing Rundown OpenCost Advances to CNCF Incubation (https://www.opencost.io/blog/cncf-incubation) Episode 492: Aran Khanna on Cloud Insurance (https://www.softwaredefinedtalk.com/492) VMware Reflections from Explore Barcelona and the Challenges of Modern App Delivery (https://news.broadcom.com/app-dev/reflections-from-explore-barcelona-and-the-challenges-of-modern-app-delivery) New SMB subscription may not end VMware migrations (https://arstechnica.com/information-technology/2024/11/new-smb-friendly-subscription-tier-may-be-too-late-to-stop-vmware-migrations/) M&A Apple to Acquire Pixelmator, Maker of Popular Photo-Editing Apps (https://www.bloomberg.com/news/articles/2024-11-01/apple-to-acquire-pixelmator-maker-of-popular-photo-editing-apps?utm_medium=email&utm_source=author_alert&utm_term=241101&utm_campaign=author_19842959) Red Hat acquires AI optimization startup Neural Magic (https://techcrunch.com/2024/11/12/red-hat-acquires-ai-optimization-startup-neural-magic/) IBM's Red Hat Acquisition Will Pay For Itself By Early Next Year (https://www.nextplatform.com/2024/10/24/ibms-red-hat-acquisition-will-pay-for-itself-by-early-next-year/) Snyk Acquires Developer-First DAST Provider Probely (https://www.globenewswire.com/news-release/2024/11/12/2979082/0/en/Snyk-Acquires-Developer-First-DAST-Provider-Probely.html) IBM's Red Hat Acquisition Will Pay For Itself By Early Next Year (https://www.nextplatform.com/2024/10/24/ibms-red-hat-acquisition-will-pay-for-itself-by-early-next-year/) VMware Reflections from Explore Barcelona and the Challenges of Modern App Delivery (https://news.broadcom.com/app-dev/reflections-from-explore-barcelona-and-the-challenges-of-modern-app-delivery) New SMB subscription may not end VMware migrations (https://arstechnica.com/information-technology/2024/11/new-smb-friendly-subscription-tier-may-be-too-late-to-stop-vmware-migrations/) Coté's take on Explore, in last week's Cloud Foundry Weekly (https://www.youtube.com/watch?v=Wkgwl9mKL2Y). RTO Amazon employees are a flight risk after the new return-to-office mandate, research reveals (https://finance.yahoo.com/news/amazon-exec-says-9-10-103742343.html) Remote work reduces child penalties by roughly half (https://x.com/arpitrage/status/1849530101035160031) Read the letter sent to AWS CEO Matt Garman, signed by 500 employees, (https://www.businessinsider.com/amazon-employees-open-letter-aws-ceo-office-return-rto-2024-10) Amazon CEO Andy Jassy denies that 5-day office mandate is a 'backdoor layoff' (https://www.cnbc.com/2024/11/05/amazon-ceo-andy-jassy-5-day-office-mandate-isnt-a-backdoor-layoff.html) Washington Post Employees Ordered Back to Office 5 Days a Week (https://www.nytimes.com/2024/11/07/business/media/washington-post-return-to-office.html?smid=nytcore-ios-share&referringSource=articleShare) Everyone agrees: A shorter workweek is great! 
(https://thehustle.co/news/everyone-agrees-a-shorter-workweek-is-great) Return-to-office mandates are more than “backdoor layoffs” (https://overcast.fm/+AAQLdtAb8Tc) Relevant to your Interests Google CEO says over 25% of new Google code is generated by AI (https://arstechnica.com/ai/2024/10/google-ceo-says-over-25-of-new-google-code-is-generated-by-ai/) Threads has 275 M Monthly Users (https://www.threads.net/@alexheath/post/DBw02uLSE53?xmt=AQGzqxkKe87WI9ToiqUrcEIU6mxhBohSO8BNX4ve1zqRHQ) Dropbox is laying off 20% of its global workforce (https://www.threads.net/@cnbc/post/DBwYF88uYSr?xmt=AQGz-t_BCEcQFjjZwD05xps9bJGHO7FL25RD1h6JIauuOQ) From IaC to Cloud Management: Pulumi's Evolution Story (https://thenewstack.io/from-iac-to-cloud-management-pulumis-evolution-story/) For Jeff Bezos and his businesses, Washington has become more important (https://www.washingtonpost.com/nation/2024/10/30/bezos-business-federal-government/) Russian court fines Google $2 decillion (https://www.theregister.com/2024/10/29/russian_court_fines_google/) GitHub Next | GitHub Spark (https://githubnext.com/projects/github-spark) The MacBook Air gets a surprise upgrade to 16GB of RAM (https://www.theverge.com/2024/10/30/24282981/apple-macbook-air-m2-m3-16gb-ram-minimum-price-unchanged) Meta says open sourcing Llama models will be a money-saver (https://www.theregister.com/2024/10/31/meta_q3_2024/) Google employees pressure costumed execs at all-hands meeting for clarity on cost cuts (https://www.cnbc.com/2024/11/01/google-employees-pressure-execs-at-all-hands-for-clarity-on-cost-cuts.html) Intel's future laptops will have memory sticks again (https://www.theverge.com/2024/11/1/24285513/intel-ceo-lunar-lake-one-off-memory-package-discrete-gpu) Against Incident Severities and in Favor of Incident Types (https://www.honeycomb.io/blog/against-incident-severities-favor-incident-types) Nintendo Just Launched a Music Streaming App, and It's Surprisingly Good (https://gizmodo.com/nintendo-just-launched-a-music-streaming-app-and-its-surprisingly-good-2000518802) Why The US Military Chose Silicon-Graphene Batteries (https://www.youtube.com/watch?v=l60hjFvj64s) Warren Buffett's GEICO repatriates work from the cloud (https://www.thestack.technology/warren-buffetts-geico-repatriates-work-from-the-cloud-continues-ambitious-infrastructure-overhaul/) Google Confirms Jarvis AI Is Real by Accidentally Leaking It (https://gizmodo.com/google-confirms-jarvis-ai-is-real-by-accidentally-leaking-it-2000521089) Curbside charging is coming to Michigan. (https://www.theverge.com/2024/11/6/24289516/curbside-charging-is-coming-to-michigan) Nintendo says the Switch successor will be compatible with Switch games (https://www.theverge.com/2024/11/5/24284745/switch-2-backward-compatibility-nintendo-online-preservation) Platform vs. DevEx teams: What's the difference? 
(https://newsletter.getdx.com/p/platform-vs-devex-teams) Why Strava Is a Privacy Risk for the President (and You Too) (https://lifehacker.com/health/stravas-heatmap-privacy-problem) Thunderbolt 5: Only Necessary for the Most Demanding Uses (https://tidbits.com/2024/11/06/thunderbolt-5-only-necessary-for-the-most-demanding-uses/) Guide to Selling Your Company (https://www.onlycfo.io/p/guide-to-selling-your-company) The mystery of Masayoshi Son, SoftBank's great disrupter (https://on.ft.com/3ADujb9) IronCalc (https://www.ironcalc.com/?utm_source=changelog-news) Neptyne is shutting down (https://www.neptyne.com/blog/neptyne-is-shutting-down) OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI (https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai) Matt Mullenweg says Automattic is 'very short-staffed' amid WordPress vs. WP Engine drama (https://techcrunch.com/2024/10/30/matt-mullenweg-says-automattic-is-very-short-staffed-amid-wordpress-vs-wp-engine-drama/) Automattic offered employees another chance to quit — this time with nine months' severance (https://techcrunch.com/2024/10/17/automattic-offered-employees-another-chance-to-quit-this-time-with-nine-months-severance/) Automattic's new site tracks how many websites left WP Engine following feud (https://techcrunch.com/2024/11/07/automattics-new-site-tracks-how-many-websites-left-wp-engine-following-feud-with-matt-mullenweg/) Cloudflare Blocks Automattic's WP Engine Tracker For Phishing (https://www.searchenginejournal.com/cloudflare-blocks-automattics-wp-engine-tracker-for-phishing/532244/) We're leaving Kubernetes - Blog (https://www.gitpod.io/blog/we-are-leaving-kubernetes) Nonsense 'Infinite monkey theorem' challenged by Australian mathematicians (https://www.bbc.com/news/articles/c748kmvwyv9o) Listener Feedback Anova Precision™ Oven 2.0 (https://anovaculinary.com/products/anova-precision-oven?adnet=g&gad_source=1&gbraid=0AAAAADhfRrCJj9bTdq3Z1e0hmcx0uuIQ5&gclid=Cj0KCQiAlsy5BhDeARIsABRc6Zsk_vcmd7dVaCIchSV2jLrJZSMXP3XPo34xTxNMGiCB3cxtJHwzFzIaAob8EALw_wcB) Conferences SREday Amsterdam (https://sreday.com/2024-amsterdam/), Nov 21, 2024, Coté speaking (https://sreday.com/2024-amsterdam/Michael_Cote_VMwarePivotal_We_Fear_Change), 20% off with code SRE20DAY CfgMgmtCamp (https://cfgmgmtcamp.org/ghent2025/), February 2rd to 5th. 
DevOpsDayLA (https://www.socallinuxexpo.org/scale/22x/events/devopsday-la) at SCALE22x (https://www.socallinuxexpo.org/scale/22x), March 6-9, 2025, discount code DEVOP SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Overcast (https://overcast.fm) features: Queue (https://www.reddit.com/r/OvercastFm/comments/1ehwixl/add_tomove_to_whats_the_difference/) and Uploads (https://thesweetsetup.com/upload-mp3-files-overcast/) Pixelmater Pro (https://www.pixelmator.com/pro/) Matt: Hardcore History: Wrath of the Khans (https://www.dancarlin.com/product/hardcore-history-wrath-of-the-khans-series/) podcast Wiz Ugly Sweaters Giveaway (https://www.linkedin.com/posts/wizsecurity_you-can-get-one-of-our-exclusive-2025-activity-7262464003807887362-fzNY?utm_source=share&utm_medium=member_desktop) Coté: Political Wire (https://politicalwire.com) Photo Credits Header (https://unsplash.com/photos/switched-on-iphone-dk4en2rFOIE) Artwork (https://unsplash.com/photos/person-holding-black-academic-hat-oTglG1D4hRA)
M4 Macs arrived with three days of back-to-back announcements: iMac, Mac mini, and MacBook Pro. With Yuka Ohishi (大石結花), one of the few Japanese attendees at the launch event in LA, as guest, host DANBO, president of MACお宝鑑定団, and backspace.fm's Matsuo talk as a trio about the new generation of Macs. Hands-on with the M4-powered "iMac (24‑inch, M4, 2024)", "Mac mini (2024)", and "MacBook Pro (2024)". Hands-on with the new M4 Macs! The new-generation iMac, Mac mini, and MacBook Pro are here ✨
This week, Apple finally makes 16GB the base memory in all Macs, X is still not the everything app that Elon promised, and Angelo may have seen a UFO on Halloween.
This week, they're here — ALL the M4 Macs. The iMac, the Mac mini, the MacBook Pros, the new accessories. What did we order? This episode supported by: Listeners like you. Your support helps us fund CultCast Off-Topic, a new weekly podcast of bonus content available for everyone; and helps us secure the future of the podcast. You also get access to The CultClub Discord, where you can chat with us all week long, give us show topics, and even end up on the show. Support The CultCast at support.thecultcast.com — OR at CultOf9to5MacRumors.com Take back control of your personal information and reduce the risk of spam, scams and identity theft with Incogni. Get 60% off an annual plan with code CULTCAST at incogni.com/cultcast Notion AI can now give you instant answers to your questions, using information from across your wiki, projects, docs and meeting notes. Go to notion.com/cultcast to try the powerful Notion AI today. This week's stories: Mac mini radically redesigned with M4 and M4 Pro chip Apple unveiled the radically redesigned Mac mini on Tuesday, with versions powered by the M4 chip and a new M4 Pro chip. Hacks and jokes ‘fix' M4 Mac mini's odd power-button placement New M4 iMac delivers major speed boost Apple launched an upgraded iMac with an M4 chip and support for Apple Intelligence on Monday, calling it “the world's best all-in-one for AI.” M4 MacBook Pros deliver big performance and battery life gains While the new laptops sport the same design as their predecessors, they deliver faster performance and a staggering 24 hours of battery life, the company says. Shocker! Apple doubles RAM in M2 and M3 MacBook Air Apple on Wednesday increased the RAM in the base model M2 and M3 MacBook Air from 8GB to 16GB. And the enhancement comes without a price increase. New Magic Keyboard, mouse and trackpad spell end for Lightning Apple's new Magic Keyboard, Magic Mouse and Magic Trackpad showed up for sale Monday. As expected, they switch from Lightning connector to USB-C.
As Apple announces its latest earnings, it's also brought out new Macs and launched Apple Intelligence to the world. There's so much to examine, including the future of the iMac.
Contact your hosts: @williamgallagher_ on Threads, @WGallagher on Twitter, William's 58keys on YouTube, William Gallagher on email, @hillithreads on Threads, @Hillitech on Twitter, Wes on Mastodon, Wes Hilliard on email
Sponsored by: Oracle: Take a free test drive of Oracle Cloud Infrastructure at oracle.com/appleinsider. Notion: Try out the incredible power of Notion AI today! For a limited time, try Notion AI for free when you visit: notion.com/appleinsider
Links from the Show:
Collect 'em all in Pokemon TCG Pocket, now available for iPad, iPhone
Nintendo Music app now available on iPhone
Nintendo shutting down 'Animal Crossing: Pocket Camp'
Apple takes a three-day week for its Mac launches
MacBook Air doubles base memory to 16GB for same $999
New MacBook Pro arrives with M4 Pro, M4 Max, and a black colorway
New Mac mini arrives with redesign, powerful M4 & M4 Pro processors, more USB-C
New 24-inch iMac adds M4 chip, nano-texture glass option
Apple updates 'Magic' accessories to USB-C, included with M4 iMac
Apple's Magic Mouse charging port design has never been a big deal
A clever hack fixes the new Mac mini power button's awkward location
YouTuber makes fake Apple event video for new Macs
Apple Intelligence Image Playground, Genmoji testers face long wait
Apple Intelligence arrives on the Mac with macOS Sequoia update
iOS 18.1 with the first wave of Apple Intelligence features is out now
iOS 18.1 & iPadOS 18.1 review: baby steps with Apple Intelligence
Support the show: Support the show on Patreon or Apple Podcasts to get ad-free episodes every week, access to our private Discord channel, and early release of the show! We would also appreciate a 5-star rating and review in Apple Podcasts.
More AppleInsider podcasts: Tune in to our HomeKit Insider podcast covering the latest news, products, apps and everything HomeKit related. Subscribe in Apple Podcasts, Overcast, or just search for HomeKit Insider wherever you get your podcasts. Subscribe and listen to our AppleInsider Daily podcast for the latest Apple news Monday through Friday. You can find it on Apple Podcasts, Overcast, or anywhere you listen to podcasts.
Podcast artwork from Basic Apple Guy. Download the free wallpaper pack here. Those interested in sponsoring the show can reach out to us at: advertising@appleinsider.com
(00:00) - Intro (02:04) - Apple earnings (04:53) - Nintendo (13:12) - New Macs (19:56) - Magic Mouse (26:18) - Portable Mac mini (49:03) - Image Playground (01:08:07) - Apple Intelligence ads (01:17:28) - AppleInsider+ ★ Support this podcast on Patreon ★
On this week's episode of The MacRumors Show, we talk through all of Apple's major Mac announcements from this week. Over the first three days of the week, Apple unveiled the new iMac, Mac mini, and MacBook Pro with the M4, M4 Pro, and M4 Max chips. The chips offer significantly better CPU, GPU, and Neural Engine performance, higher amounts of unified memory, and more. The new iMac features USB-C Magic accessories and a refreshed palette of color options. The new Mac mini has been completely redesigned for the first time in over a decade with a radically smaller enclosure and two front-facing USB-C ports. Finally, the new MacBook Pro has a bolstered base model and a brighter display. The new iMac and MacBook Pro gain a nano-texture display option and a 12-megapixel front-facing camera with Center Stage and Desk View for the first time. Models with the M4 Pro or M4 Max support Thunderbolt 5 connectivity, and almost all of the new Macs feature better external display support. All of Apple's Macs now start with 16GB of unified memory as standard, including the MacBook Air, with no increase in price. This episode is sponsored by Notion. Try Notion for free by visiting https://www.notion.com/macrumors
Benjamin and Chance react to this week's trio of Apple announcements, with the exciting launch of the newly redesigned M4 Mac mini, and all the changes in the updated M4 MacBook Pro and iMac lineups. And in Happy Hour Plus, we consider how successful this three-day event format was for Apple, and whether we'd like them to do it again. Subscribe at 9to5mac.com/join. Sponsored by Shopify: Grow your business no matter what stage you're in. Sign up for a $1 per month trial at shopify.com/happyhour. Sponsored by Oracle: Take a free test drive of OCI at oracle.com/HAPPYHOUR Hosts Chance Miller @ChanceHMiller on Twitter @chancehmiller@mastodon.social @ChanceHMiller on Instagram @ChanceHMiller on Threads Benjamin Mayo @bzamayo on Twitter @bzamayo@mastodon.social @bzamayo on Threads Subscribe, Rate, and Review Apple Podcasts Overcast Spotify 9to5Mac Happy Hour Plus Subscribe to 9to5Mac Happy Hour Plus! Support Benjamin and Chance directly with Happy Hour Plus! 9to5Mac Happy Hour Plus includes: Ad-free versions of every episode Pre- and post-show content Bonus episodes Join for $5 per month or $50 a year at 9to5mac.com/join. Feedback Submit #Ask9to5Mac questions on Twitter, Mastodon, or Threads Email us feedback and questions to happyhour@9to5mac.com Links Apple unveils M4 iMac in new colors, nano-texture display option, 16GB base RAM, more Apple unveils redesigned Mac mini with M4 and M4 Pro, Thunderbolt 5, more Apple launches entry MacBook Pro 14-inch with M4 chip, 16GB RAM, better battery life, more Apple unveils new MacBook Pro line with M4, nano-texture display, Center Stage camera, more Watch: Hands-on with M4 MacBook Pro, iMac and the new Mac mini
The Apple release week continues with new MacBook Pros. GitHub goes multi-model. Alphabet earnings were good, but Reddit earnings were massive. Why Samsung is having such a hard time since the summer. And a summary of the color Kindle reviews. Here's what you missed today in the world of tech.
Links:
Apple updates the MacBook Pro with M4 Pro and M4 Max chips (The Verge)
Every MacBook Air now starts with 16GB of RAM at no extra cost (Engadget)
GitHub Copilot will support models from Anthropic, Google, and OpenAI (The Verge)
PlayStation Shutters Studio Behind 'Concord' Video-Game Flop (Bloomberg)
More than a quarter of new code at Google is generated by AI (The Verge)
Reddit shares soar 22% on earnings beat and better-than-expected forecast (CNBC)
Samsung's Sudden $122 Billion Wipeout Shows the Cost of Sleeping on AI (Bloomberg)
Russian Hackers Are Targeting US Officials, Microsoft Says (Bloomberg)
Kindle Colorsoft review: The missing link in Amazon's ereader lineup (Engadget)
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Listen to a recap of the top stories of the day from 9to5Mac. 9to5Mac Daily is available on iTunes and Apple's Podcasts app, Stitcher, TuneIn, Google Play, or through our dedicated RSS feed for Overcast and other podcast players. Sponsored by Dreametech: Industry-leading smart cleaning products, available now with massive savings. New episodes of 9to5Mac Daily are recorded every weekday. Subscribe to our podcast in Apple Podcast or your favorite podcast player to guarantee new episodes are delivered as soon as they're available. Stories discussed in this episode: Apple unveils new MacBook Pro line with M4, nano-texture display, Center Stage camera, more Apple launches entry MacBook Pro 14-inch with M4 chip, 16GB RAM, better battery life, more M4 Max chip has 16-core CPU, 40-core GPU and 35% increase in memory bandwidth Apple doubles MacBook Air base RAM to 16GB on M2 and M3 models iOS 18.2 beta: New daily sudoku games come to Apple News+ Listen & Subscribe: Apple Podcasts Overcast RSS Spotify TuneIn Google Podcasts Subscribe to support Chance directly with 9to5Mac Daily Plus and unlock: Ad-free versions of every episode Bonus content Catch up on 9to5Mac Daily episodes! Don't miss out on our other daily podcasts: Quick Charge 9to5Toys Daily Share your thoughts! Drop us a line at happyhour@9to5mac.com. You can also rate us in Apple Podcasts or recommend us in Overcast to help more people discover the show.
Apple announced the new iMac with the M4 chip, which for the first time starts with 16GB of RAM, at a fairly reasonable price. These are the new features you should know about, along with a quick comparison against its predecessors, so you can decide whether the upgrade is really worth it. Follow our YouTube channel. - YouTube Tech - YouTube Dinero
Listen to a recap of the top stories of the day from 9to5Mac. 9to5Mac Daily is available on iTunes and Apple's Podcasts app, Stitcher, TuneIn, Google Play, or through our dedicated RSS feed for Overcast and other podcast players. Sponsored by Dreametech: Industry-leading smart cleaning products, available now with massive savings. New episodes of 9to5Mac Daily are recorded every weekday. Subscribe to our podcast in Apple Podcast or your favorite podcast player to guarantee new episodes are delivered as soon as they're available. Stories discussed in this episode: Apple unveils M4 iMac in new colors, nano-texture display option, 16GB base RAM, more Apple Intelligence is here: iOS 18.1 includes Writing Tools, a new look for Siri, notification summaries, and more iOS 18.1 now available: Here's everything you need to know Listen & Subscribe: Apple Podcasts Overcast RSS Spotify TuneIn Google Podcasts Subscribe to support Chance directly with 9to5Mac Daily Plus and unlock: Ad-free versions of every episode Bonus content Catch up on 9to5Mac Daily episodes! Don't miss out on our other daily podcasts: Quick Charge 9to5Toys Daily Share your thoughts! Drop us a line at happyhour@9to5mac.com. You can also rate us in Apple Podcasts or recommend us in Overcast to help more people discover the show.
Get ready to set your ears on fire with this new episode! The podcast's fantastic four get together to talk about Apple's brand-new launches. The main course is the new iMac with the M4 chip, which arrives in a palette of vibrant colors to brighten up your home or office. It also surprises with 16GB of RAM as standard, although the Ethernet port sparks some debate. But it's not all hardware: we also go over what's new in iOS 18.1 and the long-awaited Apple Intelligence. That said, in Europe we still have to wait to try it; in the meantime, we make do with small improvements such as the changes to Control Center and the Health app. Between laughs and jokes, the team speculates about the week's upcoming launches: the Mac mini, MacBook Pro and Mac Studio. Will anyone give in to the temptation to upgrade? There was no shortage of tech anecdotes, such as the "betrayal" by Notion or the mysterious Ulysses app that has the podcasters buzzing. And of course, there was time to complain about advertisers who don't want to be pushed out of the way in Safari. In short, a relaxed, humor-filled chat about the latest Apple news. Don't miss it, and subscribe for more!
Control Body Odor ANYWHERE with @shop.mando and get $5 off your Starter Pack (that's over 40% off) with promo code MAC at shopmando.com! #mandopod #ad This episode is sponsored by Notion. Try Notion for free by visiting https://www.notion.com/macrumors On this week's episode of The MacRumors Show, we discuss the unprecedented leak of Apple's M4 MacBook Pro models and the company's rumored move to more staggered hardware and software releases. Multiple leaks surrounding Apple's unannounced 14-inch MacBook Pro with the M4 chip recently surfaced online. The leaks began with unboxing videos shared by several Russian YouTube channels, showcasing the new entry-level MacBook Pro ahead of its official announcement. These leaks were followed by a listing on a Russian classifieds site, where multiple units were allegedly being sold after what appears to be theft from a warehouse. This marks one of Apple's most significant leaks in recent memory, drawing comparisons to the 2010 iPhone 4 prototype incident. The leaked MacBook Pro reveals several notable upgrades. The M4 chip with a 10-core CPU is 25% faster than the M3's 8-core CPU. As rumored, it also comes with 16GB of RAM as the new base configuration, doubling the previous standard of 8GB, and support for up to two external displays with the lid open. Another key improvement is the addition of a third Thunderbolt port, bringing the entry-level model up to parity with the higher-end configurations. It also looks like the entry-level MacBook Pro will be available in the Space Black color option for the first time. Despite these upgrades, the display, general design, and other features remain unchanged from the current MacBook Pro. These revelations come just weeks before Apple is expected to officially unveil its first M4-powered Macs, with a rumored release date of November 1. According to Bloomberg's Mark Gurman, Apple appears to be slowly moving away from its traditional annual release schedule for hardware and software, favoring a more staggered approach. This shift is evident with the introduction of iOS 18, where key features like Apple Intelligence are delayed and due to be rolled out in subsequent updates throughout 2025. As Apple's product lineup grows more complex, this strategy would allow for better quality control and innovation by releasing products and features when they are fully developed, rather than adhering to fixed timelines. While iPhones are expected to maintain their yearly updates, other products like the Apple Watch and Mac lineup may follow a multi-year or staggered release schedule to make the company's launches less predictable and more flexible. We discuss whether this is a good move for Apple and take stock of its product strategy as a whole in light of recent releases.
In the ever-evolving landscape of laptop technology, Intel's recent announcement of the Lunar Lake processors, officially known as the Core Ultra 200V series, has stirred considerable interest among consumers and tech enthusiasts alike. However, this new architecture brings with it a notable limitation regarding RAM options, which could significantly impact user experience and purchasing decisions. Avram explores the implications of the Lunar Lake architecture on RAM configurations, particularly the constraints it imposes on consumers seeking higher memory capacities.

Lunar Lake limits RAM options significantly

One of the most striking features of the Lunar Lake processors is the integration of RAM directly onto the CPU package. This marks a significant shift in the design of PC CPUs, as it restricts laptop manufacturers to offering only specific RAM configurations, namely 16GB and 32GB. While this may suffice for many users, it presents a substantial limitation for power users who require more memory for demanding tasks. For instance, someone who regularly runs virtual machines or engages in heavy multitasking may find 32GB inadequate, especially as software requirements continue to grow.

The absence of a 64GB option in the Lunar Lake lineup raises concerns about the long-term viability of these laptops for users who tend to keep their devices for several years. Avram notes that higher RAM capacities are becoming increasingly important, particularly for those who use their laptops for resource-intensive applications or who keep numerous browser tabs and programs open concurrently. For power users, the inability to go beyond 32GB could lead to frustration and dissatisfaction as their computing needs evolve.

Moreover, he highlights the implications of this limitation for specific use cases, such as running virtual machines. Users who want to experiment with different operating systems or software configurations often need to dedicate a significant portion of their RAM to these virtual environments. In this scenario, a laptop with only 32GB of RAM can quickly become restrictive: allocating 16GB to a virtual machine leaves only 16GB for the host operating system and other applications. This could result in sluggish performance and hinder the overall user experience.

Despite the impressive advancements in processing power and efficiency that the Lunar Lake processors promise, such as improved battery life and enhanced AI capabilities, the RAM limitation remains a critical drawback. Avram emphasizes that while many consumers may find 32GB sufficient for their needs today, the rapid pace of software development and the increasing demands of memory-intensive applications suggest that this may not hold true in the near future. Thus, the lack of flexibility in RAM configurations could deter potential buyers looking for a laptop that will remain relevant and capable over time.

Conclusion: Lack of upgradability limits choices

In conclusion, while Intel's Lunar Lake processors bring notable advancements in processing power and efficiency, the significant limitation on RAM options cannot be overlooked. The integration of RAM directly onto the CPU package, capped at 32GB, poses a substantial challenge for power users and those with evolving computing needs. As software demands continue to grow, the inability to upgrade beyond 32GB could lead to dissatisfaction among users who require higher memory capacities for resource-intensive tasks.
Therefore, despite the promising features of Lunar Lake, the restricted RAM configurations may ultimately influence purchasing decisions and the long-term viability of these laptops for a broader range of consumers.
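To make the virtual-machine arithmetic concrete, here is a minimal sketch (my own illustration, not something from the episode) of handing half of a 32GB laptop's memory to a guest with QEMU/KVM; the disk image name is a placeholder.

# Hypothetical example: give a guest 16GB of a 32GB laptop's RAM,
# leaving roughly 16GB for the host OS and everything else.
# -enable-kvm uses hardware virtualization, -m sets guest memory, -smp sets virtual CPU cores.
qemu-system-x86_64 -enable-kvm -m 16G -smp 4 \
  -drive file=guest.qcow2,format=qcow2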
On this week's episode of The MacRumors Show, we discuss Apple's recently announced "It's Glowtime" event for September 9 and what we're expecting from this year's M4 Mac models. Apple this week sent out invitations for the "It's Glowtime" event that it is set to host on Monday, September 9. It is highly likely to unveil the iPhone 16 and iPhone 16 Pro, Apple Watch Series 10, Apple Watch Ultra 3, Apple Watch SE 3, and AirPods 4. We take a look at the event invite that clearly hints at Apple Intelligence's prominent presence at the event and weigh up what sort of role it could play in each of the devices that are set to be revealed. We also look at the upcoming M4 Mac models expected to launch this year: an entry-level 14-inch MacBook Pro with the M4 chip, new 14- and 16-inch MacBook Pro models with the M4 Pro and M4 Max chips, a new iMac with the M4 chip, and a completely redesigned Mac mini with the M4 and M4 Pro chips. With the exception of the new Mac mini, these devices are expected to be minor refreshes that focus on chip upgrades, but they could come with 16GB of memory as standard across the board for the first time. Some of these machines are now believed to be in mass production, so while they probably won't be announced at Apple's "Glowtime" event, launch is likely to take place soon.
On this week's episode, I cover the possibility of security changes to the Windows OS being announced soon, dive into fixes for some of the previously reported issues caused by Windows Updates, and much more! Reference Links: https://www.rorymon.com/blog/sonicwall-vulnerability-disclosed-macs-move-to-16gb-standard-new-crowdstrike-incident/
Timestamps: 0:00 Sometimes there's a RAM 0:13 M4 MacBooks with 16GB base RAM? 1:49 Telegram founder arrested 3:09 Win11 hotpatching, Control Panel lives 5:20 QUICK BITS INTRO 5:30 AI bodycams write police reports 6:13 Former Intel staff form RISC-V startup 6:52 Amazon 'joint employer' of drivers 7:35 DeepCool posing as "Shaking Tank" 8:18 SpaceX will save Starliner astronauts News Sources: https://lmg.gg/DExdh Learn more about your ad choices. Visit megaphone.fm/adchoices
Listen to a recap of the top stories of the day from 9to5Mac. 9to5Mac Daily is available on iTunes and Apple's Podcasts app, Stitcher, TuneIn, Google Play, or through our dedicated RSS feed for Overcast and other podcast players. Sponsored by Ulysses: The ultimate writing app for iPhone, iPad, and Mac. Learn more and get started today for free. New episodes of 9to5Mac Daily are recorded every weekday. Subscribe to our podcast in Apple Podcasts or your favorite podcast player to guarantee new episodes are delivered as soon as they're available. Stories discussed in this episode: M4 Macs might start with 16GB of RAM for the first time; Cast negotiations underway for Ted Lasso season 4, production expected to start early next year; Apple officially announces iPhone 16 event. Listen & Subscribe: Apple Podcasts, Overcast, RSS, Spotify, TuneIn, Google Podcasts. Subscribe to support Chance directly with 9to5Mac Daily Plus and unlock ad-free versions of every episode and bonus content. Catch up on 9to5Mac Daily episodes! Don't miss out on our other daily podcasts: Quick Charge and 9to5Toys Daily. Share your thoughts! Drop us a line at happyhour@9to5mac.com. You can also rate us in Apple Podcasts or recommend us in Overcast to help more people discover the show.
Hi, Al, the co-host of TuxJam here. I share some personal insights and then proceed to discuss the following episodes. hpr4136 - Pi Samba Share Here is the command I utilize to set up Samba as a service, eliminating the need for automatic console login: sudo apt install samba samba-common-bin Run the following command to restart the service after you have edited the Samba config file: sudo systemctl restart smbd hpr4148 - Cheap Computers Upon listening to this episode, I acquired a ThinkCentre M700 Tiny from eBay which came barebones, equipped it with an i5-6400T (also from eBay) and 16GB of RAM from CEX in the UK. I share my experience of running it with KDE Neon with Plasma 6.1, and afterwards we tested Ultramarine. Next, I discuss my media server, which is an HP laptop powered by an 8th generation Intel® Core™ i7-8650U processor, equipped with 16GB of RAM, and running Jellyfin. Finally, I discuss my plans to acquire a ThinkCentre M720q, which can support up to two drives, including a 2.5" HDD/SSD and an M.2 SSD. I also share my intentions to install Proxmox on this system.
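For anyone following along at home, a minimal sketch of the kind of share definition such a setup might use is below; the share name, path, and user are placeholders I have assumed, not details from the episode.

# Hypothetical share appended to /etc/samba/smb.conf (name, path and user are placeholders)
sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'
[pishare]
   path = /home/pi/share
   read only = no
   browseable = yes
   valid users = pi
EOF
sudo smbpasswd -a pi         # give the user a Samba password
sudo systemctl restart smbd  # restart the service, as mentioned above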
This talk show is made possible in part by MSI. All opinions in this video are our own. MSI has no say in the content and, just like you, is seeing the video here on the website for the first time. The weekend is itching to come knocking, and we have a feeling you're open to that. To ring it in properly, we have a fresh edition of Einde van de Week Live ready for you. Jelle, JJ & Koos are behind the desk to run through the latest game-related news with you. And a lot happened, both good and less good. It was revealed that Resident Evil 9 is in the works and who is making it. We heard that the sequel to Cyberpunk 2077 needs to become 'more American'. What does CD Projekt Red mean by that? Why does Konami want Kojima to make new Metal Gear Solid games, and is that wish even feasible? The three gentlemen cover all these questions and more in the Einde van de Week Live of Friday, July 5, 2024. We also discuss the reports that production of the Horizon Zero Dawn series on Netflix has been halted. Why did that happen, and is that the end of the series? And the first older Activision/Blizzard games are finally coming to Game Pass. Is there anything good among them, and is it enough for now? You'll see and hear it all in this video. There it is again, the Cyborg 15, once again with a nice deal on this laptop. A laptop that can serve as a perfect entry-level model with a 12th-generation i5 processor, an RTX 4050 graphics card with DLSS 3 for extra frames, 16GB of RAM, a 512GB SSD and a 144Hz Full HD display. You can grab this Cyborg 15 here for 849 euros. Resident Evil 9 is on its way. Grab the Cyborg 15 gaming laptop for less than 850 euros.
The Motorola Moto G84 5G is an Android smartphone with a 6.55-inch pOLED display with a 120Hz refresh rate. The Moto G84 features a Snapdragon 695 processor; as for memory configurations, there are variants with 8 and 12GB of RAM and 128 or 256GB of internal storage. On the camera side, the Moto G84 5G has a dual rear setup with a 50MP main sensor with OIS optical stabilization, and its selfie camera is 16MP. Rounding out the Moto G84's specs are a 5000 mAh battery with fast-charging support, stereo speakers with Dolby Atmos sound, IP54 water resistance and Android 13, already updated to Android 14. SUPPORT US ON PAYPAL https://www.paypal.me/arielmcorg SUPPORT US ON PATREON https://www.patreon.com/radiogeek SUPPORT US ON CAFECITO https://cafecito.app/arielmcorg You can follow me on Twitter @arielmcorg (www.twitter.com/arielmcorg) Also on Instagram @arielmcorg (www.instagram.com/arielmcorg) Join the Telegram channel #Radiogeekpodcast (http://telegram.me/Radiogeekpodcast) Join the WhatsApp channel #Radiogeek (https://whatsapp.com/channel/0029VaFdW0DGZNCwhP5rVd17)
Reviewing the Radeon RX 7600 XT is like returning to the original Radeon RX 7600 I covered last year, to the point that most of what I said back then still applies here too. The RX 7600 was a great entry-level 1080p GPU that ticked the essential boxes for most 1080p / 1440p needs but struggled when it came to maxing out graphically intense games.
We have RTX 4070 SUPER Benchmarks, Intel i9 Leaks, and more to discuss!!! [SPON: Use "brokensilicon“ at CDKeyOffer $16 Win10: https://www.cdkeyoffer.com/cko/Moore10 ] [SPON: Get 10% off Tasty Vite Ramen with code BROKENSILICON: https://bit.ly/3wKx6v1 ] 0:00 Minnesota vs Tennessee Winters (Intro Banter) 6:17 PS3 MLAA, Intel Foundry Services, AMD Laptop Support (Corrections) 15:30 RTX 4070 SUPER Analysis 26:13 RTX 4080 SUPER & RTX 4070 Ti SUPER Announced 42:22 How Nvidia plans to push 4070 Ti Sales after SUPER "launches"... 45:34 RX 7600 XT Releases next to BAD Mobile RADEON Sales 58:56 How can AMD afford to give a $329 GPU 16GB of VRAM? 1:00:11 Ryzen 7 8700G Announced, Hawk Point gets Rapid Adoption 1:11:40 AMD Hawk Point Benchmarks vs Meteor Lake Claims 1:18:13 Lunar Lake & Arrow Lake Details (kinda) Announced 1:25:06 Did Intel Arrow Lake once have Hyper-Threading? 1:31:52 Intel i9-14900KS Drama Leak, APO comes to 13th & 12th Gen 1:38:21 4090D, 7800M, 3050 6GB, MSI Claw, ARM Windows Exclusivity (Wrap-Up) 1:49:26 AMD mandating OCuLink, Devs Thoughts on FSR, Vite Vitality (Final RM) https://videocardz.com/newz/nvidia-rtx-4070-super-ad104-gpu-features-48mb-of-l2-cache-not-36mb-as-claimed-earlier https://www.techspot.com/review/1865-geforce-rtx-super/ https://videocardz.com/newz/custom-geforce-rtx-4070-super-cards-appear-at-retailers-for-up-to-650 https://youtu.be/gA-eKbi1QWU?si=x71xqQuaxlJ5dSGy https://www.nvidia.com/en-us/geforce/news/geforce-rtx-4080-4070-ti-4070-super-gpu/ https://www.computerbase.de/2024-01/gaming-notebooks-fuer-amd-radeon-rx-7000m-war-die-ces-ein-desaster/ https://www.anandtech.com/show/21215/amd-adds-radeon-rx-7600-xt-to-product-stack-16gb-1080p-gaming-card-for-329 https://videocardz.com/newz/ayaneo-and-gpd-launch-first-handhelds-with-ryzen-7-8840u-processor https://videocardz.com/newz/gpd-to-update-all-handheld-products-with-amd-ryzen-7-8840u-apu https://www.tomshardware.com/pc-components/cpus/amd-launches-ryzen-8000g-phoenix-apus-brings-ai-to-the-desktop-pc-reveals-zen-4c-clocks-for-the-first-time https://videocardz.com/newz/intel-shows-off-lunar-lake-with-memory-on-package-reaffirms-its-2024-plans-for-lunar-arrow-lake https://twitter.com/OneRaichu/status/1744537140451844344 https://www.pcgamer.com/intel-to-roll-out-14th-gens-game-optimization-software-to-older-1213th-gen-hybrid-cpus-after-all/ https://videocardz.com/newz/alleged-intel-core-i9-14900ks-6-2-ghz-cpu-has-been-pictured https://twitter.com/9550pro/status/1742151746598944892 https://www.youtube.com/watch?v=BGZMOK9l2Dc&ab_channel=KitGuruTech https://videocardz.com/newz/nvidia-geforce-rtx-4090d-is-6-slower-than-rtx-4090-in-first-test-oc-support-limited https://videocardz.com/newz/shipping-manifests-reveal-amd-cuarzo-gpus-as-navi-3x-series-hint-at-navi-32-mobile-rx-7800m https://videocardz.com/newz/nvidia-geforce-rtx-3050-6gb-to-feature-2304-cuda-cores-and-70w-tdp https://videocardz.com/newz/msi-claw-gaming-handheld-leaked-features-intel-core-ultra-7-155h-with-arc-graphics-and-32gb-memory https://www.youtube.com/watch?v=S1R08Qx6Fvs&ab_channel=Windows https://videocardz.com/newz/amd-enables-fluid-motion-frames-afmf-for-integrated-radeon-700m-series-through-preview-driver https://www.tomshardware.com/pc-components/cpus/windows-on-arm-may-be-a-thing-of-the-past-soon-arm-ceo-confirms-qualcomms-exclusivity-agreement-with-microsoft-expires-this-year#:~:text=The%20exact%20date%20the%20exclusivity,coming%20from%20AMD%20and%20Nvidia 
https://www.bleepingcomputer.com/news/security/framework-discloses-data-breach-after-accountant-gets-phished/ https://www.youtube.com/watch?v=eONWY3kbZc0&ab_channel=DigitalTrends https://www.youtube.com/watch?v=S1R08Qx6Fvs&ab_channel=Windows https://www.howtogeek.com/what-is-oculink/ https://www.amd.com/en/product/14066
A Game Developer joins to discuss PlayStation 5 Pro, XBOX, and CES Announcements. [SPON: Download Royal Match to Support MLID: https://strms.net/royal_match_mooreslawisdead ] [SPON: Get 10% off Tasty Vite Ramen with code BROKENSILICON: https://bit.ly/3wKx6v1 ] [SPON: Use "brokensilicon“ at CDKeyOffer for $16 Win10: https://www.cdkeyoffer.com/cko/Moore10 ] 0:00 Introducing Bryan 5:48 Intel's CES Keynote and 2024 Laptop CPU Competition 25:45 Lunar Lake AI, Odd Arrow Lake & Battlemage Messaging 30:29 Nvidia & AMD Keynotes and AI Messaging 35:10 How a weaker Intel could solve Nvidia's CPU Problem 37:28 Ryzen 7 8700G – Should gamers care about the NPU? 44:20 AMD Strix iGPU Performance - Is it a big deal for Devs? 49:53 AMD's 2 Year Plan to Rehabilitate RADEON's Brand 54:13 AMD RDNA 4 - Performance Expectations & Ideal Lineup 1:09:59 PlayStation 5 Pro Specifications Analysis 1:30:43 XSS Longevity and the Future of XBOX 1:45:54 RX 7600 XT 16GB – How long will 16GB last? 1:59:50 FSR 3, DLSS, Generative AI 2:13:12 Managing a Future Dominated by AI Previous Bryan Episode: https://youtu.be/NDEka3tBE1g Bryan's Twitter Account: https://twitter.com/bryanheemskerk Bryan's Next Game: https://store.steampowered.com/app/2469200/Fera_The_Sundered_Tribes/ Intel's “Keynote”: https://www.intel.com/content/www/us/en/events/ces.html Intel Keynote Summary: https://www.anandtech.com/show/21225/the-intel-ces-2024-ai-everywhere-keynote-live-blog-starts-at-5pm-pt0100-utc Intel Open House Replay: https://youtu.be/PD9xBaQhaA4?si=Bks38VGogBWwiFWJ AMD Advancing AI PCs: https://youtu.be/LlTpLD0whIo?si=HGpzL-nVrPPAKA1h Nvidia CES Special Address: https://www.youtube.com/live/-JbSg2UnK2k?si=4sXbNIiYNK7OxWn9 MLID PS5 Pro Rumor Analysis: https://youtu.be/aowqsIKcYPc https://www.resetera.com/threads/tom-henderson-ps5-pro-specs-and-release-window-details-codenamed-trinity-30wgps-18000mts-memory-speed-november-2024-target.744703/page-63?post=116078280#post-116078280 https://twitter.com/aschilling/status/1744534376593953178 https://videocardz.com/newz/intel-shows-off-lunar-lake-with-memory-on-package-reaffirms-its-2024-plans-for-lunar-arrow-lake https://videocardz.com/newz/intel-lunar-lake-mx-leak-44-cpu-cores-8-xe2-gpu-cores-tsmc-n3b-node-and-displayport-2-1-support https://youtu.be/BGZMOK9l2Dc?si=l_abj0T_A3sqxewk https://videocardz.com/newz/nvidia-geforce-rtx-40-super-review-and-sales-embargo-information-leaks-out https://www.tomshardware.com/pc-components/gpus/nvidia-rtx-40-series-super-models-revealed-4070-super-coming-jan-17-at-dollar599 https://www.tomshardware.com/pc-components/gpus/nvidia-rtx-40-series-super-models-revealed-4070-super-coming-jan-17-at-dollar599 https://www.youtube.com/watch?v=JS5N41F4fZQ&t=26079s https://www.resetera.com/threads/insomniac-comparing-aaa-games-to-so-called-mid-sized-games.797607/ https://www.tomshardware.com/reviews/amd-ryzen-7-pro-4750g-renoir-review/5
Reviews for the M3 MacBook Pro and 24-inch iMac are out, plus Final Cut Pro gets new features on Mac and iPad, iOS 17.1.1 brings bug fixes, and Humane officially announces the AI Pin. Contact our hosts: @stephenrobles on Threads; @stephenrobles on Twitter; Stephen on Mastodon; @williamgallagher_ on Threads; @WGallagher on Twitter; William on Mastodon. Sponsored by Zocdoc: Go to zocdoc.com/appleinsider and download the app to sign up for FREE. Find doctors and specialists that take your insurance and even book appointments online! Links from the show: Mactracker on the App Store; M3 MacBook Pro review roundup; iMac 24-inch M3 review: Performance, specs, price; There is no Apple Silicon iMac 27-inch coming; Apple insists 8GB unified memory equals 16GB regular RAM; Apple updates HomePod software to version 17.1.1; Apple releases iOS 17.1.1 update to fix iPhone issues; Final Cut Pro November update brings improved navigation; Apple updates Logic Pro, GarageBand and MainStage; Scans reveal the flimsy, cheap components in fake AirPods; Humane Pin orders begin 11.16; Apple Watch Ultra saves unconscious diabetic's life; Apple shares trailer for Hannah Waddingham holiday special. Support the show: Support the show on Patreon or Apple Podcasts to get ad-free episodes every week, access to our private Discord channel, and early release of the show! We would also appreciate a 5-star rating and review in Apple Podcasts. More AppleInsider podcasts: Tune in to our HomeKit Insider podcast covering the latest news, products, apps and everything HomeKit related. Subscribe in Apple Podcasts, Overcast, or just search for HomeKit Insider wherever you get your podcasts. Subscribe and listen to our AppleInsider Daily podcast for the latest Apple news Monday through Friday. You can find it on Apple Podcasts, Overcast, or anywhere you listen to podcasts. Podcast artwork from Basic Apple Guy. Download the free wallpaper pack here. Those interested in sponsoring the show can reach out to us at: steve@appleinsider.com (00:00) - Intro (01:33) - M3 MacBook Pro Reviews (09:58) - M3 iMac Reviews (11:08) - Scary Fast Final Thoughts (15:05) - 8GB = 16GB (20:11) - Sponsor: Zocdoc (22:14) - iOS 17.1.1 (25:27) - Mandela Effect (29:56) - Final Cut Updates (31:44) - AirPod CT Scan (33:23) - Humane Pin (38:08) - Stephen's Fast Internet (46:59) - Watch Saves Lives (49:32) - Holiday Special ★ Support this podcast on Patreon ★