SPOILER WARNING! When you're lost in the darkness… tune in to What the Hype?! Today's episode begins our coverage of MAX's “The Last of Us,” which has its Season 2 premiere on Sunday, April 13. In this episode, the hosts recap the events of Season 1, as well as the video game on which it's based. Excitingly, hosts Laura and Eric are joined by Chelsea Hopkins, and the whole panel has a lot to say. Chelsea and Eric previously covered Season 1 of the HBO show on their special event podcast, “Thank You for Spore-ing”! Videos of that podcast are available on YouTube (https://www.youtube.com/playlist?list=PLrDCenqGSDK9aFvG8L6Z65OR9CSkdTMBp) or wherever podcasts are found, including the show home page: https://thankyouforspieling.libsyn.com/ After saying what The Last of Us means to them and revisiting the episodes of Season 1, the hosts also look briefly ahead, in a non-spoiler way, to the forthcoming season and attempt to predict what we can expect from the show. Next week's episode will be ALL spoilers for The Last of Us Part II, looking ahead to the part of that story Season 2 will adapt! Thanks for listening. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this next installment of our How To Do Tantrik Ritual series, we look at three very important moments in Tantrik ritual worship (namely, bhūta-śuddhi, nyāsa and gandhādi pūjā) that all serve to reinforce the idea of Advaita, non-duality, in the context of worship. Over and over we affirm that the One being worshipped, the One worshipping, and the very tools and items of the worship themselves are all One and the Same Reality: Consciousness-Bliss, Shiva. As such, we discuss how pūjā (ritual worship) and japa (mantra recitation) can act as powerful sādhanas (spiritual techniques) for truly realizing non-duality beyond mere intellectual concepts. We also talk a little about the importance of the sacred pilgrimage places known as śākta pīthas, and how these might be mapped onto the body in the process of nyāsa. Excitingly, we compare the rather similar mythological structures of the "Dismemberment of Osiris" episode in the Egyptian pantheon and the "Dismemberment of Sati" incident in the Purānas to make a point about sacred geography and the Immanence of God, following from last week's discussion on Nyāsa as a Sacred Pilgrimage Practice! A truly exciting discussion! Thank you all for coming! Jai Mā! Support the show
**Disclaimer: Due to the LIVE nature of the show with a guest, the description you read below may or may not be the show you get. Instead, you'll get Husker memories and stories of Nebraska memorabilia. We think you'll enjoy it anyway.** Join us for a positive and engaging live stream on Monday Night Therapy from Corn Nation! Hosted by Minnie Hunt and featuring longtime Nebraska football fan James Boardman, we'll explore the bright future of Nebraska football under head coach Matt Rhule. In this episode, we'll discuss the past coaching failures that have impacted the program and how the dedicated fanbase has continued to show up through the challenges. Excitingly, we'll highlight the highly anticipated recruits who are now on campus for spring mat drills, setting the stage for a dynamic transformation. We'll discuss Matt Rhule's impressive coaching history and his remarkable ability to revitalize programs, showcasing his successful stints at Temple and Baylor. Rhule's innovative offensive and defensive strategies and focus on player development are key to building a winning culture. With the Cornhuskers aiming for success in the competitive Big Ten conference, we'll share fan perspectives and the excitement surrounding Rhule's vision. Don't miss this chance to engage with fellow fans and join in on all the fun in our live chat. Let's come together and be part of the optimistic journey ahead for Nebraska football! Tune in and share your fan experience with us! ⚡️
Welcome to the March episode of "Me, Myself & Darlene," where we're checking back in with your questions from my 8-year Trimiversary Celebration. A recurring theme emerged from your inquiries: "How do you stay the course?" and "What keeps you consistent?" It shows why mindset is not just a part of the journey but the backbone of it. Let's talk about it! And I added a sprinkling of tips that I hope encourage and inspire you today. Enjoy! Excitingly, today marks the opening of Spring Buds '25, where we've integrated comprehensive mindset strategies right from the start to ensure you're primed for success. Check out my Spring Buds '25 Program and discover how we're preparing you to thrive in your health journey, available now on my website: https://southernandhealthycoaching.com/spring-buds-25 (Season 5, Episode 4) Enjoying these podcast episodes? Support My Work with a Tip! If you love the content and value the tips, recipes, and advice I share, you can show your appreciation with a tip in my jar. Your support helps me continue creating content that inspires and empowers you on your journey. Thank you for being a part of my community!
"Parenting is not just a role; it's a profound influence that shapes the heart and soul of a home. We promise you'll gain invaluable insights from JR Miller's "Homemaking" as we discuss parenting's pivotal role in nurturing a loving and Christ-like environment. We challenge the cultural norm of outsourcing parental duties, emphasizing the irreplaceable value of being present and intentional in our children's growth and education. By prioritizing character development over merely enforcing good behavior, we explore how parents can truly transform their homes into sanctuaries of growth and love.Transitioning from living for oneself to dedicating life to raising children can be an emotional journey filled with silent struggles and unspoken triumphs. We candidly address the tension between nurturing children and maintaining one's identity, acknowledging the shame and sadness that sometimes accompanies this shift. Through personal reflections and cultural insights, we aim to lift the veil on these struggles, sharing a poignant image of unseen support in the form of angels completing a weary parent's work. Accepting our imperfections while striving to do our best is a recurring theme, offering comfort and understanding to parents navigating this complex journey.A nurturing home environment goes beyond aesthetics, involving active engagement and shared experiences that build treasured memories. We highlight the significance of fathers actively participating in their children's lives and the power of simple courtesies and affection in family interactions. The dynamics between parents significantly influence children's behavior, and we advocate for an atmosphere of love and cultural enrichment within the home. Excitingly, we ponder the idea of involving children in future segments, especially with Wendell's anticipated enthusiasm, leaving us eager for the joyful possibilities this could bring to our discussions."Find the book Home-Making by J.R. Miller here.Find us Elsewhere:Instagram - @_ACommonLife - MorganCommunity Newsletter - The CommonTaylor on SubstackMorgan on SubstackDM us on the Socials or email us at Taylor@acommonlife.co or Morgan@acommonlife.coMusic on the podcast was composed by Kevin Dailey. The artist is Garden Friend. The track is the instrumental version of “On a Cloud”
To start off this exciting new season of "Maroon and Bold," sports editor Sydney Neal is joined by basketball beat reporter Noah Henson to talk about CMU's men's basketball MAC play matchups, along with the 2025 toilet paper toss. Excitingly, in this episode you'll hear interviews with head coach Tony Barbee, along with the brother duo of Quentin and Jakobi Heady, after their big matchup against Western Michigan.
This week, we're re-airing one of our favourite episodes featuring Felix Böck, Founder & CEO of ChopValue, a certified B Corp based in Vancouver, Canada, that creates high-performance circular-economy designs made entirely from recycled bamboo chopsticks. Hailing from southern Germany, Felix came to Canada to complete a PhD at the University of British Columbia on composite-materials innovation, with bamboo as the main natural fibre resource. Since its founding, ChopValue has recycled and transformed over fifty million chopsticks, diverting them from landfills, and now operates microfactories globally with partners like Vancouver Airport and Cadillac Fairview. Excitingly, since this episode first aired, ChopValue has announced a $15 million investment to launch its Microfactory Venture Platform, aiming to scale its unique microfactory model and further its impact worldwide. This milestone represents a significant step in the company's mission to combine sustainability with innovation, proving that waste can truly be turned into value. Follow us on Instagram: @someonelikeyoupodcast
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (as we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for these models: “efficient models”, “retentive networks”, “subquadratic attention” or “linear attention”, but some of them don't even have any lineage with attention. One of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture. So, for lack of a better term, we decided to call this segment “the State of Post-Transformers,” and fortunately everyone rolled with it.

We are fortunate to have two powerful friends of the pod to give us an update here:
* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote-unquote full stack AI startup, from the lowest-level kernel and systems programming to the highest-level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year
* Recursal AI: with CEO Eugene Cheah, who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers.

We had been looking to host a debate between our speakers, but both of them were working on post-transformer alternatives.

Full Talk on YouTube
Please like and subscribe!

Links
All the models and papers they picked:
* Earlier Cited Work
* Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
* Hungry Hungry Hippos: Towards language modeling with state space models
* Hyena Hierarchy: Towards larger convolutional language models
* Mamba: Linear-Time Sequence Modeling with Selective State Spaces
* S4: Efficiently Modeling Long Sequences with Structured State Spaces
* Just Read Twice (Arora et al)
* Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference.
* However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty.
* To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context.
* Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.
* Jamba: A 52B Hybrid Transformer-Mamba Language Model
* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture.
* Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable.
* This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.
* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length.
* We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.
* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers
* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU.
Core designs include:
* (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens.
* (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality.
* (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment.
* (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence.
* As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost.
* RWKV: Reinventing RNNs for the Transformer Era
* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability.
* We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.
* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference.
* We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.
* LoLCATs: On Low-Rank Linearizing of Large Language Models
* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs.
* We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute.
* We base these steps on two findings.
* First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").
* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA).
* LoLCATs significantly improves linearizing quality, training efficiency, and scalability.
* We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU.
* Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens.
* Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work).
* When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.

Timestamps
* [00:02:27] Intros
* [00:03:16] Why Scale Context Lengths? or work on Efficient Models
* [00:06:07] The Story of SSMs
* [00:09:33] Idea 1: Approximation -> Principled Modeling
* [00:12:14] Idea 3: Selection
* [00:15:07] Just Read Twice
* [00:16:51] Idea 4: Test Time Compute
* [00:17:32] Idea 2: Hardware & Kernel Support
* [00:19:49] RWKV vs SSMs
* [00:24:24] RWKV Arch
* [00:26:15] QWRKWv6 launch
* [00:30:00] What's next
* [00:33:21] Hot Takes - does anyone really need long context?

Transcript
[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024, going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks! Our next keynote covers the state of Transformer-alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Cheah of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Vipul Ved Prakash introducing them.[00:00:49] AI Charlie: And CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents,[00:01:15] AI Charlie: BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch.[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model [00:02:00] modified with RWKV linear attention layers. Eugene has also written the single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.[00:02:27] Intros[00:02:27] Dan Fu: Yeah, so thanks so much for having us.
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean Having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI 01 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit Interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little Dolly 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, But for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. 
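To make the quadratic-attention point above concrete, here is a minimal PyTorch sketch (ours, not from the talk): the n × n score matrix is what makes compute and memory grow quadratically with context length n.

```python
import torch

def softmax_attention(q, k, v):
    # q, k, v: (n, d) for one head. The scores matrix is (n, n), so compute
    # and memory grow quadratically with the sequence length n.
    scores = (q @ k.T) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

n, d = 1024, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
out = softmax_attention(q, k, v)  # (n, d); doubling n quadruples the score matrix
```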
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
Actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators that not only were you sure, you're not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called states based model. So here the seminal work is, is one about our work queue in 2022.[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.[00:09:33] Idea 1: Approximation -> Principled Modeling[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example a transformer like Next Token Prediction Architecture. So some of those early states-based model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principle theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality and. When this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like long range arena, some long sequence evaluation benchmarks, There was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that What's so influential about these states based models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't paralyze as well as detention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. 
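A minimal sketch of the linear-attention reordering described above (ours, hedged): drop the softmax, apply a feature map, and use associativity so K and V are combined first into a small d × d matrix, so nothing of size n × n is ever materialized. The elu + 1 feature map follows the "Transformers are RNNs" paper in the links; the non-causal form is shown for brevity.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # Feature map phi(x) = elu(x) + 1, as in "Transformers are RNNs" (Katharopoulos et al.).
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Associativity: (q @ k.T) @ v == q @ (k.T @ v). The right-hand grouping only
    # ever builds a (d, d) matrix, so cost is linear in sequence length n.
    kv = k.T @ v                               # (d, d)
    z = q @ k.sum(dim=0, keepdim=True).T       # (n, 1) normalizer
    return (q @ kv) / (z + eps)                # non-causal form for brevity; a causal
                                               # version keeps a running kv state instead.

n, d = 4096, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
out = linear_attention(q, k, v)                # (n, d); no n-by-n matrix anywhere
```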
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like flash fft conf, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, We are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because languages, It's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these hyena models were One way you can do this is by just adding some simple element wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
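As a rough illustration of the selection idea Dan describes (an input-dependent, element-wise gate around a recurrent state update), here is a toy PyTorch sketch; it is not the actual H3 or Mamba code, and Mamba additionally makes the state-transition parameters themselves data-dependent.

```python
import torch
import torch.nn as nn

class GatedRecurrentBlock(nn.Module):
    """Toy selection mechanism: an input-dependent gate over a simple recurrent scan."""

    def __init__(self, d):
        super().__init__()
        self.gate = nn.Linear(d, d)                 # data-dependent, element-wise gate
        self.inp = nn.Linear(d, d)
        self.decay = nn.Parameter(torch.full((d,), 0.9))

    def forward(self, x):                           # x: (n, d)
        h, out = torch.zeros(x.shape[-1]), []
        for t in range(x.shape[0]):                 # naive sequential scan, for clarity only
            h = self.decay * h + self.inp(x[t])     # fixed-size recurrent state
            out.append(torch.sigmoid(self.gate(x[t])) * h)  # gate picks what to pass on
        return torch.stack(out)

y = GatedRecurrentBlock(64)(torch.randn(128, 64))   # (128, 64)
# Mamba goes a step further: the state-update parameters themselves become data-dependent.
```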
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. 
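The Just Read Twice prompting trick is simple enough to show directly. Here is a sketch of the "write down the document, the question, then both again" idea; the template below is illustrative, not the paper's exact format.

```python
def jrt_prompt(document: str, question: str, repeats: int = 2) -> str:
    """Repeat context + question so a fixed-state recurrent LM sees the document
    again after it already knows what the question is asking for."""
    block = f"Document:\n{document}\n\nQuestion: {question}\n\n"
    return block * repeats + "Answer:"

print(jrt_prompt("RWKV v5 is codenamed Eagle.", "What is RWKV v5's codename?"))
```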
So, so the four key ideas are instead of just doing a simple linear attention approximation, instead take ideas that we know from other fields like signal processing, do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want. Hardware and kernel support from day one. So, so even if your model is theoretically more efficient if somebody goes and runs it and it's two times slower one of the things that, that we've learned is that if, if you're in that situation, it's, it's just gonna be dead on arrival.[00:17:49] Dan Fu: So you want to be designing your architectures one of the key, key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state and, and really focus on that as a key decider of quality. And finally, I think one of the, the, the emerging new, new things for, for this line of work and something that's quite interesting is, What are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to relative to what you might do for a standard transformer? I'll briefly end this section. So I've labeled this slide where we are yesterday because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of the, these efficient alternative models were so AI2 trained this hybrid MOE called Jamba.[00:18:40] Dan Fu: That, that, that seems, that is currently the state of the art for these non transformer architectures. There's this NVIDIA and MIT put out this new diffusion model called SANA recently that one of their key key observations is that you can take a standard diffusion transformer diffusion model, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger much larger images, much, much Much larger sequences more efficiently.[00:19:07] Dan Fu: And and one thing that I don't think anybody would have called when a few years ago is that one of those gated SSM, gated states based models ended up on the cover of Science because a great group of folks went and trained some DNA models. So that's Michael Polley, Eric Yuen from from Stanford and the Arc Institute.[00:19:26] Dan Fu: So it's, we're really at an exciting time in 2024 where these non transformer, post transformer architectures are showing promise across a wide range. Across a wide range of, of modalities, of applications, and, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. Yeah. So, I think one common questions that we tend to get asked, right, is what's the difference between [00:20:00] RWKV and state space? So I think one of the key things to really understand, right the difference between the two groups, right, is that we are actually more like an open source, random internet meets academia kind of situation.[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we, we basically look at RNNs and linear intention when intention is all you need came out, and then we decided to like, hey there is a quadratic scaling problem. Why don't we try fixing that instead? 
So, so, so we end up developing our own branch, but we end up sharing ideas back and forth.[00:20:30] Eugene Cheah: So, and, and we do all this actively in Discord, GitHub, etc. This was so bad for a few years, right, that basically, the average group's H index was so close to zero, right, Illuter. ai actually came in and helped us write our first paper. Great, now our H index is now three, apparently. So, so, so, but, but the thing is, like, a lot of these experiments led to results, and, and, essentially, essentially, we we took the same ideas from linear attention, [00:21:00] and we built on it.[00:21:01] Eugene Cheah: So, to take a step back into, like, how does RWKB handle its own attention mechanic and achieve the same goals of, like, O and compute, respectively, and in focus of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute, that's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train to even 200 languages to cover all languages in the world. But at the same time, we work on this architecture, To lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKB break the dependency of LSTM token flow? Because I think to understand architecture, right, it's probably easier to understand it from the RNN lens.[00:21:46] Eugene Cheah: Because that's where we built on. We all, we all state space kind of like try to, try to start anew and took lessons from that and say, So there's a little bit of divergence there. And AKA, this our version of linear attention. So to take step back [00:22:00] all foundation models, be it transformers or non transformers at a very high level, right?[00:22:05] Eugene Cheah: Pumps in the token. I mean, text that things into embeddings and go through a lot of layers. Generate a lot of states where the QKV cache or be iron in states or RW KB states. And outputs and embedding, they are not the same thing. And we just take more layers and more embeddings. And somehow that magically works.[00:22:23] Eugene Cheah: So, if you, if you remember your ancient RNN lessons which we, which we, which we we call best learning these days the general idea is that you have the embedding information flowing all the way up, and when, and you take that information and you flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: So, this is how it generally works. Kapati is quoted saying that RNNs are actually unreasonably effective. The problem is this is not scalable. To start doing work on the second token, you need to wait for the first token. And then you need to, and likewise for the third token and fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So, so, so, you [00:23:00] can have a H100 and you can't even use 1 percent of it. So, so that's kind of why RNNs didn't really take off in the direction that we wanted, like, billions of parameters when it comes to training. So, what did RDAP KV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for RNN. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then we were like, hey, then no one cared because the loss was crap, but how do we improve that? 
And that's essentially where we move forward, because if you see this kind of flow, right, you can actually get your GPU saturated quickly, where it essentially cascades respectively.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. So it's like, once you get your first layer, your token to be computed finish. You start to cascade your compute all the way until you are, Hey, I'm using 100 percent of the GPU. So we, we worked on it, and we started going along the principle of that as long as we keep this general architecture [00:24:00] where, where we can cascade and, and be highly efficient with our architecture, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, you ask us, if you ask me to explain some things in the paper, right, officially in the paper, I'll say we had this idea and we wrote it this way. The reality is someone came with a code, we tested it, it worked, and then we rationalized later. So, so the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: The idea behind rwkbr is that we generally have two major blocks that we do.[00:24:30] Eugene Cheah: We call time mix and channel mix. And time mix generally handles handles long term memory states, where essentially, where essentially where we apply the matrix multiplication and Cilu activation functions into processing an input embedding and an output embedding. I'm oversimplifying it because this, This calculation changed every version and we have, like, version 7 right now.[00:24:50] Eugene Cheah: ChannelMix is similar to Base in the sense that it does shorter term attention, where it just looks at the sister token, or the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers itself, because, like, we do have three papers on this.[00:25:09] Eugene Cheah: Basically, RWKB, RNN for the transformer, ERA, Ego and Pinch, RWKB, Matrix Value State. This is the updated version 5, version 6. And Goldfinch is our, is, is, is, is our hybrid model respectively. We are writing the paper already for V seven and which is, which is for R wk V seven. Called, named Goose, or architectures are named by Bird.[00:25:30] Eugene Cheah: And, I'm going to cover as well, qrwkb, and mama100k, and rwkb, and Where did that lead to? Great! Because we are all GPU poor and to be clear, like, most of this research is done, like, only on a handful H100s, which I had one Google researcher told me that was, like, his experiment budget for a single researcher.[00:25:48] Eugene Cheah: So, our entire organization has less compute than a single researcher in Google. So We, we, one of the things that we explored into was to how do we convert transformer models instead? Because [00:26:00] someone already paid that billion dollars, a million dollars onto training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And, and to, I believe, together AI worked on the lockets for, for the Lambda side of things, and, and we took some ideas from there as well, and we essentially did that for RWKB.[00:26:15] QWRKWv6 launch[00:26:15] Eugene Cheah: And that led to, Q RWKB6, which we just dropped today, a 32 bit instruct preview model, where we took the Quen 32 bit instruct model, freeze the feedforward layer, remove the QKB attention layer, and replace it with RWKB linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the rwkv channel mix layer, we only have the time mix layer. 
But but once we do that, we train the rwkv layer. Important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer, and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly, And, to be honest, to the frustration of the R. W. [00:27:00] KV MOE team, which ended up releasing the model on the same day, was that, with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original QUAN32B model. So, in fact, when the first run, right, that completely confused us, it was like, and I was telling Daniel Goldstein, Smirky, who kind of leads most of our research coordination, When you pitched me this idea, you told me at best you'll get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the challenge and score and Winograd score will shoot up. I don't know what's happening there. But it did. MMLU score dropping, that was expected. Because if you think about it, when we were training all the layers, right, we were essentially Like, Frankenstein this thing, and we did brain damage to the feedforward network layer 2 with the new RWKB layers.[00:27:47] Eugene Cheah: But, 76%, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because We are already now in the process of converting to 7TB. We are now, this is actually extremely compute efficient to test our attention mechanic.[00:28:10] Eugene Cheah: It's like, it becomes a shortcut. We can, we are already planning to do our version 7 and our hybrid architecture for it. Because we don't need to train from scratch. And we get a really good model out of it. And the other thing that is uncomfortable to say is that because we are doing right now on the 70b is that if this scales correctly to 128k context length, I'm not even talking about a million 128, majority of enterprise workload today is just on 70b at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmark matches it, It means we can replace the vast majority of current AI workload, unless you want super long context. And then sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially, [00:29:00] we are excited about this to just push it further.[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think it's going to be exclusive to RWKB. It probably will work for Mamba as well, I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids, or Yeah, like, one of the weirdest things that I wanted to say outright, and I confirmed this with the Black Mamba team and the Jamba team, which because we did the GoFinch hybrid model, is that none of us understand why a hard hybrid with a state based model to be R.[00:29:28] Eugene Cheah: QA state space and transformer performs better when, than the baseline of both. It's like, it's like when you train one, you expect, and then you replace, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both. 
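A hedged sketch of the conversion recipe Eugene just described: swap the attention blocks for linear-attention (RWKV-style time-mix) layers, freeze the feed-forward layers while the new attention learns to stand in for the old one, then unfreeze everything for a short joint fine-tune. The module layout and the toy LinearTimeMix below are placeholders, not the real Qwen or RWKV code.

```python
import torch
import torch.nn as nn

class LinearTimeMix(nn.Module):
    """Toy linear-attention stand-in for the RWKV time-mix layer that replaces QKV attention."""

    def __init__(self, d):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)

    def forward(self, x):                                   # x: (batch, n, d)
        q, k, v = torch.relu(self.q(x)) + 1, torch.relu(self.k(x)) + 1, self.v(x)
        return q @ (k.transpose(1, 2) @ v) / x.shape[1]     # linear in sequence length

class TinyBlock(nn.Module):
    """Stand-in for one pretrained transformer layer: attention + feed-forward."""

    def __init__(self, d):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

def convert(blocks, stage):
    for b in blocks:
        if not isinstance(b.attn, LinearTimeMix):
            b.attn = LinearTimeMix(b.mlp[0].in_features)    # swap attention, keep the FFN weights
        for name, p in b.named_parameters():
            # Stage 1 ("attention_transfer"): train only the new attention against the frozen FFN.
            # Stage 2 ("joint"): unfreeze and train all layers together (custom LR schedule in practice).
            p.requires_grad = (stage == "joint") or name.startswith("attn")
    return blocks

blocks = nn.ModuleList(TinyBlock(64) for _ in range(2))
convert(blocks, stage="attention_transfer")
```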
And that's like one area of emulation that, like, we only have four experiments, plus four teams, that a lot more needs to be done.[00:29:51] Eugene Cheah: But, but these are things that excite me, essentially, because that is what it's potentially we can move ahead for. Which brings us to what comes next.[00:30:00] What's next[00:30:00] [00:30:00][00:30:00] Dan Fu: So, this part is kind of just some, where we'll talk a little bit about stuff that, that we're excited about. Maybe have some wild speculation on, on what, what's, what's coming next.[00:30:12] Dan Fu: And, of course this is also the part that will be more open to questions. So, a couple things that, that I'm excited about is continued hardware model co design for, for these models. So one of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.[00:30:29] Dan Fu: And one of the things that, that we found frustrating is every time that we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land, like writing these, these new efficient things. And. If we decided to change one thing in PyTorch, like one line of PyTorch code is like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with, with a library like Thunderkitten, so we, we just broke down what are the key principles, what are the key hardware things what are the key, Compute pieces that you get from the hardware. So for example on [00:31:00] H100 everything is really revolves around a warp group matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix, matrix multiply operations. So like multiplying two 64 by 64 matrices, for example. And so if you know that ahead of time when you're designing your model, that probably gives you you know, some information about how you set the state sizes, how you set the update, how you set the update function.[00:31:27] Dan Fu: So with Thunderkittens we basically built a whole library just around this basic idea that all your basic compute primitives should not be a float, but it should be a matrix, and everything should just be matrix compute. And we've been using that to, to try to both re implement some existing architectures, and also start to design code.[00:31:44] Dan Fu: Some new ones that are really designed with this core with a tensor core primitive in mind. Another thing that that we're, that at least I'm excited about is we, over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter there's been a bunch of new next generation models that are coming out.[00:32:04] Dan Fu: So there, there are. So, video generation models that can run real time, that are supported by your mouse and your keyboard, that I'm told if you play with them that, you know, that they only have a few seconds of memory. Can we take that model, can we give it a very long context length so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe, maybe use some of these new models, or some of these new video generation models that came out. So Sora came out I don't know, two days ago now. 
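Circling back to the ThunderKittens point above, here is a plain-PyTorch illustration (not the real CUDA library or its API) of the design principle that the basic unit of compute should be a small matrix tile, such as a 64 × 64 matmul, rather than a scalar.

```python
import torch

TILE = 64  # tensor-core-friendly tile size discussed in the talk

def tiled_matmul(a, b, tile=TILE):
    """Compute a @ b by accumulating (tile x tile) sub-matmuls: 'everything is a small
    matrix multiply'. Illustrates the principle only, not the real kernel."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2 and n % tile == 0 and k % tile == 0 and m % tile == 0
    out = torch.zeros(n, m)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                out[i:i + tile, j:j + tile] += a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
    return out

a, b = torch.randn(256, 128), torch.randn(128, 192)
assert torch.allclose(tiled_matmul(a, b), a @ b, atol=1e-3)
```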
But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted it to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG relevant in the case of, like, the future of state-based models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have, I'll say I found it was a little bit challenging to do research on, because we had this experience over and over again where you could have an embedding model of any quality, so you could have a really, really bad embedding model, or you could have a really, really [00:34:00] good one, by any measure of good.[00:34:03] Dan Fu: And for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are, like, extremely excited about the idea of RWKV or state space models potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as previously covered, you need to test the model differently. So, think of it more along the lines of a human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll say. And we humans are not quadratic transformers. If we were, if, let's say, we increased our brain size for every second we lived, we would have exploded by the time we were 5 years old or something like that. And I think, basically, fundamentally for us, right, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc., our general idea is that instead of that expanding state, that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big of a limit is the question. Like, RWKV is running at 40 megabytes for its state. Its future version might run into 400 megabytes. That is, like, millions of tokens if you're talking about the mathematical maximum possibility.[00:35:29] Eugene Cheah: It's just that I guess we are all more inefficient about it, so maybe we hit 100,000. And that's kind of the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because they will choose to forget things, and choose to remember things.
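A back-of-envelope version of the state-size arithmetic being gestured at here, with assumed numbers (the 40 MB figure is from the talk; the bytes-per-token compressibility estimate is an assumption added purely to show the order of magnitude):

```python
# How many tokens could a fixed-size state hold in the best case?
state_bytes = 40 * 1024 * 1024        # ~40 MB recurrent state (figure from the talk)
info_bytes_per_token = 2              # rough, assumed compressibility of ordinary text

theoretical_tokens = state_bytes / info_bytes_per_token
print(f"theoretical capacity ~ {theoretical_tokens:,.0f} tokens")  # ~21 million

# Real models use the state far less efficiently, which is why the effective
# figure quoted above is closer to the hundred-thousand-token range.
```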
And that's why I think that there might be some element of RAG, but it may not be the same RAG.[00:35:49] Eugene Cheah: It may be that the model learns things, and it's like, hmm, I can't remember that article, let me do a database search. Just like us humans, when we can't remember an article in the company, we do a search on Notion. [00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are, so right now, one intuition about language models is that all those parameters are there just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has, like, the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is it'll just be a lot less factual about things that it knows or that it can do.[00:36:32] Dan Fu: But that points to all those weights that we're spending, all that SGD that we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe, you know, has some sort of gradient descent in it. That would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president so that it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When a 405B state space model exists, RAG exists, and no one does long context, who's throwing in 2 million token questions? Hot takes?[00:37:24] Dan Fu: The who's-throwing-in-2-million-token question, I think, is a really good question. So I actually was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't, you know, play both sides.[00:37:40] Dan Fu: But I think, for both of us, the reason that we first got into this was just from the first-principles question of: there's this quadratic thing, clearly intelligence doesn't need to be quadratic, what is going on, can we understand it better? You know, since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting a two million context prompt into these models. And, you know, if they are, maybe we can go, you know, design a better model to do that particular thing. Yeah, what do you think about that? So you've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right? Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV.
I use it a lot.[00:38:41] Eugene Cheah: So, some people have used it, and I think this might be, like, where my opinion starts to differ, because I think the big labs may have a bigger role in this. Like, even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that because we need to backprop [00:39:00] against the states, we actually need to maintain the state in between the tokens, scaled by the token length.[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training at 1 million. Which is the same for transformers, actually, but it just means we don't magically reduce the VRAM consumption at training time. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, right, I think o1-style reasoning might actually be pushing that direction downwards. In my opinion, and this is my partial hot take, let's say you have a super big model, and let's say you have a 70B model that may take double the tokens but gets the same result.[00:39:51] Eugene Cheah: Strictly speaking, a 70B, and this is true for transformer or non-transformer, right, will take less resources than that 400B [00:40:00] model, even if it did double the amount of thinking. And if that's the case, and we are all still trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and efficient as possible,[00:40:11] Eugene Cheah: with a very efficient architecture, which some folks happen to be working on, to just reason it out over larger and larger context.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a great question. So I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run it forever. You could just run it forever. And at the very least it won't, like, error out on you or crash.[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, one place where the research on architectures probably ran faster than other research is actually the benchmarks for long context. So you turn it on forever, you want to do everything or watch everything.[00:41:16] Dan Fu: What is it that you actually wanted to do? Can we actually build some benchmarks for that? Then measure what's happening, and then ask the question: can the models do it? Is there something else that they need? Yeah, I think if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long context benchmarks out at the same time as we started pushing context length on all these models.[00:41:41] Eugene Cheah: I will also say, on the use case: so, like, I think we both agree that there's no infinite memory, and the model needs to be able to learn and decide.
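A quick illustration of the VRAM point above: if naive backpropagation through time keeps one copy of the recurrent state per token, the activation memory for a 1M-token training run balloons. Both numbers below are illustrative assumptions, not measured figures:

```python
# Why long-context *training* is VRAM-hungry even with a fixed-size state:
# naive backprop-through-time stores an intermediate state per token.
state_bytes = 40 * 1024 * 1024   # ~40 MB recurrent state across all layers (assumed)
context_len = 1_000_000          # tokens in one training sequence

naive_activation_bytes = state_bytes * context_len
print(f"naive BPTT activations ~ {naive_activation_bytes / 1024**4:.0f} TiB")  # ~38 TiB

# Hence gradient checkpointing, chunked BPTT, and the on-stage request for
# donated GPUs: the practical trick is trading recompute for memory.
```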
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanism that is not based on token position is that the model doesn't suddenly become crazy when you go past the [00:42:00] 8k training context length, or a million context length.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory, some of these things are still somewhat there. That's the whole point of why reading twice works, things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers where they use this architecture for time series data.[00:42:26] Eugene Cheah: Weather modeling. So, you are not asking what was the weather five days ago. You're asking what's the weather tomorrow, based on the effectively infinite history, for as long as this Earth and the computer keep running. And they found that it is, like, better than existing transformer or existing architectures in modeling this weather data,[00:42:47] Eugene Cheah: controlled for the param size and stuff. I'm quite sure there are people with larger models. So there are things, in this case, right, where there are future applications, if your question is just what's next and not what was 10 years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe
BUFFALO, NY- November 26, 2024 – This editorial was published by Aging (listed by MEDLINE/PubMed as "Aging (Albany NY)" and "Aging-US" by Web of Science) in Volume 16, Issue 17, titled, “The silent protector: Nucleoporin93's role in vascular health.” Written by Julia Michalkiewicz, Tung D. Nguyen, and Monica Y. Lee from The University of Illinois at Chicago College of Medicine, this editorial highlights the critical role of a protein called Nucleoporin93 (Nup93) in maintaining blood vessel health as we age. The authors review new research suggesting that Nup93 could be a key target for treatments to prevent or reduce aging-related diseases, including heart disease and stroke. Cardiovascular diseases remain the leading causes of death worldwide, with aging identified as a major risk factor. Vascular health declines as endothelial cells (EC)—the protective lining of blood vessels—lose their functionality with age. This deterioration leads to inflammation, arterial stiffening, and reduced blood flow, significantly increasing the risk of life-threatening diseases. The authors underscore the urgent need to uncover the molecular mechanisms driving these changes. Nup93 plays an essential role within nuclear pore complexes (NPCs)—gateways that regulate molecular exchanges between the cell nucleus and cytoplasm. Age-related loss of Nup93 disrupts this delicate system, weakening endothelial cell function and accelerating vascular aging. Researchers identified Nup93 as a crucial protector of endothelial health, preventing harmful protein build-ups such as Yes-associated protein (Yap), a known driver of inflammation and cellular aging. Excitingly, scientists have discovered that restoring Nup93 levels in damaged endothelial cells can reverse some of these harmful effects. They also found that blocking Yap can prevent issues caused by low Nup93 levels. These findings highlight the potential for new medicines or therapies to protect blood vessels as people age. The authors propose that future treatments could involve delivering Nup93 directly to damaged blood vessels to restore their health and prevent cardiovascular diseases. They emphasize the importance of further research to uncover why Nup93 levels decrease with age and how restoring it might improve blood vessel function. “These latest discoveries provide a fresh and innovative perspective of EC biology, highlighting NPCs as major regulators of EC health that may underlie mechanisms of vascular aging and disease progression.” In conclusion, the editorial encourages scientists to focus on understanding how endothelial cells stay strong and the role of NPCs in keeping blood vessels healthy. This research could lead to important breakthroughs in slowing down aging and improving people's quality of life. DOI - https://doi.org/10.18632/aging.206097 Corresponding author - Monica Y. Lee - monicaYL@uic.edu Video short - https://www.youtube.com/watch?v=as6opv9_FYM Subscribe for free publication alerts from Aging - https://www.aging-us.com/subscribe-to-toc-alerts About Aging-US The mission of the journal is to understand the mechanisms surrounding aging and age-related diseases, including cancer as the main cause of death in the modern aged population. The journal aims to promote 1) treatment of age-related diseases by slowing down aging, 2) validation of anti-aging drugs by treating age-related diseases, and 3) prevention of cancer by inhibiting aging. (Cancer and COVID-19 are age-related diseases.)
Please visit our website at https://www.Aging-US.com and connect with us: Facebook - https://www.facebook.com/AgingUS/ X - https://twitter.com/AgingJrnl Instagram - https://www.instagram.com/agingjrnl/ YouTube - https://www.youtube.com/@AgingJournal LinkedIn - https://www.linkedin.com/company/aging/ Pinterest - https://www.pinterest.com/AgingUS/ Spotify - https://open.spotify.com/show/1X4HQQgegjReaf6Mozn6Mc MEDIA@IMPACTJOURNALS.COM
Join us in this episode of Noob School as we dive deep into the extraordinary career of Greg Bennett, a trailblazer in integrated advertising and branded entertainment. With over three decades of experience, Greg has founded and led some of the most innovative agencies in the industry, including the Luna Bacardi Group, O! Brand Entertainment & Marketing, and Iconoclast Brand Marketing & Entertainment. He has worked with Fortune 500 companies, revolutionizing marketing strategies and transforming the branded entertainment landscape. In this episode, Greg shares his invaluable insights on what it takes to succeed in sales and marketing today. He discusses his journey launching groundbreaking campaigns for iconic brands like Apple, CBS, and Disney, and offers practical sales advice that can elevate your approach in today's competitive market. Excitingly, Greg's upcoming book, "I Can Make Caffeine Nervous," is set to release next week! Tune in to hear about his mission to change the world through collaboration with his nonprofit, "The Genius Stage," and gain inspiration to push the boundaries of your own career. Whether you're a seasoned sales professional or just starting out, this episode is packed with wisdom that will empower you to drive success in your endeavors. Get your sales in rhythm with The Sterling Method: https://SterlingSales.co I'm going to be sharing my secrets on all my social channels, but if you want them all at your fingertips, start with my book, Sales for Noobs: https://amzn.to/3tiaxsL Subscribe to our newsletter today: https://bit.ly/3Ned5kL #SalesTraining #B2BSales #SalesExcellence #SalesStrategy #BusinessGrowth #SalesLeadership #SalesSuccess #SalesCoaching #SalesSkills #SalesInnovation #SalesTips #SalesPerformance #SalesTransformation #SalesTeamDevelopment #SalesMotivation #SalesEnablement #SalesGoals #SalesExpertise #SalesInsights #SalesTrends
Questions, comments? Shoot me a text. Unlock the secrets of sustainable weight loss by prioritizing protein in your diet. Discover why traditional calorie-cutting methods often fall short, and learn how setting your protein intake to 40% of your daily calories can be a transformative approach. As a board-certified holistic nutritionist and life coach, I, Amy White, will guide you through breaking free from outdated dieting principles to achieve lasting results. In this episode, we'll tackle common misconceptions about protein and illuminate how this macronutrient can help maintain muscle and strength while effectively shedding unwanted pounds. Excitingly, you're also invited to join our Holiday Weight Loss Masterclass on Friday, October 25th, at 11 am Pacific time. This free event offers practical insights into utilizing protein for continuous weight loss during festive times. Bring along your friends and family to benefit from mutual support and accountability, and don't worry if you can't attend live—you'll receive a recording to catch up at your convenience. Make sure to register through the links in the show notes and seize this opportunity to become leaner, stronger, and healthier, all while enjoying your favorite holiday treats.
Healthy Holidays MasterClass: Register Here
Protein Snack Challenge: Get It Here
Weight Loss Coaching Program: Hangry to Healthy™
Get Your Food Audit Here
What to Eat Guide: Healthy Food List
Schedule Your Free Consult: Lose Weight For The Last Time
Website: The Simplicity of Wellness
Follow Me on Instagram
(SEASON 3. EP 41) Thrilled to welcome Kelowna's Makayla Charleen to Country with Celine! She just released her latest single, “Marlboro Might,” co-written with David Borys this September, and she shares the story behind the song, which is inspired by a particular saloon in K-Town. Excitingly, the track was also added to Apple Music's ‘New In Country' editorial playlist! Makayla is gearing up for an exciting 2025, especially with the CCMAs taking place in her hometown! Tune in to find out where you can catch her performing for the rest of this year! SOCIALS Instagram: https://www.instagram.com/makaylacharleenmusic/ Web: https://makaylacharleenmusic.com/ _______________ FOLLOW & KEEP UP with COUNTRY WITH CELINE: Insta: https://www.instagram.com/countrywithceline/ Web: https://countrywithceline.ca Apple Podcast: https://podcasts.apple.com/us/podcast/country-with-celine/id1563285858 Spotify: https://open.spotify.com/show/0ULNqzQp0Tw0Jv4g0Rtjxz
Join hosts JJ Englert and David Pal on 'This Week in NoCode + AI' as they delve into the bustling realm of BubbleCon and upcoming industry conferences. Anticipations for Paris' No Code Summit are high with event demos and swag giveaways promised. The episode discusses OpenAI's strategic pivots, fluctuating company valuations, and technological advancements, highlighting the ongoing AI race and infrastructure challenges faced by key players. The hosts explore strategies for content relevance in AI-assisted search ecosystems and the significant Google ad integrations in AI-powered search results. Excitingly, JJ unveils a new SaaS project aimed at simplifying backlink acquisition for indie makers. Insights from BubbleCon unfold as they discuss Bubble's focus shifts, enterprise approach challenges, pricing concerns, and the integration of native mobile capabilities. Wrapping up, they celebrate Bubble's first acquisition of Flusk, poised to enhance security offerings. Stay tuned for comprehensive anecdotes, industry trends, and projections in the ever-evolving no-code landscape. 00:00 Introduction and Upcoming Events 00:40 Exciting No Code and AI Developments 02:30 OpenAI Dev Day Highlights 08:41 Google's AI-Powered Search and Ad Integration 14:32 Building a New SaaS: Tradebacklinks 19:43 BubbleCon Recap and Insights 24:38 Bubble's Focus on MVP Builders 25:38 Challenges for Enterprise Users 26:25 Feedback from Bubble Users 27:00 Enterprise Frustrations and Market Position 28:38 Agency Experiences and Market Insights 36:09 FlutterFlow's Enterprise Success 38:05 Native Mobile App Development 44:28 Bubble's First Acquisition 46:45 Concluding Thoughts and Future Predictions
In this week's podcast episode, I sat down with Florence Gaub, the Director of Research at the NATO Defense College in Rome. A member of the World Economic Forum's Global Future Council and Vice-President of the European Forum Alpbach, Florence is a master of strategic foresight and international security. Her latest book, The Future, is a must-read, exploring how humanity's visions of tomorrow have shifted in different historical contexts. Our conversation spanned various domains, from the fascinating work being done at NATO to her new book, which I read and absolutely loved. Although it's not out in English yet, I have been assured that the release is on the horizon – so keep your eyes peeled! In our conversation, Florence walked us through her career, the impact of her recent publication, and, as always, the four books which have been most pivotal in her life and her work. From a mysterious Dutch novel from her childhood to Isaac Asimov's The Foundation, each pick tells a unique story about how we perceive time, and how this has changed in line with the maturation of our societies. Excitingly, Florence also shared some insights into the methodology of strategic foresight and the ethical implications of forecasting the future – and the technologies she thinks we'll look back on and laugh at in 100 years' time. Lit with Charles loves reviews. If you enjoyed this episode, I'd be so grateful if you could leave a review of your own, and follow me on Instagram at @litwithcharles. Let's get more people listening – and reading! Florence's four books were: The Towers of February, Tonke Dragt (1973) The Foundation Part 1, Isaac Asimov (1951) Nos Derniers Festins, Chantal Pelletier (2019) Julia, Sandra Newman (2023)
Cassandra Austen, beloved sister to Jane, was a talented artist in her own right. At age 19, she illustrated Jane's satirical History of England with thirteen delightful ink-and-watercolor portraits. She continued to draw and paint throughout her life, most often copying from popular newspaper and magazine prints of the day. In this episode, Austen scholar Janine Barchas discusses her recent discovery of previously unidentified works by Cassandra and the underappreciated "art of copying," a talent Jane Austen gave her heroine Elinor Dashwood. Excitingly, there may still be pieces of Cassandra's work out there, waiting to be discovered by you, the listener! Images of Cassandra's drawings discussed in this episode are included in the transcript on our website: https://jasna.org/austen/podcast/ep15. A video version of this episode is also available on our YouTube Channel: https://youtu.be/AzPfNIDt-6U
Visit our website: www.jasna.org
Follow us on Instagram and Facebook
Email: podcast@jasna.org
"You have to think about the exit from day one.” – Ronik Patel, Founder of UnlimitedWP. If you've ever dreamt of selling your agency, look no further than this episode. Excitingly, we have another two-time guest on, as Corey welcomes back Ronik Patel to discuss his recent success in getting his company, UnlimitedWP acquired by E2M. And, if the latter sounds familiar, it's because its founder, Manish Dudharejia, is also a recent guest and friend of the podcast! Ronik joins the show to share his exit experience, how that came to be, and all the lessons along the way. It truly is fascinating to witness the journey of a competitor becoming the acquirer, and what decisions led to the exit - or partial merger, to be exact. While this episode is a gold mine of M&A advice, If we had to summarize a single takeaway, it would be that exits are made from day one. You have to plan for it from the get-go to ensure that your agency can run (and be acquired) without the day-to-day involvement of the founder. Beyond the acquisition, we also discuss his future plans and what Ronik is building next for agencies. Tune in for the full story. Here's what we cover in this episode: - How to prime your agency for an acquisition. - Best practices for a successful exit. - Acquisition experience and learnings. - What's next for Ronik and what he's building in the agency space. Here are some actionable key takeaways for agency founders: - If you're looking for an exit, seek out buyers immediately. - Trust and transparency are paramount in the acquisition process. - Make a list of potential buyers early on and build relationships before you need them. - Aligning on values is the cornerstone of a successful acquisition. The resources mentioned in this episode are: - Connect with Ronik on LinkedIn Here- Check out UnlimitedWP Here
Welcome to Astronomy Daily, your go-to source for the latest in space and astronomy news. I'm your host, Anna. Today we've got some fascinating stories lined up that you won't want to miss. We'll be diving into SpaceX's recent breakthroughs, including the reveal and first firing of their latest Raptor 3 engine. We'll also cover major milestones from NASA, such as the significant progress made with the Nancy Grace Roman Space Telescope. Lastly, we'll discuss an exciting citizen science project from the European Space Agency that invites you to help classify thousands of newly imaged galaxies. So grab your telescopes and let's embark on this cosmic journey together.
- **SpaceX's Raptor 3 Engine Reveal**: SpaceX had a bustling week revealing and firing the new Raptor 3 engine. This advanced engine significantly improves performance, packing a punch with 280 metric tons of thrust while being lighter than its predecessors. What makes Raptor 3 stand out is its internal design, where much of the external plumbing has been either moved inside or eliminated, allowing for higher pressure and efficiency. This marks a noteworthy evolution from the Raptor 2, which has been the workhorse of SpaceX's Starship program so far.
- **SpaceX's Starship Preparations**: Meanwhile, SpaceX isn't just resting on its laurels. The company is deeply engaged in preparations for Flight 6 and is eagerly awaiting regulatory approval for Flight 5. These efforts include readiness checks and vital tests. Excitingly, this also involves operational tests with the Mechazilla chopsticks, a key mechanism designed to catch the Starship boosters as they return from space. The upcoming Flight 5 mission is on standby with both the ship and the booster cleared and ready pending final clearance. This highlights SpaceX's relentless push to refine its technologies and expand its capabilities, keeping the momentum going for future space endeavors.
- **Starship Project Advancements**: SpaceX is also rapidly advancing in its Starship project. With Ship 33 nearing full assembly, only two sections remain to complete the first Block 2 ship: the bottom liquid oxygen tank section and the aft engine section. This new configuration will allow SpaceX to add around 300 extra tons of propellant, enhancing the ship's capabilities. In the meantime, major upgrades are underway for Booster 14.1. It's back at Orbital Launch Pad A for more testing, particularly focusing on the innovative Mechazilla chopsticks catch mechanism. These tests are crucial to ensuring the system can handle the instant loads required for successful booster recovery.
- **NASA's Nancy Grace Roman Space Telescope**: NASA has achieved a significant milestone with the Nancy Grace Roman Space Telescope. Recently, the deployable aperture cover, an essential component of the telescope, successfully passed rigorous environmental tests designed to simulate the challenging conditions it will face during launch and in space. This large sunshade is designed to keep unwanted light out of the telescope, ensuring the clarity and accuracy of its observations.
- **ESA's Galaxy Classification Project**: The European Space Agency and Galaxy Zoo are calling for public participation to classify thousands of galaxies imaged by the Euclid Space Telescope. This citizen science project is perfect for astronomy enthusiasts who love to explore the cosmos and contribute to scientific research.
- **Groundbreaking Sounding Rocket Mission**: A groundbreaking sounding rocket mission is set to study the sun as a star.
This first-of-its-kind mission aims to observe the sun's behavior in an unprecedented way, potentially unlocking new insights into solar science. By utilizing a sounding rocket, scientists can gather unique data on solar activity that regular satellites and space telescopes might miss. For more Astronomy Daily, including our continually updating newsfeed, visit our website at astronomydaily.io. Follow us on social media at AstroDailyPod on Facebook, X, YouTube Music, and TikTok. For more Space and Astronomy News Podcasts, visit our HQ at www.bitesz.com. Become a supporter of this podcast: https://www.spreaker.com/podcast/astronomy-daily-the-podcast--5648921/support.
In this Friday guest episode of The Therapy Edit, Anna chats to Dad Blogger and father of 3 Giles Alexander about his one thing; that you're doing an incredible job and your child is so lucky to have you. Giles Alexander is one of the UK's leading dad bloggers, who has been writing about fatherhood and the highs and lows of his own parenting journey for nearly a decade. After finding out he was going to be a dad for the first time, Giles quickly discovered that almost everything online about pregnancy and parenthood was targeted at mums and their experience, with hardly anything to help new dads figure out this monumental transition in their lives. To help fix this, Giles created his blog – www.youthedaddy.co.uk – which quickly became a favourite with new mums and dads around the world, drawn in by his humour, honesty and reassuring, practical advice. Best known for his “Man's Guide to Baby Growth During Pregnancy”, interviews with well-known personalities and parenting experts, and his funny and heart-warming parent poetry, Giles' posts have helped more than a million parents to navigate the rollercoaster ride of parenthood, while providing support, advice and encouragement to new dads the world over. Excitingly, Giles has recently published his first book - YOU THE DADDY: the hands-on dad's guide to fatherhood - providing dads-to-be with a huge amount of practical advice on how to be supportive partners throughout pregnancy and birth, as well as confident, hands-on dads throughout the early years of fatherhood. A proud working dad with a busy career and home life, Giles lives in the UK with his wife Rosie and their three small children. Don't miss Giles' Book - You the Daddy: The Hands-On Dad's Guide to Pregnancy, Birth and the Early Years of Fatherhood: Amazon.co.uk: Alexander, Giles: 9781837991259: Books. And of course, be sure to give him a follow on Instagram - www.instagram.com/youthedaddy
Imagine setting off for an exciting trip to Atlanta, only to be greeted by a series of mishaps—plumbing disasters, canceled Airbnb reservations, and frantic last-minute hotel bookings during a bustling Memorial Day weekend. Amidst this chaos, I found a respite at the Iconic Leaders event, which honored influential women and presidential lifetime achievement award winners. This event proved to be a serene haven amidst the turmoil, but by the end of my trip, one thing became painfully clear: my dreams of living like a Real Housewife of Atlanta were thoroughly quashed. In a different vein, have you ever pondered a world where a lie could cost you your life? Our special guest, Anfernee Parker, explores this concept in his new book, "The Liar Killers." Set in a dystopian future where honesty is enforced with a brutal hand, we discuss the potential of such extreme measures to eliminate corruption and ponder what a society governed by absolute truth might look like. Excitingly, our podcast now offers videos on Spotify, enhancing the experience for our listeners with more immersive content. As we commemorate Juneteenth, we also share personal reflections that illuminate our evolving understanding of privilege and the power of genuine connections. We explore the intricacies of overcoming trauma, the vital role of therapy, and why embracing vulnerability is a sign of strength. This episode is replete with touching stories and motivational insights, guiding listeners through the nuances of privilege, the fostering of authentic connections, and the pursuit of personal dreams. Join us for an enriching dialogue about staying motivated, trusting in oneself, and navigating life's numerous challenges. From Dystopian Truths to Heartfelt Healing: Navigating Chaos, Honesty, and Connection | Season 3 Episode 331 CHAPTERS: 00:00 - Intro 0:38 - Micah's Atlanta Trip 6:18 - Guest Introduction 9:04 - Anthony's Book 10:07 - The Lock-In 12:38 - Is the Government Corrupt 17:47 - Why is Juneteenth Important 21:03 - Privilege 24:30 - What is Privilege 29:53 - How to Connect 31:08 - Next Time on the Last Minute Podcast 33:43 - Motivational Speaker 36:12 - Men's Mental Health Awareness Month 45:40 - Why Do You Want to Be a Motivational Speaker 46:46 - What Do You Want to Say to the People Still Watching 48:02 - Why Don't People Want Help 50:08 - How to Deal with Manipulative People 53:32 - What's Next for Anthony 55:44 - Has Your Soul Gauge Ever Been Wrong 58:48 - Forgiving Yourself 1:01:06 - OUTRO #IconicLeaders, #RealHousewifeDashed, #TravelChaos, #TheLiarKillers, #DystopianFuture, #PodcastOnSpotify, #JuneteenthReflections, #PrivilegeAwareness, #GenuineConnections, #OvercomingTrauma, #TherapyTalk, #VulnerabilityIsStrength #thesefukkenfeelingspodcast #spiritualawakening #emotionalhealing #mentalhealthawareness #mentalhealthpodcast #mentalhealth
In this episode, we have the pleasure of interviewing two remarkable individuals from Pegasus Farm: Shelley Sprang, the Executive Director, and Audre Manners, the Equestrian Administrative Director. Pegasus Farm is dedicated to creating a supportive community that empowers individuals with diverse needs through therapeutic equestrian programs, vocational services, and recreational & social activities. They work with people with disabilities, anxiety, and trauma, using animal interactions as a healing tool. Additionally, Pegasus Farm employs adults with disabilities and operates a center to support military personnel and first responders. Excitingly, they are also in the process of launching a new Stable Moments program aimed at helping children in foster care. Tune in to learn more about their inspiring work and the positive impact they are making in their community!
Episode Highlights:
Introducing Shelley Sprang & Audre Manners
What is Pegasus Farm?
The History Behind Pegasus Farm
Equestrian Programs
Adult Day Vocational Services
Military & First Responders Center
The New Stable Moments Program
Why horses?
How to get involved with Pegasus
Find more on Guest: Visit Pegasus Farm's Website
Find More on Hope Bridge: Visit Our Website
Follow us on Instagram
Follow us on Facebook
Foster Our Community Instagram
This show has been produced by Adkins Media Co.
In this episode of Generation AI, hosts Ardis Kadiu and Dr. JC Bonilla discuss OpenAI's launch of GPT-4o, a groundbreaking multimodal AI model. GPT-4o can accept and generate text, audio, images, and video, enabling more natural human-AI interactions. The model is faster, more capable, and half the price of its predecessor, GPT-4. Excitingly, GPT-4o will be freely available to all users of ChatGPT. Ardis and JC explore the potential applications of GPT-4o in higher education, such as real-time translation, interactive learning, and AI-assisted content creation. They emphasize the importance of AI literacy and encourage listeners to embrace these new technologies.
Introduction: Hosts Ardis Kadiu and Dr. JC Bonilla introduce the episode's main topic: OpenAI's launch of GPT-4o.
GPT-4o: A Multimodal AI Model: GPT-4o is a multimodal AI model that can accept and generate text, audio, images, and video. The model enables more natural human-AI interactions, responding to audio input in as little as 232 milliseconds.
Improved Performance and Accessibility: GPT-4o is faster, more capable, and half the price of GPT-4. The model will be freely available to all users of ChatGPT, democratizing access to advanced AI.
Application Examples: Real-time translation for improved communication and collaboration. Interactive learning experiences, such as AI-assisted math tutoring. Enhanced content creation, like generating personalized stories and lullabies.
The Future of Human-AI Interaction: OpenAI envisions a future where AI agents interact with humans through voice and video, similar to the movie "Her." Integration with existing platforms, such as Google Drive and Microsoft OneDrive, streamlines workflow.
Promoting AI Literacy: The hosts emphasize the importance of AI literacy and encourage listeners to explore and share GPT-4o with students, family, and colleagues. As GPT-4o becomes more accessible, it has the potential to positively impact lives and drive innovation in higher education.
- - - -
Connect With Our Co-Hosts:
Ardis Kadiu
https://www.linkedin.com/in/ardis/
https://twitter.com/ardis
Dr. JC Bonilla
https://www.linkedin.com/in/jcbonilla/
https://twitter.com/jbonillx
About The Enrollify Podcast Network: Generation AI is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Some of our favorites include The EduData Podcast and Visionary Voices: The College President's Playbook. Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.
Connect with Us at the Engage Summit: Exciting news — Ardis will be at the 2024 Engage Summit in Raleigh, NC, on June 25 and 26, and would love to meet you there! Sessions will focus on cutting-edge AI applications that are reshaping student outreach, enhancing staff productivity, and offering deep insights into ROI. Use the discount code Enrollify50 at checkout, and you can register for just $200! Learn more and register at engage.element451.com — we can't wait to see you there!
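As a hypothetical illustration of the real-time translation use case discussed in the episode, a minimal call to GPT-4o through the OpenAI Python SDK might look like the sketch below; the prompt wording and helper function name are ours, not from the episode:

```python
# Minimal sketch of the "real-time translation" idea using the OpenAI Python
# SDK's chat completions endpoint. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def translate(text: str, target_language: str = "Spanish") -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(translate("Office hours have moved to Thursday at 3 pm."))
```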
Dr. Daisy Robinton, co-founder and CEO of Oviva Therapeutics, discusses the company's innovative approach to improving women's healthspan by targeting the biology of ovarian aging. Motivated by her personal experiences and the realization that female physiology is underserved by research and medicine, Daisy outlines how menopause is a key inflection point in the acceleration of aging in women. She explains the central role of anti-Mullerian hormone (AMH) in regulating ovarian function and fertility. Oviva's lead program, a recombinant enhanced AMH protein, aims to improve IVF outcomes by synchronizing follicle growth. Excitingly, this approach could also preserve ovarian reserve to delay menopause onset, thereby extending female healthspan.
Key Topics Covered:
Pivoting from developmental biology to found a women's health startup
Ovaries as central regulators of female healthspan beyond reproduction
AMH as a brake on follicle activation and loss of ovarian reserve
Using enhanced AMH to improve egg yield in poor-responding IVF patients
Potential of AMH-based therapy to delay menopause and slow aging
Menopause as the single greatest known accelerator of aging
Economic and societal impact of extending female healthspan
Distinguishing reproductive longevity from overall women's health
Viewing fertility as a marker of overall health and wellbeing
At last it has stopped raining so David can take Joe metal detecting. David goes to his usual field and fires up the kit. Joe is loving it but to say it's a slow start is an understatement. They realise they don't really know how the Garrett Pro Pointer works so Joe tries to read out the instructions for David as they go. Excitingly they get a hit and David digs and digs and digs and digs and digs and digs but NOTHING. So they start again which feels like a step back. But eventually they get another hit and believe it or not they find something incredible (a nut and bolt with an Allen Key top). This is just the beginning of David's hunt for gold. FOR ALL THINGS CHATABIX'Y FOLLOW/SUBSCRIBE/CONTACT: You Tube: https://www.youtube.com/@chatabixpodcast Twitter: https://twitter.com/chatabix1 Insta: https://www.instagram.com/chatabixpodcast/ Patreon: https://www.patreon.com/chatabix Merch: https://chatabixshop.com/ Contact us: chatabix@yahoo.com Learn more about your ad choices. Visit podcastchoices.com/adchoices
➡️ Download Dr. Laura's guide to Developmental Milestones (it's free!) Today we will be exploring the intricate dance between eye movement patterns and brain function. We discuss the "Triangle of Development" with its cornerstones: the eyes, ears, and vestibular system, and how pivotal they are from the earliest stages of life. The development of the eyes, including the muscles and neurological connections, is crucial from birth and deeply intertwined with the coordination of the ears to help us navigate and understand the world around us. The ‘Triangle of Development' is a concept that I've introduced with my patients, based on my extensive practice, which has even contributed to the creation of a piece of medical equipment. This triangle consists of the eyes, the ears, and the vestibular system—three essential components that connect to create a foundation for our neurological and sensory development. Understanding these connections can really help us understand the causes behind attention issues, anxiety, developmental delays, and language struggles. We know that the special senses, including sight, hearing, taste, smell, and touch, provide the electrical information to the brain. With the vestibular system joining in before these senses, they all play a crucial part in how we perceive and interact with the world. Excitingly, we'll introduce our new "Jump Start Test" and the "Fantastic Seven" exercises designed to address primitive reflexes and core stability. This program is a leap towards transforming our understanding of neurology and behavior. Subscribe to the Connect My Brain YouTube Channel! SHOW NOTES: https://connectmybrain.com/episode114 What do you want to learn more about? Submit your questions here: https://www.connectmybrain.com/survey/ Phone the office: 678-501-5172
BEER TIME! Grab a glass – our glass – and settle in as we go behind the scenes of our glassware shoot, discuss BrewDog's bad news, and get a lecture in biology from Brad of all people. Excitingly, all our podcasts are now live on Youtube too, so welcome anyone listening there.
WATCH THIS WEEK'S VIDEO: https://www.youtube.com/watch?v=jwZ4Fmz58So
Support the show
Brought to you by the team behind the Craft Beer Channel, The Bubble takes an irreverent look at beer from the outside, inviting new people to give us their perspective on the world we're all obsessed with. You're listening to the bubble, the podcast turning beer inside out.
SUPPORT US! Pledge on Patreon and get some cool merch & videos: https://www.patreon.com/craftbeerchannel
Check out our awesome sponsor The Malt Miller: https://www.themaltmiller.co.uk/
Twitter – @beerchannel
Facebook – http://www.facebook.com/thecraftbeerchannel
Instagram – @craftbeerchannel
Today I'm talking to Safi Abdi, a qualified Hypnotherapist, coach, and a trainee Psychotherapeutic Counsellor, dedicated to transforming lives through helping people to change their mindset and heal from diet trauma. As the host of the Gastric Mindset Podcast, Safi delves deep into the psychological roots of yo-yo dieting, empowering people who have struggled with food with insights and strategies for sustainable change. From battling childhood obesity to undergoing weight loss surgery in 2015, Safi intimately understands the struggles of the journey to a healthier lifestyle. Her personal experiences have fuelled her mission to help others break free from the vicious cycle of dieting. Drawing from her own journey and her ongoing training as a therapist, Safi offers personalised 1-2-1 coaching, providing clients with the tools and support they need to break free from destructive habits and cultivate lasting success in maintaining their weight loss after bariatric surgery. Excitingly, Safi is launching the Mind-Over-Diet community, offering accessible support for individuals navigating the challenges of recovering from chronic dieting, disordered eating patterns and food addiction. In the episode today, Safi shares openly about her childhood and teenage years, delving into the early trauma and later diet culture that profoundly impacted her. Safi explores the years of yo-yo diets and preoccupation with food, weight and body, and the devastating impact on her wellbeing. She also talks about gastric surgery and her work in supporting others. Safi is an incredible inspiration, navigating the ups and downs of her healing journey. I'll be inviting her back to talk more about gastric surgery and the aftermath of managing weight loss and relationship with food later this year. I hope that you enjoy it. Follow Safi on Instagram @gastricmindset on all platforms and tune in to her podcast on YouTube and TikTok for invaluable insights and inspiration. For more information about Safi visit www.therapybysafi.com This week's sponsor: WeShape weshape.com/freedom Harriet Frew's current offers: - Online 10 Steps to Intuitive Eating Course with Harriet Frew - 50% off with code FREEDOMISPOSSIBLE https://www.theeatingdisordertherapist.co.uk/online-courses.html Eating Disorders Training for Professionals https://www.theeatingdisordertherapist.co.uk/eating-disorders-training-with-harriet-frew.html Body Image Training for Professionals https://www.theeatingdisordertherapist.co.uk/body-image-training-with-harriet-frew.html
This week, join Nate Klemp as he leads us in a meditation practice to widen our perspective. In our world, brimming with constant information and rapid change, it's common to feel overwhelmed and resort to narrowing our focus, often manifesting as stress and anxiety. This practice proposes a different approach by widening our perspective. It shifts our view from a limited 'soda straw' outlook to an expansive 'panoramic' one. Excitingly, Nate is also offering 20 free copies of his new book "Open: Living with an Expansive Mind in a Distracted World" to our listeners. This is a fantastic opportunity to delve deeper into mindfulness and expansion. To win a copy of Nate's new book, head over to https://signups.mindful.org/book-giveaway-open-by-nate-klemp/ before March 15th to enter for a chance to win. Stay curious, stay inspired. Join our community by signing up for our free newsletter, where we share compelling insights and actionable ideas to enrich your everyday life. Connect with us at mindful.org/signup. Show Notes Find more from Nate Klemp here: Nate Klemp on Mindful.org Nate Klemp's Website 80/80 Marriage And more from Mindful here: More episodes of 12 Minute Meditation The Real Mindful Podcast Let us know what you thought of this episode of 12 Minute Meditation by leaving a review or by emailing yourwords@mindful.org.
In this episode of The Blac Moment Podcast, we delve into the transformative power of the energies that influence us. It's a reflection on whether individuals can truly change and what sparks significant shifts in our lives. The essence of the discussion highlights how the company we keep and the energies we engage with can profoundly shape our beliefs and self-perception.We explore the importance of guarding our space against negativity and self-doubt shared by others, recognizing how easily we can internalize these destructive beliefs. Emphasizing the significance of being selective about the energies we welcome into our lives, this conversation encourages us to surround ourselves with people who elevate and inspire us, thereby fostering a positive internal dialogue.Excitingly, we introduce our upcoming guest, the dynamic and adventurous Jacky, who embodies the spirit of positive energy and change. Living as a digital nomad, Jacky merges her love for adventure with a deep commitment to service. Her admirable efforts range from aiding a dog shelter in Thailand to contributing to the well-being of children in Ghana through her work with Maven Heart Foundation Ghana ( mavenheart.org ).Join us for a compelling episode this Saturday across various platforms, including YouTube, Apple Podcasts, Spotify, and GoodPods, to hear how Jacky's journey exemplifies the essence of The Blac Moment. Tune in to be inspired by her story and to reflect on how positive energies can influence personal growth and transformation.
PASSIONATE PMDD PARTNERS COURSE
Use CODE "FRESH START" for $1000 OFF the Retreat!!
Secure your Spot for the Breakup Proof Retreat
Click to Join the PMDD Partners Breakup Proof Academy
Click to Book Private PMDD Partner Sessions
Embarking on a deeply personal exploration, I invite you on a journey through the heartrending world of PMDD and the profound connections we make and break along the way. The intricate dance of intimacy and disassociation within such relationships is a central theme, as we unravel the complexities of trauma bonds and the courage it takes to navigate and heal from them. Our discussion isn't just a narrative of my own experiences but a shared space for insights and wisdom, offering a guiding light for those touched by similar stories. As the host, I cast a spotlight on the bittersweet symphony of love and loss, reflecting on the lessons learned from ending a relationship with a PMDD partner. Unveiling the raw truths about compatibility and the maturity needed for closure, this episode examines the intricate web of emotions and the pivotal moments that can lead to profound self-reflection and change. The conversation aims to resonate with those grappling with the question of whether to hold on or let go in the face of PMDD's challenging dynamics. Excitingly, we also discuss the upcoming Breakup Proof Couples Retreat in Orlando, a sanctuary for couples embroiled in the PMDD journey. Here, attendees will find therapy, relaxation, and fun activities designed to fortify relationships against the tumult of PMDD. This episode promises a nurturing embrace for the heart and mind, equipping listeners with the tools for managing PMDD and fostering a deeper understanding of the bonds that both bind and heal us.
In this first podcast episode, Alex Beeson and Jon Hope dive into the dynamic world of cybersecurity at Sophos. They discuss the company's proactive measures in countering emerging threats with Intercept X, shedding light on the critical attack warning feature designed to pinpoint high-priority alerts. The conversation explores the integration of Sophos' new APIs within Sophos XDR, highlighting the benefits of consolidation in threat detection and response. Excitingly, the hosts reveal new collaborations, such as the partnership with Veeam, and share insights into the early access program for Sophos' cutting-edge DNS protection. Tune in for an insider's perspective on the latest developments and strategies employed by Sophos to stay ahead in the ever-evolving landscape of cybersecurity.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Attention SAEs Scale to GPT-2 Small, published by Connor Kissane on February 3, 2024 on The AI Alignment Forum. This is an interim report that we are currently building on. We hope this update + open sourcing our SAEs will be useful to related research occurring in parallel. Produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort
Executive Summary
In a previous post, we showed that sparse autoencoders (SAEs) work on the attention layer outputs of a two layer transformer. We scale our attention SAEs to GPT-2 Small, and continue to find sparse interpretable features in every layer. This makes us optimistic about our ongoing efforts scaling further, especially since we didn't have to do much iterating.
We open source our SAEs. Load them from Hugging Face or this colab notebook.
The SAEs seem good, often recovering more than 80% of the loss relative to zero ablation, and are sparse with less than 20 features firing on average. The majority of the live features are interpretable.
We continue to find the same three feature families that we found in the two layer model: induction features, local context features, and high level context features. This suggests that some of our lessons interpreting features in smaller models may generalize.
We also find new, interesting feature families that we didn't find in the two layer model, providing hints about fundamentally different capabilities in GPT-2 Small.
See our feature interface to browse the first 30 features for each layer.
Introduction
In Sparse Autoencoders Work on Attention Layer Outputs we showed that we can apply SAEs to extract sparse interpretable features from the last attention layer of a two layer transformer. We have since applied the same technique to a 12-layer model, GPT-2 Small, and continue to find sparse, interpretable features in every layer. Our SAEs often recover more than 80% of the loss[1], and are sparse with less than 20 features firing on average. We perform shallow investigations of the first 30 features from each layer, and we find that the majority (often 80%+) of non-dead SAE features are interpretable. See our interactive visualizations for each layer. We open source our SAEs in hope that they will be useful to other researchers currently working on dictionary learning. We are particularly excited about using these SAEs to better understand attention circuits at the feature level. See the SAEs on Hugging Face or load them using this colab notebook. Below we provide the key metrics for each SAE:
Layer | L0 norm | Loss recovered | Dead features | % alive features interpretable
L0 | 3 | 99% | 13% | 97%
L1 | 20 | 78% | 49% | 87%
L2 | 16 | 90% | 20% | 95%
L3 | 15 | 84% | 8% | 75%
L4 | 15 | 88% | 5% | 100%
L5 | 20 | 85% | 40% | 82%
L6 | 19 | 82% | 28% | 75%
L7 | 19 | 83% | 58% | 70%
L8 | 20 | 76% | 37% | 64%
L9 | 21 | 83% | 48% | 85%
L10 | 16 | 85% | 41% | 81%
L11 | 8 | 89% | 84% | 66%
It's worth noting that we didn't do much differently to train these,[2] leaving us optimistic about the tractability of scaling attention SAEs to even bigger models. Excitingly, we also continue to identify feature families. We find features from all three of the families that we identified in the two layer model: induction features, local context features, and high level context features. This provides us hope that some of our lessons from interpreting features in smaller models will continue to generalize.
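For readers new to the technique, here is a minimal PyTorch sketch of the kind of sparse autoencoder described here: an encoder-decoder trained to reconstruct attention-layer outputs under an L1 sparsity penalty. Widths, biases, and training details are generic assumptions, not the authors' released configuration (their trained SAEs are on Hugging Face):

```python
import torch
import torch.nn as nn

class AttentionOutputSAE(nn.Module):
    """Generic sketch of an SAE on attention-layer outputs: a wide,
    sparsely-activating hidden layer trained to reconstruct its input."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor):
        feats = torch.relu(self.encoder(x))   # sparse feature activations (low "L0")
        recon = self.decoder(feats)           # reconstruction of the attention output
        return recon, feats

def sae_loss(x: torch.Tensor, recon: torch.Tensor, feats: torch.Tensor,
             l1_coeff: float = 1e-3) -> torch.Tensor:
    reconstruction = (x - recon).pow(2).mean()      # how much of the signal is recovered
    sparsity = feats.abs().sum(dim=-1).mean()       # L1 term pushes most features to zero
    return reconstruction + l1_coeff * sparsity
```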
We also find new, interesting feature families in GPT-2 Small, suggesting that attention SAEs can provide valuable hints about new[3] capabilities that larger models have learned. Some new features include:
- Successor features, which activate when predicting the next item in a sequence such as "15, 16" -> "17" (which are partly coming from Successor Heads in the model), and boost the logits of the next item.
- Name mover features, which predict a name in the context, such as in the IOI task.
- Duplicate token f...
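As a rough, self-contained illustration of the two headline metrics in the table above, the sketch below shows how an attention-output SAE's L0 norm (average number of features firing per token) and "loss recovered" relative to zero ablation could be computed. This is not the authors' released code: the class name, dimensions, expansion factor, and the placeholder loss values are all assumptions for illustration; the real numbers come from running GPT-2 Small with the open-sourced SAEs from Hugging Face or the colab notebook.

```python
# Minimal sketch (assumed names and shapes, not the post's released code) of the
# two metrics reported above: L0 norm and loss recovered vs. zero ablation.
import torch
import torch.nn as nn

class AttnOutputSAE(nn.Module):
    """Toy sparse autoencoder over attention-layer outputs (d_model -> d_sae)."""
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        # x: [n_tokens, d_model] attention-output activations
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse feature activations
        recon = feats @ self.W_dec + self.b_dec                         # reconstructed activations
        return recon, feats

def l0_norm(feats: torch.Tensor) -> float:
    """Average number of features firing (non-zero) per token."""
    return (feats > 0).float().sum(dim=-1).mean().item()

def loss_recovered(clean_loss: float, patched_loss: float, ablated_loss: float) -> float:
    """Fraction of the loss gap between zero ablation and the clean run that is
    closed when the SAE reconstruction is patched in: 1.0 = perfect, 0.0 = no help."""
    return (ablated_loss - patched_loss) / (ablated_loss - clean_loss)

if __name__ == "__main__":
    sae = AttnOutputSAE(d_model=768, d_sae=768 * 32)  # GPT-2 Small width; expansion factor is an assumption
    acts = torch.randn(1024, 768)                     # stand-in for real attention-output activations
    recon, feats = sae(acts)
    print(f"L0 norm: {l0_norm(feats):.1f}")
    # Placeholder losses: in practice these come from running the language model with
    # (a) clean activations, (b) the SAE reconstruction patched in, (c) zero ablation.
    print(f"Loss recovered: {loss_recovered(3.10, 3.25, 3.90):.0%}")
```

Under this convention, 100% loss recovered means splicing in the SAE reconstruction leaves the language model's loss unchanged from the clean run, while 0% means it is no better than zero-ablating the attention output entirely.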
#235 In very rare cases, Alzheimer's disease could be transmitted from person to person during medical procedures. This finding comes as five people have developed the disease after receiving contaminated human growth hormone injections in the late 1950s to early 1980s – a practice that is now banned. We discuss what this finding means for medical settings and why most people don't need to be concerned. Elon Musk's mind-reading brain implant company Neuralink is carrying out its first human trial. The volunteer has received the surgically implanted device and is now, Musk said earlier this week, "recovering well". Neuralink promises to connect users to their smartphones and computers, reading brain signals and translating a person's intentions into text or other functions. While this isn't the first device of its kind, it is the only one being marketed as a consumer technology device, as opposed to a medical device. Contrails, the streams of white vapour that form behind planes in the sky, are to blame for a huge proportion of air travel's impact on the climate. But there's good news. Small changes in altitude may be sufficient to reduce their formation – and implementing these changes may be easier than we thought. Plus, why flying at night has a bigger climate impact. Tiny tornadoes have been discovered inside the egg cells of fruit flies. These twisters circulate the jelly-like cytoplasm inside the cells and could be essential to the successful reproduction of these fruit flies. Excitingly, these tornadoes may be happening in the cells of other animals too – just not humans. Plus: revealing which dogs live the longest; how an army of Twitter bots spread fake news about 2023's Chinese spy balloon incident; and an ancient gadget that turns fibres into rope. Hosts Timothy Revell and Christie Taylor discuss with guests Chen Ly, Matt Sparkes, James Dinneen and Alex Wilkins. To read more about these stories, visit newscientist.com. Hosted on Acast. See acast.com/privacy for more information.
Jillian and special guest Jeff Guenther get candid and personal in this riveting conversation about why men look at provocative accounts, the difference between healthy and unhealthy chemistry, and why dating apps might be working against you. Jeff Guenther is a licensed professional counselor in Portland, Oregon, practicing since 2005. His focus includes working with both couples and individuals. Alongside therapy, Jeff dedicates his spare time to creating short videos for TikTok and Instagram, and hosting his weekly podcast, 'Big Dating Energy.' Excitingly, you can now pre-order his new book, also titled 'Big Dating Energy.' Follow him on all platforms for insightful relationship advice, tips, pep talks, and a range of resources to boost your mental health. Learn more at TherapyJeff.com ~~ Follow the show on Instagram: @jillianonlove Email the show at hello@jillianonlove.com Subscribe to Jillian on Love+ on Apple Podcasts or Patreon Find Resources mentioned in the show at the Jillian on Love Recommendations Follow Jillian Turecki on Instagram: @jillianturecki TikTok: @jillian.turecki Twitter: @JillianTurecki Visit her website at jillianturecki.com ~~ Jillian On Love is brought to you by QCODE. To advertise on the show, contact us! Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to the Toshia Lane Show!
Welcome to the first news episode of The Radcast for the week of January 5, 2024! In this 2024 kickoff, we reflect on the evolving tech landscape, discussing marketers' top concerns. From economic uncertainties to AI's rise, CEO-CMO disconnect, and work-from-home challenges, we dissect the trends shaping 2024. Dive into AI chatbots predicting social media and ad tech trends, emerging social media trends, and the impact of the 2024 Olympics on niche sports. Stay tuned for insights on media strategy, the TV vs. digital media debate, and a surprising revelation about Instagram Reels. Excitingly, we're launching Radcast Media Network, offering podcasters strategic insights and services. Sponsored by Vaycay, delivering premium plant-based wellness. Here's to a phenomenal 2024!

Ryan and Chris' reflections on 2024 and Technology Predictions (00:39)

Marketers' biggest concerns heading into 2024 (02:07) – https://adage.com/article/year-review/cmo-concerns-heading-2024-2023-year-review/2531701
- Economic Uncertainty
- AI's Rise
- CEO-CMO Disconnect
- Work-from-home and In-office Balance
- Organizing Talent
- Navigating a Fraught Geopolitical Landscape

AI chatbots predict 2024's hottest trends in social media and ad tech (07:54) – https://adage.com/article/digital-marketing-ad-tech-news/ai-chatbots-predict-2024s-hottest-trends-social-media-and-ad-tech/2534701
- Gen Z Marketing Apps: Finch, Hey! Vina, Friender, Clubhouse Rooms 2.0

2024 Social Media Trends Predictions (10:19)
- Niche Neighborhood Apps
- Interest-Based Micro-Communities
- Skill-Sharing and Collaboration

2024 Olympics Niche Sports (25:23)

Media Strategy for Advertisers (27:09)
- TV vs. Digital Media
- Connected TV vs. Private Marketplace Programmatic

Instagram Reels outperforms TikTok, Facebook for branded video content (27:40) – https://www.marketingdive.com/news/instagram-reels-outperforms-tiktok-facebook-branded-video/703155/

Mickey Mouse copyright expiration inspires horror movies, video games and memes (32:09) – https://www.nbcnews.com/tech/internet/mickey-mouse-horror-movie-slasher-trap-public-domain-rcna131897

Announcement: Radcast Media Network (35:13) – Launching a multi-show media network with shared opportunities, production services, and more. Focus on strategy, branding, monetization, and guest management.

Sponsor Promotion – Vaycay: Premium Wellness Products Delivered Straight to You in the US. Website: www.RolloffthePain.com | Podcast: www.thevaycaypodcast.com

If you enjoyed this episode and want to learn more, join Ryan's newsletter https://ryanalford.com/newsletter/ to get Ferrari level advice daily for FREE. Learn how to build a 7 figure business from your personal brand by signing up for a FREE introduction to personal branding https://ryanalford.com/personalbranding. Learn more by visiting our website at www.theradcast.com. Subscribe to our YouTube channel https://www.youtube.com/c/RadicalHomeofTheRadcast.
Happy and healthy 2024! In this energizing start to the new year, Dr. Kahn brings you an action-packed episode of the "pea-cast." He dives into a variety of bite-sized topics, including the link between garlic consumption and reduced cancer risk, plus a roundup of the top scientific breakthroughs in heart disease from 2023. Next up, Dr. Kahn delves into the world of fatty acids, spotlighting a C15 "odd chain" fatty acid known as pentadecanoic acid. This powerhouse compound boasts significant heart and metabolic benefits. He'll dissect a groundbreaking study on a C15 supplement and discuss its implications. Excitingly, C15 is now available as a vegan-friendly capsule, Fatty15. For those eager to try it, Dr. Kahn has arranged a special discount on first orders. Just use the code KAHN at checkout or visit http://fatty15.com/KAHN. Wrapping up the episode is a deep dive into a transformative study on the cardiovascular advantages of the PROLON fasting mimicking diet. Prepare to be wowed by the study's astounding discovery: fat loss achieved without sacrificing lean muscle mass - the holy grail of weight loss! Interested listeners can learn more and order PROLON at http://www.prolonfmd.com/drkahn. Join Dr. Kahn for this thrilling episode as he unpacks these fascinating developments, setting the tone for a year full of health and knowledge!
Uncover the essential requirements for qualifying for a franchise loan as we break down the key elements crucial to your financial strategy. Dive into the intricate world of franchise financing with our episode, designed to empower both aspiring and seasoned entrepreneurs on their journey to success. Discover the pivotal role a robust credit score plays in your loan application and learn how a substantial down payment can significantly bolster your chances of securing the funding you need. We delve into the importance of crafting a solid business plan that not only outlines your vision but also demonstrates your understanding of the franchise model. For those venturing into the world of franchises for the first time, we shed light on the significance of relevant experience and how it can positively impact your loan application. Seasoned entrepreneurs will find valuable insights into the financial metrics that matter the most, including liquidity and net worth, as they navigate the franchise loan landscape. Excitingly, we are thrilled to announce our partnership with Lumos, a trusted financial ally committed to providing tailored solutions and expert support throughout the entire loan application process. Together, we aim to simplify the complexities of franchise financing, ensuring that you have the tools and guidance needed to make informed decisions and propel your entrepreneurial dreams forward. Don't let finance overwhelm you! Click here to connect with a specialist and secure your loan today! https://www.vettedbiz.com/funding-product/ Entrepreneurial dreams start here. Take our Biz Quiz, filter through 10,000+ opportunities, and get a boost with a loan. https://www.vettedbiz.com/quiz-test/ Ready to navigate the world of franchise financing and drive your entrepreneurial dreams forward? Not sure which franchise is right for you? click here. https://www.vettedbiz.com/solutions/franchise-buyers/ #FranchiseLoans #FranchiseFindings If you are looking for more information, you can connect with us through our networks: https://www.vettedbiz.com/ https://www.linkedin.com/company/vettedbiz/ https://www.facebook.com/vettedbiz
In today's episode, we have the pleasure of speaking with Meghan Blum, the talented founder and lead designer of Meghan Blum Interiors (MBI). Based in Des Moines, IA, MBI is an esteemed full-service interior design firm specializing in custom residential design. Meghan has developed a renowned classic style that seamlessly merges clean lines with striking details, offering a bridge between high-end design and practical living. In our conversation, Meghan shares the philosophy behind MBI's approach to interior design, underscoring the importance of homes reflecting the unique personalities of the individuals who inhabit them. We delve into the transformative power of a well-designed space that can enhance not only the aesthetics but also the overall well-being of its occupants. Excitingly, Meghan also reveals the recent launch of MBI's sister company, House of Blum. This new venture allows the brand to extend its expertise beyond interior design, offering a selection of beautiful home décor and exquisite furnishings. Listeners will be inspired to elevate their living spaces with the same attention to detail and refined taste that is synonymous with Meghan Blum Interiors. As we explore MBI's portfolio and delve into their design process, Meghan enlightens us with expert insights on curating timeless pieces, creating personalized luxury, and infusing harmony into everyday life. Whether you are in the midst of a home redesign or seeking design inspiration, this episode promises to be an invaluable resource for transforming your living space into a captivating haven. To learn more about Meghan Blum Interiors and their services, and to browse the stunning home décor offerings of House of Blum, visit their websites at meghanbluminteriors.com and houseofblum.com. Join us for an enlightening conversation filled with expert design tips and insights as we discover the art of elevating your living space with Meghan Blum Interiors.
In this episode of the How to Protect the Ocean podcast, host Andrew Lewin dives into the controversial topic of deep-sea mining. While the focus has been on COP28, Andrew shifts the conversation to the recent developments in deep-sea mining. He highlights reports from Greenpeace and mining websites that discuss countries and companies eager to start testing or continue testing deep-sea mining. Andrew raises questions about the viability and financial motivations behind these efforts. Tune in to learn more about the potential impacts of deep-sea mining and how it may affect our oceans. Links to the articles: 1) https://www.miningweekly.com/article/a-showdown-over-deep-sea-mining-is-taking-place-in-the-pacific-2023-11-28 2) https://www.greenpeace.org/international/press-release/64213/norways-greenlight-for-deep-sea-mining-in-the-arctic-shatters-international-credibility/ 3) https://www.euronews.com/green/2023/08/02/deep-sea-mining-heres-which-countries-oppose-and-support-the-controversial-practice Share your conservation journey on the podcast by booking here: https://calendly.com/sufb/sufb-interview Fill out our listener survey: https://www.speakupforblue.com/survey Join the audio program - Build Your Marine Science and Conservation Career: https://www.speakupforblue.com/career Facebook Group: https://bit.ly/3NmYvsI Connect with Speak Up For Blue: Website: https://bit.ly/3fOF3Wf Instagram: https://bit.ly/3rIaJSG Twitter: https://bit.ly/3rHZxpc Andrew has also recently launched a daily newsletter dedicated to providing valuable information about the ocean. Described as an "information highway," it serves as a complementary resource to the podcast, covering ocean-related news that may not make it into the show, and it lands in subscribers' inboxes every weekday morning. Excitingly, the newsletter now includes job postings related to the ocean, with details about their locations and the organizations offering them, a new addition Andrew hopes will gain traction among the audience. The overarching goal of the podcast and Andrew's company, Speak Up For Blue Media and Communications, is to keep listeners informed about the state of the ocean and ocean-related developments, including issues like deep-sea mining, and to empower them to speak up, become advocates for its preservation, and take action towards its betterment.
In this milestone episode of "The Lumber World," we're privileged to have Russ Taylor from Russ Taylor Global as our special guest. Fresh from his travels in Europe and Asia, Russ brings a wealth of knowledge on global production costs, the intricate movement of lumber, and his forecast for the upcoming year. But that's not all! The crew delves into the latest New Home sales and the Case Shiller housing index, providing a comprehensive analysis of their impact on the lumber market. A surprising revelation surfaces – Southern Yellow Pine is now up against the cost of production. The burning question arises: Are more curtailments on the horizon, and what undervalued and basis opportunities does this situation present? Shifting focus, we explore the current landscape of repair and remodel trends, unraveling insights that could shape the future of the lumber industry. The team also delves into the potential drop in interest rates in Q1, with Gregg offering a compelling response that's not to be missed. Excitingly, listeners are actively participating by calling in weekly to suggest new content and provide feedback. If you're looking to enhance your intelligence and gain perspectives from traders navigating the daily challenges of the lumber trenches, this episode is a must-listen. Join us for a captivating discussion that encapsulates the breadth and depth of the lumber industry. Drop your questions or comments in the feed, and let's make this anniversary episode of "The Lumber World" one to remember!
In this insightful episode, we break down the pivotal decision-making process between SBA and Conventional Loans. Uncover the advantages and considerations of each option, shining a spotlight on the Small Business Administration's impactful support for small businesses through initiatives like the 7(a) Loan Program. Dive deep into the pros and cons of both loan types, with a focus on key factors such as interest rates, repayment terms, and eligibility criteria. Gain valuable insights into navigating the complex landscape of business loans to make informed decisions for your entrepreneurial journey. Excitingly, we're thrilled to introduce a strategic partnership with Lumos for franchise financing! Aspiring franchise owners can now access predictive models and invaluable insights to pave the way for their success. This episode was based on an exclusive Vetted Biz analysis; click here for the full report: https://www.vettedbiz.com/sba-vs-conventional-loan/ Secure Your Franchise Funding Today: https://2c2i0ujnjcc.typeform.com/to/CRh33AmF?typeform-source #SBALoanVsConventionalLoan #FranchiseFindings If you are looking for more information, you can connect with us through our networks: https://www.vettedbiz.com/ https://www.linkedin.com/company/vettedbiz/ https://www.facebook.com/vettedbiz
Just imagine, a life where you have far more good days than bad. Our guest for this episode, Mark "Mr Noots" Effinger, the 'King of Nootropics', unveils how he uses plant-based products to manipulate neurotransmitter levels, all in a bid to ensure you experience joy more often than not. Listen in as we navigate through the transformative power of these products, the crucial role neurotransmitters play in everyday mood and cognitive performance, and the touching personal story that motivates Mark's mission to assist others in having 'no more bad days'.

We take a detour into the realm of business and technology as Mark shares his journey of establishing his company, its evolution through personal tragedy, and his unique approach to creating unforgettable customer experiences. Excitingly, he gives us a sneak peek into his innovative product packaging, shares his background in creating laser light shows and his first love for Macintosh computers. He talks about growing up in a blue-collar family, his early experiments with chemistry and vacuum technology, and how his resourceful parents were pivotal in fostering his curiosity.

Our episode wraps up with a stimulating discussion on how nootropics can help us achieve our ambitions by improving our lives. Mark opens up about his personal journey battling ADHD and overcoming negative self-talk. He also introduces some of his innovative products, such as Zamner Juice, designed to stimulate the release of GABA and enhance motivation. As a special addition, we engage in a chat on my preferred podcasting stack, Upbeat and BrainFlow, and the intriguing process of creating a new NectarX formulation. Don't miss out on this enlightening voyage into the world of autobiology and biohacking. Tune in now!

Want to try out some of Mark's creations? I'd recommend starting with Collagenius, which is yummy and packed full of nootropics: click here to check it out!

BiOptimizers: Digestion & Nootropics! The digestion experts (MassZymes is the best!) and leading Nootropics provider (Nootopia) all in 1!

Safe Tech & Pharma-grade Supplements: DefenderShield® & Lightbody Labs are proud to offer the best EMF/5G physical and cellular protection.

$50 off your Test from The DNA Company: Revolutionizing DNA interpretation by matching genetic systems to human biochemistry.

EnergyBits: Nutrition for Mitochondria. Use code AUTOBIOLOGY to get 20% off your purchase of any EnergyBits package.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Connect with Me!
IG: @autobiologywithjennifer
Rumble: @autobiologywithjennifer
YouTube: autobiology
FB: @autobiology
TikTok: @jenniferlittlefleck
Website: https://autobiology.net/
Autobiology Podcast on Apple, Google Play, Spotify and all your other favorite places!
Octavio F. G. is a musician and songwriter. https://linktr.ee/OctavioF.G

Get ready for an exhilarating journey deep into the realms of music, comedy, and culture, as we invite Octavio into the Mind Buzz universe. We've got a lot to unpack, starting with our latest endeavors - from the captivating performances at the Artist in the Alley event to the zany shenanigans at the Monday Night Goofball Open Mic at First Amendment Pizza Joint. Excitingly, we're also turning our website into a mecca for open mics - a platform for comedians, musicians, and poets to showcase their talent!

As we wander through Octavio's life, we discuss his cross-country touring experiences, his move to LA, and the stark contrasts between California and other states. Notably, we touch upon the rising rent costs in California and delve into the unique scenario enveloping the fentanyl situation in Vancouver, Canada. We weigh in on Vancouver's approach to the Controlled Drugs and Substances Act, its significant impact on the city's culture and population, and draw parallels with LA.

We then turn the spotlight on my own personal journey into the music world. From my introduction to the guitar during my college days to my transition into a music career, I share my story. Octavio and I delve into the importance of live performances in personal growth, the courage it takes to get on stage, and the influence of the environment on performance. As we wrap up, we dissect the process of writing and testing comedy sets, sharing our top five comedy rules amidst the West Coast Pop-Lock Podcast Comedy Competition. So, tune in for a whirlwind of insightful discussions, amusing anecdotes, and engaging dialogue!

FirmeMezcal.com
Use promo code MYGRITO to receive a discount with your purchase
https://www.firmemezcal.com/

HouseofChingasos.com
Use promo code MINDBUZZ to receive a 10% discount on entire purchase
https://houseofchingasos.com/?ref=0F5Yfbs6SAN0f2

Mindbuzz.org
Start podcasting!
https://www.mindbuzz.org/

Subscribe to The Mindbuzz Youtube Channel https://www.youtube.com/channel/UCIYj7eDCsV3YPzxv7VRKZKg

Don't forget to follow us on Instagram @themindbuzz https://www.instagram.com/themindbuzz/ to keep up with our hosts, guests, and upcoming events! See you on the next one!

"King without a Throne" is performed by Bad Hombres
King without a Throne Official Music Video: https://www.youtube.com/watch?v=fNhxTYU8kUs
King without a Throne: https://open.spotify.com/track/7tdoz0W9gr3ubetdW4ThZ8?si=9a95947f58bf416e
Have you ever felt desperate for a health breakthrough? Meet our inspiring guest, Rose Allison, who spent 14 years battling health issues before she discovered the life-changing potential of Colostrum. Her story is an awe-inspiring testament to the power of this natural supplement and the profound impact it can have on wellness. Excitingly, Rose also shares how the right brand of Colostrum made a world of difference. Ever heard of Lemusie six? It's an incredible blend of seaweed lemon and aritaponica that assists in purging heavy metals from your body. The gamechanger? It's the addition of Colostrum six that helps deliver these benefits right into your body's deeper layers. We'll also talk about how colostrum can help reset the immune system and transform your reaction to dairy products. Pull up a chair as we unpack the amazing properties and benefits of colostrum. Prepare to be fascinated by this bovine derived product that's loaded with antibodies and offers potent immune system support. Not just for humans, but animals too! Find out about the sourcing and processing of colostrum and the different forms it can be consumed in. Plus, you'll learn about the remarkable ways Colostrum can shield against diseases and support the health of your pets. Don't miss out as we uncover the transformative world of natural supplements together!FIND ME!
My guest today has spent his entire career trying to understand mental illness. What's really causing it – and how can we better manage it? Dr Chris Palmer is Director of the Department of Postgraduate and Continuing Education at McLean Hospital, Massachusetts and an Assistant Professor of Psychiatry at Harvard Medical School. In today's episode, he shares some of the profound insights he's gained over almost 30 years as an academic psychiatrist. He combines years of clinical, neuroscience and metabolic studies into one unifying idea: that mental disorders are not caused by a chemical imbalance. Instead, they are metabolic disorders of the brain, caused by dysfunction in our mitochondria. It's a theory that connects physical, mental and emotional health, and it's the topic of his excellent new book, Brain Energy. Chris doesn't deny the roles trauma, psychological and social factors can play in poor mental health. But he explains the link between these factors and our metabolism, and how diet and lifestyle interventions can help. Excitingly, Chris explains that making changes to our diet and lifestyle actually offer far more hope for long-term remission than existing treatments, which generally aim to only reduce symptoms. As Chris reveals, his own experience with trauma and mental illness is what drives him to try and help millions of people around the world who are still suffering. Chris is advocating for a transformation in the way we view and treat mental health. And, if that happens, it won't just help ease an epidemic of depression, anxiety and other conditions – it also has the potential to address all of the chronic diseases that are underpinned by metabolic dysfunction. Chris is knowledgeable, passionate and articulate. I thoroughly enjoyed my conversation with him and I hope you enjoy listening.

CAUTION: This podcast discusses ketogenic diets. Always consult a qualified healthcare practitioner before making any drastic changes to your diet.

Support the podcast and enjoy Ad-Free episodes. Try FREE for 7 days on Apple Podcasts https://apple.co/feelbetterlivemore. For other podcast platforms go to https://fblm.supercast.com.

Thanks to our sponsors:
https://hunterandgatherfoods.com/livemore
https://exhalecoffee.com/livemore
https://drinkag1.com/livemore
https://vivobarefoot.com/livemore

Show notes: https://drchatterjee.com/396

DISCLAIMER: The content in the podcast and on this webpage is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your doctor or qualified healthcare provider. Never disregard professional medical advice or delay in seeking it because of something you have heard on the podcast or on my website. Hosted on Acast. See acast.com/privacy for more information.
In this captivating podcast episode, host Brian engages in a thought-provoking conversation with Chris Feldmann of the rock band Ronin. The focus of their discussion revolves around Ronin's latest album, "Valak The Defiler," a masterpiece that showcases the band's ability to seamlessly blend diverse rock styles into a cohesive musical experience.

Chris delves into the intricate songwriting process behind "Valak The Defiler," shedding light on the band's creative approach and the influences that shaped their sound. He passionately shares how bands like Creed, Metallica, and Tool have played a pivotal role in inspiring his musical journey, and how attending electrifying live concerts has further fueled his passion for creating impactful music.

As the conversation progresses, Brian and Chris delve into Ronin's unique performance style, which sets them apart from other rock bands. Chris candidly discusses the challenges they faced during the COVID-19 pandemic, as live gigs became scarce and the music industry faced unprecedented obstacles. Despite these setbacks, Chris remains optimistic about the future, highlighting the band's unwavering commitment to constantly evolving their music and pushing boundaries.

Excitingly, Chris reveals Ronin's future plans, which include a highly anticipated mini tour to connect with their dedicated fan base and expand their reach to new audiences. He also shares their intention to secure a booking agent, a strategic move that will undoubtedly propel their career to new heights.

In summary, this episode offers a captivating glimpse into the world of Ronin and their unwavering passion for creating impactful rock music. Listeners are treated to an insightful exploration of the band's latest album, "Valak The Defiler," as well as an intimate understanding of their creative process, performance style, and future aspirations. Brian and Chris's engaging conversation leaves no doubt that Ronin is a force to be reckoned with in the rock music scene, and their dedication to pushing boundaries and connecting with audiences is truly commendable.

Find Ronin here:
https://ronin-band.com/
https://www.facebook.com/ronin.rockband
https://www.instagram.com/ronin.rockband/

Find CTMU here:
https://linktr.ee/Concertsthatmadeus
Newsletter: https://concertsthatmadeus.aweb.page/p/f065707b-2e34-4268-8e73-94f12bd2e938

Save 10% on Band Builder Academy membership by following this link https://bandbuilderacademy.com/Brian_Concerts/join and using promo code "concerts" at signup. Become a member at https://plus.acast.com/s/concerts-that-made-us. Hosted on Acast. See acast.com/privacy for more information.
Today, I have the delight of engaging in enlightening conversation with the vibrant Robin Long. Robin is not just a dear friend, but also an accomplished entrepreneur and the illustrious founder and CEO of Lindywell. She is a beacon in the health and wellness sector, extending the reach of Pilates and holistic well-being to broader horizons. In our enlightening conversation, Robin unfolds her inspiring journey, a journey wrapped in discoveries of true health and wellness. She reflects on the common perceptions of health among women, which, more often than not, tend to fixate solely on weight loss rather than embracing a wholesome perspective of well-being. Robin presents her insights on body positivity with vivacity and profound wisdom. She explores the essence of feeling good in one's skin and illuminates the myriad challenges associated with nurturing children to develop a positive body image in a world saturated with contrasting ideals. In addition, Robin discusses nurturing our nervous system—a fundamental yet often overlooked aspect of our well-being. She touches on holistic approaches to health, focusing on synchronizing both body and mind, and emphasizes the importance of internal equilibrium. Excitingly, Robin will also be shedding light on her new book, 'Well to the Core,' a treasure trove of knowledge set to be unveiled tomorrow, October 3rd. For those whose curiosity is piqued and are eager to plunge into her sea of knowledge, you can find her book through the link provided below. If you enjoyed and want to hear more from Robin, you can go to Lindywell.com or go find her Instagram here. I invite you all to join us in this exploration of holistic wellness, and uncover the treasures of well-being with Robin Long, as we delve deep into topics that affect us all, seeking insights and enlightenment in our journey towards a healthier, more balanced life. May this conversation light your path to realizing a more harmonious and balanced existence, helping you embrace your true self and discover the myriad facets of well-being beyond the surface. Check out these resources mentioned in this episode: Well to the Core: A Realistic, Guilt-Free Approach to Getting Fit and Feeling Good for a Lifetime: https://www.amazon.com/Well-Core-Realistic-Guilt-Free-Approach/dp/1496472624 Thanks for listening! …………………………………. Connect with Kara on Instagram: @kara_ayala https://www.instagram.com/kara_ayala/?hl=en Check out how you can work with Kara on her website: https://kara-ayala.mykajabi.com/ Join Kara's FREE Facebook group: https://kara-ayala.mykajabi.com/offers/aubrLymR/checkout