Podcasts about Jamba

  • 160 PODCASTS
  • 212 EPISODES
  • 53m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 2, 2025 LATEST

POPULARITY

[Popularity chart, 2017–2024]


Latest podcast episodes about Jamba

Podded Dirt Podcast
Petty N' Principled : Episode 49 Carney and the devils lettuce or cabbage

May 2, 2025 · 75:45


Hoy por Hoy
Pretérito pluscuamperfecto | Polyphonic ringtones, the sector in which we were world pioneers

Apr 30, 2025 · 16:04


We look back at the sound-filled 2000s. When mobile phones overtook landlines in Spain and as many as 15 million SMS messages were being sent per minute worldwide, companies such as Movilisto, MyAlert, Jamba and Club Zed emerged to create content for these devices. The standouts were live voting and interaction with TV shows and the huge ringtone market: from melodies that now seem primitive, through joke tones, catchphrases of the moment and personalized Christmas carols, all the way to realtones, the hit songs of the day. By texting an SMS to 7777 we could have the song of the moment play when someone called us. Gonzalo de la Cierva, creator of Movilisto, helps us remember and understand how we came to be world pioneers.

Take-Away with Sam Oches
Jamba's chief brand officer on the art and science of a new store prototype

Apr 15, 2025 · 42:10


In this episode of Take-Away with Sam Oches, Sam talks with Nathan Louer, chief brand officer at Jamba, the legacy smoothie bowl concept that is one of seven brands in the GoTo Foods portfolio. Nathan declares that Jamba is on the comeback trail, and a big part of that comeback is the just-announced store prototype called Hello Sunshine, which updates not only the Jamba aesthetic but also the experience, leveraging self-order kiosks, digital marketing screens, and streamlined store layouts. It also provides strategic cost efficiencies and financial incentives for franchisees. Nathan joined the podcast to talk about the art of designing a new store prototype and why finding efficiencies isn't always about drastic change. In this conversation, you'll find out why:

  • If you're looking for a refresh, start with your roots
  • Even if you go back to your roots, your brand refresh requires newness and innovation
  • How you make guests feel is as important as the product they consume
  • Great hospitality transcends the convenience of an experience
  • You can't go halfway on brand evolution
  • Efficiency doesn't always require blowing up the model for cost savings

Have feedback or ideas for Take-Away? Email Sam at sam.oches@informa.com.

Eye On A.I.
#244 Yoav Shoham on Jamba Models, Maestro and The Future of Enterprise AI

Mar 27, 2025 · 52:16


This episode is sponsored by the DFINITY Foundation. DFINITY Foundation's mission is to develop and contribute technology that enables the Internet Computer (ICP) blockchain and its ecosystem, aiming to shift cloud computing into a fully decentralized state. Find out more at https://internetcomputer.org/

In this episode of Eye on AI, Yoav Shoham, co-founder of AI21 Labs, shares his insights on the evolution of AI, touching on key advancements such as Jamba and Maestro. From the early days of his career to the latest developments in AI systems, Yoav offers a comprehensive look into the future of artificial intelligence.

Yoav opens up about his journey in AI, beginning with his academic roots in game theory and logic, followed by his entrepreneurial ventures that led to the creation of AI21 Labs. He explains the founding of AI21 Labs and the company's mission to combine traditional AI approaches with modern deep learning methods, leading to innovations like Jamba, a highly efficient hybrid AI model that's disrupting the traditional transformer architecture.

He also introduces Maestro, AI21's orchestrator that works with multiple large language models (LLMs) and AI tools to create more reliable, predictable, and efficient systems for enterprises. Yoav discusses how Maestro is tackling real-world challenges in enterprise AI, moving beyond flashy demos to practical, scalable solutions.

Throughout the conversation, Yoav emphasizes the limitations of current large language models (LLMs), even those with reasoning capabilities, and explains how AI systems, rather than just pure language models, are becoming the future of AI. He also delves into the philosophical side of AI, discussing whether models truly "understand" and what that means for the future of artificial intelligence.

Whether you're deeply invested in AI research or curious about its applications in business, this episode is filled with valuable insights into the current and future landscape of artificial intelligence.

Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Introduction: The Future of AI Systems
(02:33) Yoav's Journey: From Academia to AI21 Labs
(05:57) The Evolution of AI: Symbolic AI and Deep Learning
(07:38) Jurassic One: AI21 Labs' First Language Model
(10:39) Jamba: Revolutionizing AI Model Architecture
(16:11) Benchmarking AI Models: Challenges and Criticisms
(22:18) Reinforcement Learning in AI Models
(24:33) The Future of AI: Is Jamba the End of Larger Models?
(27:31) Applications of Jamba: Real-World Use Cases in Enterprise
(29:56) The Transition to Mass AI Deployment in Enterprises
(33:47) Maestro: The Orchestrator of AI Tools and Language Models
(36:03) GPT-4.5 and Reasoning Models: Are They the Future of AI?
(38:09) Yoav's Pet Project: The Philosophical Side of AI Understanding
(41:27) The Philosophy of AI Understanding
(45:32) Explanations and Competence in AI
(48:59) Where to Access Jamba and Maestro
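
Since the episode closes on where to access Jamba, here is a rough, non-authoritative sketch (not from the episode) of loading an open Jamba checkpoint with the Hugging Face transformers library. The "ai21labs/Jamba-v0.1" repo id, precision, and hardware assumptions are illustrative, not details confirmed by the show.

# Illustrative sketch only: run an open Jamba checkpoint with Hugging Face transformers.
# Assumes the "ai21labs/Jamba-v0.1" repo id, a transformers release with Jamba support,
# and enough GPU memory for the full model; these are assumptions, not episode details.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"  # assumed checkpoint id; substitute the one you actually use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to cut memory use
    device_map="auto",           # requires the accelerate package
)

prompt = "In two sentences, explain what a hybrid Transformer-Mamba model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))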

Retail Daily Minute
Wayfair Launches Product Verification, Walmart Partners with JPMorgan, and Jamba Debuts “Hello Sunshine” Store Design

Mar 25, 2025 · 5:09


Welcome to Omni Talk's Retail Daily Minute, sponsored by Mirakl. In today's Retail Daily Minute:

  • Wayfair Rolls Out “Verified” Product Quality Program – Wayfair introduces a new Verified badge for rigorously tested products, aiming to boost customer trust and set higher quality standards across its home goods marketplace.
  • Walmart Taps JPMorgan to Streamline Seller Payments – Walmart teams up with JPMorgan Chase to accelerate payment processing for marketplace sellers, enhancing cash flow and embracing the trend of embedded finance.
  • Jamba Unveils “Hello Sunshine” Store Format – Jamba's new store design emphasizes digital convenience, operational efficiency, and vibrant branding, with a focus on self-order kiosks and high-traffic expansion opportunities.

The Retail Daily Minute has been rocketing up the Feedspot charts, so stay informed with Omni Talk's Retail Daily Minute, your source for the latest and most important retail insights. Be careful out there!

Do This, NOT That: Marketing Tips with Jay Schwedelson l Presented By Marigold

In this episode of Do This, Not That, host Jay Schwedelson talks with Matt Bettis, Senior Director of Customer Engagement and Loyalty at GoTo Foods, about loyalty programs, data collection strategies, and customer engagement in the food industry. Matt shares his journey from computer engineering to marketing, insights on loyalty programs for brands like Auntie Anne's and Cinnabon, and tips for standing out in a competitive space.

Best Moments:
  • (00:40) Introduction to Matt Bettis and GoTo Foods brands
  • (03:15) Matt's career transition from computer engineering to marketing
  • (05:50) Overview of GoTo Foods' loyalty program and customer base
  • (10:45) Data collection strategies and progressive profiling
  • (13:29) Importance of welcome series in customer engagement
  • (15:31) Creating unique brand experiences to stand out in a competitive industry
  • (17:02) Matt's Halloween costume success story
  • (19:56) Connecting with Matt Bettis on LinkedIn

Guest Bio:
Matt Bettis is the Senior Director of Customer Engagement and Loyalty at GoTo Foods, where he oversees loyalty programs for popular brands like Auntie Anne's, Carvel, Cinnabon, Jamba, and Moe's Southwest Grill. With a background in computer engineering, Matt has successfully transitioned from advising the Pentagon on technology strategy to leading customer engagement initiatives in the food industry.

Check out our FREE + VIRTUAL EVENTS! -> EVENTASTIC.com | GuruConference.com | DeliveredConference.com

MASSIVE thank you to our Sponsor, Marigold!! Looking to master consumer engagement in 2025? The 2025 Consumer Trends Index from Marigold reveals how AI, economic pressures, and personalized marketing are shaping consumer expectations. Uncover data-driven insights to foster stronger brand relationships, strike the right balance between personalization and privacy, and turn casual customers into loyal advocates. Download the 2025 Consumer Trends Index today at meetmarigold.com/guru and stay one step ahead of evolving consumer demands!

Menu Feed
2025 menu predictions and Dry January news

Jan 7, 2025 · 32:45


Welcome to a new year of Menu Talk. On this week's podcast, Bret Thorn, senior food & beverage editor of Nation's Restaurant News and Restaurant Hospitality, and Pat Cobe, senior menu editor at Restaurant Business, talk trends. New Year's Day marks the start of Dry January, which seems to motivate people to moderate alcohol consumption for a while, even if they fall off the wagon before the month is out. Pat has tried it and stuck with it twice, but this year she's going with “Damp” January instead, cutting back without completely abstaining.

However Dry January shakes out, the hosts agree that the quantity and quality of mocktails at restaurants and bars are much improved. Bret recently wrote about how the complexity and craftsmanship of spirit-free options offer non-drinking guests an experience that's not at all diminished. In fact, spirit-free pairings or smaller pours with a tasting menu can actually enhance rather than dull the experience.

Aside from the spirit-free trend, which Pat and Bret see continuing, we chatted about the abundance of food and drink predictions that have landed in our inboxes. Will the sweet-heat or “swicy” flavor trend carry into 2025, and what will be the “it” cuisine this year? And what's with all the brown sugar and espresso on the beverage side? Plus, what happened to all the healthy menu items that usually launch in January?

Tune in to find out the latest, plus Pat shares an interview with Nathan Louer, chief brand officer at Jamba. He discusses how Jamba has evolved from a juice and smoothie concept to a destination for meal replacements and snacks that balance health and indulgence. Louer and his team are focusing innovation on the core menu, introducing new categories including bowls, blended coffees and bites. Give a listen.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
2024 in Post-Transformers Architectures (State Space Models, RWKV) [LS Live @ NeurIPS]

Dec 24, 2024 · 43:02


Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (as we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: “efficient models”, “retentive networks”, “subquadratic attention” or “linear attention”, but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture. So, for lack of a better term, we decided to call this segment “the State of Post-Transformers” and fortunately everyone rolled with it.

We are fortunate to have two powerful friends of the pod to give us an update here:

* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year
* Recursal AI: with CEO Eugene Cheah, who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers.

We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternatives, it ended up as a joint presentation instead.

Full Talk on Youtube. Please like and subscribe!

Links

All the models and papers they picked:

* Earlier Cited Work
* Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
* Hungry hungry hippos: Towards language modeling with state space models
* Hyena hierarchy: Towards larger convolutional language models
* Mamba: Linear-Time Sequence Modeling with Selective State Spaces
* S4: Efficiently Modeling Long Sequences with Structured State Spaces
* Just Read Twice (Arora et al)
* Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference.
However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. * To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.* Jamba: A 52B Hybrid Transformer-Mamba Language Model* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. * This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. 
Core designs include: * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. * (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. * As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. * RWKV: Reinventing RNNs for the Transformer Era* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability. * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference. * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.* LoLCATs: On Low-Rank Linearizing of Large Language Models* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs. * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute. * We base these steps on two findings. * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA). * LoLCATs significantly improves linearizing quality, training efficiency, and scalability. 
We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU. * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens. * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work). * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.Timestamps* [00:02:27] Intros* [00:03:16] Why Scale Context Lengths? or work on Efficient Models* [00:06:07] The Story of SSMs* [00:09:33] Idea 1: Approximation -> Principled Modeling* [00:12:14] Idea 3: Selection* [00:15:07] Just Read Twice* [00:16:51] Idea 4: Test Time Compute* [00:17:32] Idea 2: Hardware & Kernel Support* [00:19:49] RWKV vs SSMs* [00:24:24] RWKV Arch* [00:26:15] QWRKWv6 launch* [00:30:00] What's next* [00:33:21] Hot Takes - does anyone really need long context?Transcript[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks Our next keynote covers the State of Transformers alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Chia of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Veepal Vedprakash introducing them.[00:00:49] AI Charlie: And CTO CE Zhang joining us to talk about how they are building together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from Red Pajama V2, Flash Attention 3, Mamba 2, Mixture of Agents.[00:01:15] AI Charlie: Based, Sequoia, Evo, Dragonfly, Danfoo's Thunder Kittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1. 5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on device, end Energy Usage Sensitive Windows Copilot Use Cases and has launched the first updates on RWKV v6, codenamed Finch and Goldfinch.[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRdata UKv6, a QEN32B model [00:02:00] modified with RDWKV linear attention layers. Eugene has also written the most single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.[00:02:27] Intros[00:02:27] Dan Fu: Yeah, so thanks so much for having us. 
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean Having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI 01 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit Interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little Dolly 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, But for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. 
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
Actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators that not only were you sure, you're not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called states based model. So here the seminal work is, is one about our work queue in 2022.[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.[00:09:33] Idea 1: Approximation -> Principled Modeling[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example a transformer like Next Token Prediction Architecture. So some of those early states-based model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principle theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality and. When this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like long range arena, some long sequence evaluation benchmarks, There was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that What's so influential about these states based models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't paralyze as well as detention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. 
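
(As a rough illustration of the recurrence-as-convolution idea described above, and not code from the talk: a linear time-invariant recurrence h_t = A h_{t-1} + B x_t with output y_t = C h_t unrolls into a causal convolution of the input with the kernel K_i = C A^i B, and that convolution can be evaluated with the FFT in roughly n log n time. A minimal NumPy sketch, with toy sizes and random matrices standing in for a trained SSM layer:)

# Illustration only: the same LTI recurrence computed two ways, which match exactly.
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 8                              # sequence length, hidden state size (toy values)
A = np.diag(rng.uniform(0.5, 0.95, d))     # stable diagonal state matrix (stand-in for a trained layer)
B = rng.normal(size=(d, 1))
C = rng.normal(size=(1, d))
x = rng.normal(size=n)

# 1) Sequential recurrence: O(n) steps, but inherently serial.
h = np.zeros((d, 1))
y_recurrent = np.zeros(n)
for t in range(n):
    h = A @ h + B * x[t]
    y_recurrent[t] = (C @ h).item()

# 2) Same computation as a causal convolution with kernel K_i = C A^i B, evaluated via FFT.
K = np.array([(C @ np.linalg.matrix_power(A, i) @ B).item() for i in range(n)])
pad = 2 * n                                # zero-pad so circular convolution equals causal convolution
y_fft = np.fft.irfft(np.fft.rfft(x, pad) * np.fft.rfft(K, pad), pad)[:n]

print(np.allclose(y_recurrent, y_fft))     # True: both paths give the same outputs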
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like flash fft conf, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, We are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because languages, It's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these hyena models were One way you can do this is by just adding some simple element wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. 
So, so the four key ideas are instead of just doing a simple linear attention approximation, instead take ideas that we know from other fields like signal processing, do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want. Hardware and kernel support from day one. So, so even if your model is theoretically more efficient if somebody goes and runs it and it's two times slower one of the things that, that we've learned is that if, if you're in that situation, it's, it's just gonna be dead on arrival.[00:17:49] Dan Fu: So you want to be designing your architectures one of the key, key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state and, and really focus on that as a key decider of quality. And finally, I think one of the, the, the emerging new, new things for, for this line of work and something that's quite interesting is, What are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to relative to what you might do for a standard transformer? I'll briefly end this section. So I've labeled this slide where we are yesterday because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of the, these efficient alternative models were so AI2 trained this hybrid MOE called Jamba.[00:18:40] Dan Fu: That, that, that seems, that is currently the state of the art for these non transformer architectures. There's this NVIDIA and MIT put out this new diffusion model called SANA recently that one of their key key observations is that you can take a standard diffusion transformer diffusion model, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger much larger images, much, much Much larger sequences more efficiently.[00:19:07] Dan Fu: And and one thing that I don't think anybody would have called when a few years ago is that one of those gated SSM, gated states based models ended up on the cover of Science because a great group of folks went and trained some DNA models. So that's Michael Polley, Eric Yuen from from Stanford and the Arc Institute.[00:19:26] Dan Fu: So it's, we're really at an exciting time in 2024 where these non transformer, post transformer architectures are showing promise across a wide range. Across a wide range of, of modalities, of applications, and, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. Yeah. So, I think one common questions that we tend to get asked, right, is what's the difference between [00:20:00] RWKV and state space? So I think one of the key things to really understand, right the difference between the two groups, right, is that we are actually more like an open source, random internet meets academia kind of situation.[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we, we basically look at RNNs and linear intention when intention is all you need came out, and then we decided to like, hey there is a quadratic scaling problem. Why don't we try fixing that instead? 
So, so, so we end up developing our own branch, but we end up sharing ideas back and forth.[00:20:30] Eugene Cheah: So, and, and we do all this actively in Discord, GitHub, etc. This was so bad for a few years, right, that basically, the average group's H index was so close to zero, right, Illuter. ai actually came in and helped us write our first paper. Great, now our H index is now three, apparently. So, so, so, but, but the thing is, like, a lot of these experiments led to results, and, and, essentially, essentially, we we took the same ideas from linear attention, [00:21:00] and we built on it.[00:21:01] Eugene Cheah: So, to take a step back into, like, how does RWKB handle its own attention mechanic and achieve the same goals of, like, O and compute, respectively, and in focus of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute, that's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train to even 200 languages to cover all languages in the world. But at the same time, we work on this architecture, To lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKB break the dependency of LSTM token flow? Because I think to understand architecture, right, it's probably easier to understand it from the RNN lens.[00:21:46] Eugene Cheah: Because that's where we built on. We all, we all state space kind of like try to, try to start anew and took lessons from that and say, So there's a little bit of divergence there. And AKA, this our version of linear attention. So to take step back [00:22:00] all foundation models, be it transformers or non transformers at a very high level, right?[00:22:05] Eugene Cheah: Pumps in the token. I mean, text that things into embeddings and go through a lot of layers. Generate a lot of states where the QKV cache or be iron in states or RW KB states. And outputs and embedding, they are not the same thing. And we just take more layers and more embeddings. And somehow that magically works.[00:22:23] Eugene Cheah: So, if you, if you remember your ancient RNN lessons which we, which we, which we we call best learning these days the general idea is that you have the embedding information flowing all the way up, and when, and you take that information and you flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: So, this is how it generally works. Kapati is quoted saying that RNNs are actually unreasonably effective. The problem is this is not scalable. To start doing work on the second token, you need to wait for the first token. And then you need to, and likewise for the third token and fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So, so, so, you [00:23:00] can have a H100 and you can't even use 1 percent of it. So, so that's kind of why RNNs didn't really take off in the direction that we wanted, like, billions of parameters when it comes to training. So, what did RDAP KV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for RNN. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then we were like, hey, then no one cared because the loss was crap, but how do we improve that? 
And that's essentially where we move forward, because if you see this kind of flow, right, you can actually get your GPU saturated quickly, where it essentially cascades respectively.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. So it's like, once you get your first layer, your token to be computed finish. You start to cascade your compute all the way until you are, Hey, I'm using 100 percent of the GPU. So we, we worked on it, and we started going along the principle of that as long as we keep this general architecture [00:24:00] where, where we can cascade and, and be highly efficient with our architecture, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, you ask us, if you ask me to explain some things in the paper, right, officially in the paper, I'll say we had this idea and we wrote it this way. The reality is someone came with a code, we tested it, it worked, and then we rationalized later. So, so the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: The idea behind rwkbr is that we generally have two major blocks that we do.[00:24:30] Eugene Cheah: We call time mix and channel mix. And time mix generally handles handles long term memory states, where essentially, where essentially where we apply the matrix multiplication and Cilu activation functions into processing an input embedding and an output embedding. I'm oversimplifying it because this, This calculation changed every version and we have, like, version 7 right now.[00:24:50] Eugene Cheah: ChannelMix is similar to Base in the sense that it does shorter term attention, where it just looks at the sister token, or the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers itself, because, like, we do have three papers on this.[00:25:09] Eugene Cheah: Basically, RWKB, RNN for the transformer, ERA, Ego and Pinch, RWKB, Matrix Value State. This is the updated version 5, version 6. And Goldfinch is our, is, is, is, is our hybrid model respectively. We are writing the paper already for V seven and which is, which is for R wk V seven. Called, named Goose, or architectures are named by Bird.[00:25:30] Eugene Cheah: And, I'm going to cover as well, qrwkb, and mama100k, and rwkb, and Where did that lead to? Great! Because we are all GPU poor and to be clear, like, most of this research is done, like, only on a handful H100s, which I had one Google researcher told me that was, like, his experiment budget for a single researcher.[00:25:48] Eugene Cheah: So, our entire organization has less compute than a single researcher in Google. So We, we, one of the things that we explored into was to how do we convert transformer models instead? Because [00:26:00] someone already paid that billion dollars, a million dollars onto training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And, and to, I believe, together AI worked on the lockets for, for the Lambda side of things, and, and we took some ideas from there as well, and we essentially did that for RWKB.[00:26:15] QWRKWv6 launch[00:26:15] Eugene Cheah: And that led to, Q RWKB6, which we just dropped today, a 32 bit instruct preview model, where we took the Quen 32 bit instruct model, freeze the feedforward layer, remove the QKB attention layer, and replace it with RWKB linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the rwkv channel mix layer, we only have the time mix layer. 
But but once we do that, we train the rwkv layer. Important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer, and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly, And, to be honest, to the frustration of the R. W. [00:27:00] KV MOE team, which ended up releasing the model on the same day, was that, with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original QUAN32B model. So, in fact, when the first run, right, that completely confused us, it was like, and I was telling Daniel Goldstein, Smirky, who kind of leads most of our research coordination, When you pitched me this idea, you told me at best you'll get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the challenge and score and Winograd score will shoot up. I don't know what's happening there. But it did. MMLU score dropping, that was expected. Because if you think about it, when we were training all the layers, right, we were essentially Like, Frankenstein this thing, and we did brain damage to the feedforward network layer 2 with the new RWKB layers.[00:27:47] Eugene Cheah: But, 76%, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because We are already now in the process of converting to 7TB. We are now, this is actually extremely compute efficient to test our attention mechanic.[00:28:10] Eugene Cheah: It's like, it becomes a shortcut. We can, we are already planning to do our version 7 and our hybrid architecture for it. Because we don't need to train from scratch. And we get a really good model out of it. And the other thing that is uncomfortable to say is that because we are doing right now on the 70b is that if this scales correctly to 128k context length, I'm not even talking about a million 128, majority of enterprise workload today is just on 70b at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmark matches it, It means we can replace the vast majority of current AI workload, unless you want super long context. And then sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially, [00:29:00] we are excited about this to just push it further.[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think it's going to be exclusive to RWKB. It probably will work for Mamba as well, I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids, or Yeah, like, one of the weirdest things that I wanted to say outright, and I confirmed this with the Black Mamba team and the Jamba team, which because we did the GoFinch hybrid model, is that none of us understand why a hard hybrid with a state based model to be R.[00:29:28] Eugene Cheah: QA state space and transformer performs better when, than the baseline of both. It's like, it's like when you train one, you expect, and then you replace, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both. 
And that's like one area of emulation that, like, we only have four experiments, plus four teams, that a lot more needs to be done.[00:29:51] Eugene Cheah: But, but these are things that excite me, essentially, because that is what it's potentially we can move ahead for. Which brings us to what comes next.[00:30:00] What's next[00:30:00] [00:30:00][00:30:00] Dan Fu: So, this part is kind of just some, where we'll talk a little bit about stuff that, that we're excited about. Maybe have some wild speculation on, on what, what's, what's coming next.[00:30:12] Dan Fu: And, of course this is also the part that will be more open to questions. So, a couple things that, that I'm excited about is continued hardware model co design for, for these models. So one of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.[00:30:29] Dan Fu: And one of the things that, that we found frustrating is every time that we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land, like writing these, these new efficient things. And. If we decided to change one thing in PyTorch, like one line of PyTorch code is like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with, with a library like Thunderkitten, so we, we just broke down what are the key principles, what are the key hardware things what are the key, Compute pieces that you get from the hardware. So for example on [00:31:00] H100 everything is really revolves around a warp group matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix, matrix multiply operations. So like multiplying two 64 by 64 matrices, for example. And so if you know that ahead of time when you're designing your model, that probably gives you you know, some information about how you set the state sizes, how you set the update, how you set the update function.[00:31:27] Dan Fu: So with Thunderkittens we basically built a whole library just around this basic idea that all your basic compute primitives should not be a float, but it should be a matrix, and everything should just be matrix compute. And we've been using that to, to try to both re implement some existing architectures, and also start to design code.[00:31:44] Dan Fu: Some new ones that are really designed with this core with a tensor core primitive in mind. Another thing that that we're, that at least I'm excited about is we, over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter there's been a bunch of new next generation models that are coming out.[00:32:04] Dan Fu: So there, there are. So, video generation models that can run real time, that are supported by your mouse and your keyboard, that I'm told if you play with them that, you know, that they only have a few seconds of memory. Can we take that model, can we give it a very long context length so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe, maybe use some of these new models, or some of these new video generation models that came out. So Sora came out I don't know, two days ago now. 
But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted it to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG relevant, in the case of, like, the future of state-based models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have, I'll say I found it was a little bit challenging to do research on it, because we had this experience over and over again, where you could have an embedding model of any quality, so you could have a really, really bad embedding model, or you could have a really, really [00:34:00] good one, by any measure of good.[00:34:03] Dan Fu: And for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are, like, extremely excited about the idea of RWKV or state space potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as it was previously covered, you need to test the model differently. So, think of it more along the lines of the human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll say. And we humans are not quadratic transformers. If we were, if let's say we increased our brain size for every second we live, we would have exploded by the time we are 5 years old or something like that. And I think basically, fundamentally for us, right, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc., our general idea is that instead of that expanding state, that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big of a limit is a question. Like, RWKV is running at 40 megabytes for its state. Its future version might run into 400 megabytes. That is like millions of tokens, if you're talking about, mathematically, the maximum possibility.[00:35:29] Eugene Cheah: It's just that I guess we were all more inefficient about it, so maybe we hit 100,000. And that's kind of like the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because it will choose to forget things, it will choose to remember things. 
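[Editor's note: a rough back-of-the-envelope check on the fixed-state-size argument above. The bits-per-token figure is an assumption added for illustration, not a number from the talk.]

```python
# Back-of-the-envelope: how many tokens could a fixed-size recurrent state
# hold in the best case? Both numbers below are assumptions for illustration.
STATE_BYTES = 40_000_000   # ~40 MB state, the figure mentioned in the talk
BITS_PER_TOKEN = 16        # assumed "new information" per token of text

state_bits = STATE_BYTES * 8
max_tokens = state_bits / BITS_PER_TOKEN
print(f"Ideal upper bound: ~{max_tokens / 1e6:.0f} million tokens")
# Prints ~20 million tokens. The talk's practical guess is closer to 100k,
# because a learned state is nowhere near a perfect compressor.
```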
And that's why I think that there might be some element of right, but it may not be the same right.[00:35:49] Eugene Cheah: It may be that the model learns things, and it's like, hmm, I can't remember that article. Let me do a database search. Just like us humans, when we can't remember the article in the company, we do a search on Notion. [00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are, so right now, one intuition about language models is that all those parameters are around just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is it'll just be a lot less factual about things that it knows or that it can do.[00:36:32] Dan Fu: But that points to all those weights that we're spending, all that SGD that we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe, you know, has some sort of gradient descent in it, but that would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president so that it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When 405B state space models and RAG exist, and no one does long context, who's throwing in 2 million token questions? Hot takes?[00:37:24] Dan Fu: The who's-throwing-in-2-million-token-questions, I think, is a really good question. So I actually, I was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but you know, what's the point of doing research if you can't, you know, play both sides.[00:37:40] Dan Fu: But I think for both of us, the reason that we first got into this was just from the first-principles question of: there's this quadratic thing. Clearly intelligence doesn't need to be quadratic. What is going on? Can we understand it better? You know, since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting a two million token prompt into these models. And, you know, if they are, maybe we can go, you know, design a better model to do that particular thing. Yeah, what do you think about that? You've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right? Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV. 
I use it a lot.[00:38:41] Eugene Cheah: So, some people have used it, and I think this might be where my opinion starts to differ, because I think the big labs may have a bigger role in this. Like, even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that when we need to backprop [00:39:00] against the states, we actually need to maintain the state in between the tokens for the whole token length.[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training 1 million. Which is the same for transformers, actually, but it just means we don't magically reduce the VRAM consumption in the training time space. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, right, I think O1-style reasoning might actually be pushing that direction downwards. In my opinion, and this is my partial hot take, let's say you have a super big model, and let's say you have a 70B model that may take double the tokens, but gets the same result.[00:39:51] Eugene Cheah: Strictly speaking, the 70B, and this is even for transformer or non-transformer, right, will take less resources than that 400B [00:40:00] model, even if it did double the amount of thinking. And if that's the case, and we are still all trying to figure this out, maybe the direction for us is really getting the sub-200B to be as fast and efficient as possible.[00:40:11] Eugene Cheah: We have a very efficient architecture that some folks happen to be working on, to just reason it out over larger and larger context.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a great question. I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run it forever. And at the very least it won't, like, error out on you or crash.[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, one place where the research on architectures probably ran faster than another line of research is actually the benchmarks for long context. So you turn it on forever. You want to do everything or watch everything.[00:41:16] Dan Fu: What is it that you actually wanted to do? Can we actually build some benchmarks for that? Then measure what's happening. And then ask the question, can the models do it? Is there something else that they need? Yeah, I think that if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long context benchmarks out at the same time as we started pushing context length on all these models.[00:41:41] Eugene Cheah: I will also say the use case. So like, I think we both agree that there's no infinite memory and the model needs to be able to learn and decide. 
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanism that is not based on token position is that the model doesn't suddenly become crazy when you go past the [00:42:00] 8k training context length, or a million context length.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory. Some of these things are still somewhat there. That's the whole point of why reading twice works. Things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers where they use this architecture for time series data.[00:42:26] Eugene Cheah: Weather modeling. So, you are not asking what was the weather five days ago. You're asking what's the weather tomorrow, based on the infinite length that we have, as long as this Earth and the computer keep running. And they found that it is, like, better than existing transformer or existing architectures at modeling this weather data,[00:42:47] Eugene Cheah: controlled for the param size and stuff. I'm quite sure there are people with larger models. So there are things, in this case, right, where there are future applications, if your question is just what's next and not what was 10 years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe

Behind Her Empire
Saying Yes Before You're Ready, Listening to Your Intuition & Not Letting Your Past Define You with Kat Cole, CEO of AG1 

Behind Her Empire

Play Episode Listen Later Nov 4, 2024 54:17


Kat Cole is the CEO of AG1 (formerly known as Athletic Greens), the global health company that pioneered Foundational Nutrition support with a comprehensive and convenient daily supplement. Kat has built an incredible career grounded in resilience, grit, and a passion for leadership. Growing up, Kat witnessed her mother's strength firsthand when, at just nine years old, her mom left her alcoholic dad and became a single parent to Kat and her two sisters. From an early age, Kat learned the value of hard work and determination. As the first in her family to go to college, she supported herself financially by working at Hooters, and ultimately left school to take on a corporate role with the company. By the age of 26, she was Vice President, leading global growth through franchising, training, and operations, and helped lead Hooters' sale to a private equity firm in 2010. Kat went on to spend over a decade as President and COO at Focus Brands, where she led a multi-billion-dollar portfolio of global restaurant brands, including Cinnabon, McAlister's, and Jamba.

In this conversation, Kat dives into her journey from childhood to becoming a powerhouse in business, highlighting how curiosity, a strong work ethic, and embracing change have fueled her growth from the very start. She shares her biggest lessons from running multiple brands at scale, how she gets involved to be the most effective leader, and why she prioritizes quality and constant improvement to keep brands fresh and relevant. We also discuss her experience taking a year off, her insights as an angel investor, what she looks for when investing in brands, and how she got involved with AG1. Kat also talks about the importance of listening to intuition and trusting your gut with every decision, as well as her grounding routines that keep her balanced and focused, and so much more.

In this episode, we'll talk to Kat about:
* Why we shouldn't let our past solely define us. [03:34]
* Kat's childhood and upbringing. [05:37]
* Kat's career journey, leading her to AG1. [07:58]
* Kat's drive to achieve success. [10:54]
* The importance of not knowing the truth. [16:25]
* How Kat cultivates comfort in new situations. [20:01]
* Leadership and creating a safe environment. [24:25]
* Taking a year gap to reevaluate her career path. [31:36]
* Leaving Focus Brands and transitioning to AG1. [35:20]
* AG1's commitment to product quality. [38:33]
* AG1's product iteration and innovation. [41:41]
* The necessity of evolution in business. [46:06]
* Kat's grounding routines. [49:41]

This episode is brought to you by Beeya:
* If you or anyone you know have been struggling with hormonal imbalances and bad periods, go to https://beeyawellness.com/free to download the free guide to tackling hormonal imbalances and to learn more about Beeya's seed cycling bundle.
* Plus, get $10 off your order by using promo code BEHINDHEREMPIRE10

Follow Yasmin:
* Instagram: https://www.instagram.com/yasminknouri/
* Website: https://www.behindherempire.com/

Follow Kat:
* Website: https://drinkag1.com/
* Instagram: https://www.instagram.com/drinkag1/
* Instagram: https://www.instagram.com/katcoleatl/

Hosted on Acast. See acast.com/privacy for more information.

Wer ausschenkt muss auch einschütten können.
#109: Gefangen im Jamba! Sparabo

Wer ausschenkt muss auch einschütten können.

Play Episode Listen Later Oct 23, 2024 72:57


Instagram: https://www.instagram.com/werausschenkt Playlist: https://open.spotify.com/playlist/47mxXXMymea7ghngqi0YIi?si=SkDgIMMBTjOXdYDIyC1LhA

The Brand Insider
Ep. 141 with Nathan Louer, Chief Brand Officer, Jamba

The Brand Insider

Play Episode Listen Later Oct 18, 2024 39:02


This week Erin Everhart chats with the Chief Brand Officer of Jamba, Nathan Louer.

AWR Malagasy / Malgache
1 - Tranom-bahiny 2 - Lesona tsara 1 3 - faminaniana tanteraka 04 (Tempolin'i Jerosalema) 4 - Ampitomboy ny fahalalana mialohan'ny hanambadiana 5 - Ny fanasitranana ilay jamba fiz 2

AWR Malagasy / Malgache

Play Episode Listen Later Oct 9, 2024 59:00


1 - Tranom-bahiny 2 - Lesona tsara 1 3 - faminaniana tanteraka 04 (Tempolin'i Jerosalema) 4 - Ampitomboy ny fahalalana mialohan'ny hanambadiana 5 - Ny fanasitranana ilay jamba fiz 2

AWR Malgache
1 - Tranom-bahiny 2 - Lesona tsara 1 3 - faminaniana tanteraka 04 (Tempolin'i Jerosalema) 4 - Ampitomboy ny fahalalana mialohan'ny hanambadiana 5 - Ny fanasitranana ilay jamba fiz 2

AWR Malgache

Play Episode Listen Later Oct 9, 2024 59:00


1 - Tranom-bahiny 2 - Lesona tsara 1 3 - faminaniana tanteraka 04 (Tempolin'i Jerosalema) 4 - Ampitomboy ny fahalalana mialohan'ny hanambadiana 5 - Ny fanasitranana ilay jamba fiz 2

AWR Malagasy / Malgache
1 - Izay tianao atao aminao de ataovy @ hafa 2 - Vahaolan`ny fihenan`ny firaisana ara-nofo 1 3 - Fomba fitarian'I Jesosy 4 - Didy 12 Ray aman dReny sy ny mpanabe 09 5 - Nampahiratra lehilahy teraka jamba Jesosy

AWR Malagasy / Malgache

Play Episode Listen Later Oct 8, 2024 59:00


1 - Izay tianao atao aminao de ataovy @ hafa 2 - Vahaolan`ny fihenan`ny firaisana ara-nofo 1 3 - Fomba fitarian'I Jesosy 4 - Didy 12 Ray aman dReny sy ny mpanabe 09 5 - Nampahiratra lehilahy teraka jamba Jesosy

AWR Malgache
1 - Izay tianao atao aminao de ataovy @ hafa 2 - Vahaolan`ny fihenan`ny firaisana ara-nofo 1 3 - Fomba fitarian'I Jesosy 4 - Didy 12 Ray aman dReny sy ny mpanabe 09 5 - Nampahiratra lehilahy teraka jamba Jesosy

AWR Malgache

Play Episode Listen Later Oct 8, 2024 59:00


1 - Izay tianao atao aminao de ataovy @ hafa 2 - Vahaolan`ny fihenan`ny firaisana ara-nofo 1 3 - Fomba fitarian'I Jesosy 4 - Didy 12 Ray aman dReny sy ny mpanabe 09 5 - Nampahiratra lehilahy teraka jamba Jesosy

Erfolgreich Alpha – WiWo Chefgespräch
Taxfix-Chef Ott: „Die meisten Menschen können mit der Steuersprache nichts anfangen“

Erfolgreich Alpha – WiWo Chefgespräch

Play Episode Listen Later Oct 4, 2024 50:43


Did you file your tax return on time? Or are you perhaps one of the roughly 12 million Germans who don't file a tax return at all? That may be good for your nerves, but it isn't good for your wallet: on average, you're forgoing about 1,000 euros in refunds. Martin Ott is the head of the service provider Taxfix, a financial platform for digital tax returns: with around 400 employees, the company handles more than five million tax returns in Germany, Italy, Spain, and the UK. Before becoming CEO of Taxfix three years ago, Ott worked at Jamba, Facebook, and WeWork. He talks with Varinia Bernau about tax tips, what he learned from the Samwer brothers and Mark Zuckerberg, and also about how he finds his own balance outside of work. Contributors: Johannes Grote, Anna Hönscheid *** The exclusive subscription offer for all listeners of the WirtschaftsWoche Chefgespräch: wiwo.de/chef-abo Help us keep improving our podcasts. Your opinion matters to us: www.wiwo.de/zufriedenheit [More about the offers from our advertising partners can be found HERE](http://cmk.wiwo.de/cms/articles/15602/anzeige/podcast-werbepartnerinnen/hier-gibt-s-weitere-infos-zu-den-angeboten-unserer-werbepartner-innen)

Reversim Podcast
475 Jamba with Hofit from AI21

Reversim Podcast

Play Episode Listen Later Aug 13, 2024


[Link to the mp3 file] Episode 475 of Reversim, recorded on July 23, 2024, at the height of summer. Ori and Ran host Hofit from AI21 Labs to talk about large language models with a really, really long context (and also to explain what that means). 00:40 Hofit and AI21 (Ran) But just before that: about you, Hofit, and about the company. Who are you? What have you done up to now? What do you do at the company? (Hofit) So I'm Hofit, and I currently research and develop language models at AI21 Labs. I was born and raised in Ramla, if we go all the way back to the beginning. At 14 I had already started programming - our school introduced us to it relatively early - and within two days I knew that this was what I was going to do. Then I got to intelligence, to a military unit, an intelligence unit - I actually didn't program there, but right after the army I started a computer science degree at the Hebrew University. Very quickly I realized I wanted to combine it with mathematics. I fell in love with mathematics - theoretical mathematics, it's important to note, because in high school I didn't like it that much... so I did a combined degree with mathematics. (Ran) So if I ask you right now for the formula of a determinant, you won't be able to pull it out for me... (Hofit) Definitely not... (Ran) ...but group theory - gladly... (Hofit) Group theory - I hope that will be the…

AWR Malagasy / Malgache
1 - Ny tena fitiavana 2 - mofo mamy @ voatavo 3 - Fanatitra eken'Andriamanitra 4 - Michel Jamba (Vokatry ny AWR + tononkalo (Ry bemarenina) 5 - Ny fanaon`olombelona sa ny didin`Andriamanitra?

AWR Malagasy / Malgache

Play Episode Listen Later Aug 4, 2024 59:00


1 - Ny tena fitiavana 2 - mofo mamy @ voatavo 3 - Fanatitra eken'Andriamanitra 4 - Michel Jamba (Vokatry ny AWR + tononkalo (Ry bemarenina) 5 - Ny fanaon`olombelona sa ny didin`Andriamanitra?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thank you for 1m downloads of the podcast and 2m readers of the Substack!

Neil Rogers Show
Neil Rogers Show (August 10, 2004)

Neil Rogers Show

Play Episode Listen Later Jul 12, 2024 186:57


Howard Stern is coming to WQAM, Joe Rose is on The Ticket, Hank has been suspended, and Neil can't talk about it. They are going to have a triple "dump" system, with Duff having the final say. Jamba's real name is Jerone. Follow the link to see Boca, and Skippy on Segways, watch the Ricky Williams Tribute Video, and read David Ross's Confidential Howard Stern

The VBAC Link
Episode 313 Happy Birthday to Meagan's VBA2C Babe + Walking Down Memory Lane with her Husband, Ric

The VBAC Link

Play Episode Listen Later Jul 1, 2024 44:38


“Trust your partner. Trust the mom. They know things better than you do.”

Meagan's husband, Ric, joins the podcast today as they celebrate their VBA2C baby's 8th birthday! Ric gives the perspective from the partner's side of things as they both share details of Webster's birth story. He talks about some not-so-proud moments and is the first to admit how little he knew about how to support a VBA2C labor– especially one that went over 40 hours! But through it all, Ric came to understand the importance of doulas and how magical it can be to have not one but five doulas! He agrees that the births of each of their children ultimately was a special journey and brought the two of them closer together.

Needed Website
How to VBAC: The Ultimate Prep Course for Parents
Full Transcript under Episode Details

01:17 Review of the Week
04:10 Deciding to birth out of the hospital
06:35 Agreeing on a birth center
10:57 PROM for the third time
15:05 Laboring loudly
20:23 Relying on the doulas
28:33 Navigating doubt and transition on the toilet
34:25 Pushing Webb out in three pushes
37:08 Passing out after birth
40:37 It takes a village
42:45 Ric's advice to other dads

Meagan: Today is my VBAC baby's birthday. I cannot believe that it has been 8 years since that little boy joined our family and today I wanted to share or reshare his story. I know I've done it in the past, but I invited my husband, Ric, to share the story again for you and maybe I might just give him a couple of questions and see how he felt about it from his perspective. When we were going through pregnancy and preparing for me, it was just like, This is what I want to do. This is what I want to do. I would always go and say, “Hey, these are my thoughts”, and to be honest, I don't know if I even gave him a ton of opportunity to share his exact thoughts because I was so driven and just wanted to get this.

We are going to dive more into his thoughts and his perspective on the birth because we know so many dads out there are also a little hesitant when it comes to the idea of VBAC because the world as we know it talks about VBAC in a very poor manner and it can be a very scary thought. So we will be diving into that today in just one moment.

01:17 Review of the Week

Meagan: We have a Review of the Week so I wanted to get to that before we get into Webster's birth story. This is from Katiewarren11. It says, “I wish I would have found this sooner.” It says, “I love the show. I wish I would have heard these before my last baby. I was planning for a VBAC 7 years after my first baby and just thought it would happen. I didn't realize I might have to fight for it.” That just gave me the chills. It says, “I got to the week of my due date and my body didn't seem at all ready. Then they were estimating her to be 9 pounds, 12 ounces, and the doctor told me, ‘No option. You are getting a C-section.' After listening to these stories, I now know that there were other options.”

Thank you, Katiewarren11, for sharing your review. I want you to know that you are not alone. There are so many of us who get to the point at the end of our due date. We are being told that our babies are too big or our bodies aren't working because they are not dilated yet or whatever it may be. There are lots of scenarios that people are told, but there are options. You have options and that is definitely what this podcast is about is helping you learn and grow and know your options. So thank you, Kate, again, and as always, if you have one moment, we would love to hear your review of the show.
It really does help the show grow. It helps other Women of Strength find these stories and help them know their options as well. You can do that on Apple Podcasts. You can even Google “The VBAC Link” and leave us a review there or if you would like, you can email us a review and let us know what your thoughts are. We always throw those into our spreadsheet as well. Thank you so much. 04:10 Deciding to birth out of the hospitalMeagan: Okay, you guys. As I mentioned, I have my husband, Ric. Hey, hon. Ric:  Hello, everyone. Meagan: I'm sure he is just so excited that he is here. But really, I wanted to go through the experience from your perspective on VBAC and not only just the birth but also before and us deciding to birth out of the hospital. I was already kind of a crazy pants when we were trying to conceive because we really wanted a boy so I was really dialed into that. Then once we found out we were pregnant, I really, really just wanted to find someone to help me through the journey of VBAC. I interviewed many, many, many providers in fact, even before I was pregnant. We know on the show, I have talked about it, that it is really important to interview and look for providers before you are pregnant if you can but I ended up finding a provider actually just right after I found out I was pregnant or right before I found out I was pregnant. We went in and it seemed like a really great fit. Ric, you seemed like you were pretty on board with the provider shift at that point. ric: Yeah, I mean those who know Meagan know that when she is passionate about something, it is very unlikely that she will be turned away. Meagan: Convinced otherwise. Ric: Convinced otherwise so I just kind of went with the flow. But yeah, the provider seemed great. You seemed happy which was most important. Meagan: Yeah. And just kind of a quick little back summary, how did you feel about the C-sections? Did they bug you at all? Did they affect you at all? Did they just seem normal?Ric: Yeah, I mean, I didn't know anything other than the C-sections so it was normal. It was just that you were very unhappy with them which was hard for me. It was hard because I was stoked that we had the babies and you were upset with yourself, with the provider, and I didn't share those feelings because I didn't know. Meagan: Yeah, It was hard because like you said, we were so happy that we had our baby but I was in this cloud of doom and just and unsettled cloud. Ric: Dissatisfied. Meagan: Yeah, I was dissatisfied. 06:35 Agreeing on a birth centerMeagan: Okay, so we found this provider and everything is going really great. This provider at the time was the VBAC provider in Utah. Everybody went to him and he was amazing. He flat-out said after reviewing my op-reports that my pelvis was too small and my baby would probably never come out of my pelvis and that my body didn't know how to dilate, he really agreed that I probably just wasn't given a fair chance and he didn't understand why we wouldn't be able to go forward.But at 24 weeks, I attended a birth just before that with a midwife out of the hospital that blew me away. I immediately knew that I wanted to go talk to her which was kind of interesting because we never really discussed birthing outside of the hospital, but I went and met with her and I told you, “Hey, I want to birth out of the hospital.” Now, you knew nothing but C-sections. 
You were okay with me finding a provider, but how did you feel about the idea of birthing outside of the hospital?Ric: I don't think I was that excited about it. I was okay switching providers, but not being in the hospital was worrisome. I actually think, didn't you broach the subject on birthing from home? Meagan: I think I did. Ric: I immediately put the kibosh on that. Meagan: You were like, “No.” Ric: So I think when you initially discussed birthing outside of a hospital, you gave a couple of options of a birth center or a birth from home so I completely– that was too big of a jump for me from hospital to home so we went and did we go to multiple birth centers or just one? Meagan: We just went to one and we interviewed with a different provider than the one I met, but it was at the same birth center that the provider I met would have birthed at so we met with another midwife at the birth center. Ric: Right. It was awesome. Meagan: It was awesome.Ric: No, the midwife was cool too. She was great. Meagan: Yeah, she was really awesome. Yeah. So as we were there, did you feel like, Oh, okay. Once you saw it, did you feel more comfortable? Ric: Yeah, because it seemed more medical. I don't know the word for it, but it just seemed like, Oh, hey. Things looked sterile which was a big deal for me and it just made it seem like, Yeah, it's not the hospital, but– can I swear on the podcast? Meagan: Yeah, sure. Ric: –if shit hit the fan, then we were in a better circumstance than trying to find gauze and stuff at our home. Meagan: At our home, yeah. Which for those who are birthing at home, typically your midwives would bring all of that to the birth but we didn't even get there for me to explain that. Ric: I don't need to know if I would have even let you. Meagan: Get to that point? Ric: Yeah. Meagan: Okay, so then fast forward. Labor begins. Actually, we hired a doula. Ric: A doula? Meagan: Multiple doulas.Ric: You had these two mentors in the doula community here and you said, “I definitely want to hire them,” so we did. Those two mentors were in a group of three. Not only that, your really good buddy who became a doula at about the same time and had gone through the doula course with you wanted to attend, and then your cousin who is about as much of a doula as you can be without being a doula– Meagan: Seriously, yeah. Ric: Also had to attend the birth. Plus the midwife–Meagan: And the assistant. Ric: And the assistant. There were a lot of people in the room. Meagan: There were a lot of people in the room.Ric: Initially when you said, “Hey, look. We are going to hire a doula,” because you were doing the stuff, I was totally on board. I had no idea how many doulas would actually show up. Meagan: Yeah. Ric: But they did and it was fine. Meagan: It was great. And rewinding back, remember back with Lyla when I asked you if we could hire a doula or bridge that, you weren't super keen on that idea. Ric: I don't remember that conversation.Meagan: You don't. Ric: But I remember our nurse being a doula. Meagan: Yes. Ric: And she was awesome. Meagan: She was fantastic. Ric: And that solidified your desire to be a doula. Yeah. Meagan: Absolutely. 10:57 PROM for the third timeMeagan: Okay, so with all three of our kids, I for some reason have PROM. If you don't know what PROM means, that's the premature rupture of membranes. My water broke with each kiddo and my body took its sweet old time to kick into labor. They say only 10% of women will experience that, but we are 3 for 3. 
Ric: Do they know the story about where I was when your water broke with Lyla? Meagan: No, I don't know. That was another reason why I wish we had a doula. So going back to Lyla's birth, my second C-section–Ric: Kind of just showing the progress of where Meagan began as a, “Hey, look. I trust my doctors. I'm going to do everything that they say on the first birth.” The second birth opened my eyes as to how Meagan was going to control the situation as much as she possibly could. So yeah, tell them where I was when your water broke. Meagan: So you were in Texas when my water broke with Lyla. As he mentioned, my cousin is pretty much a doula without the doula training and she just is so loving and caring. She was really excited because we wanted this VBAC. I wanted this VBAC really, really badly. So yeah. Ric was out of town and my water broke. I was like, “Uh, you should probably come home.” Nothing was really happening at all really. I was just leaking. Yeah. You got home probably 6-7 hours later. Ric: No, it was about 10. Meagan: Was it about 10? Ric: Yes and I assumed you were going to go to the hospital. Meagan: Yeah, you were not happy when I was not at the hospital when you got home. Ric: I walked in and you were sitting there naked in the bathtub and I'm like, “What in the world are you doing? You are supposed to be in the hospital. Your water broke.” Because for me, your water breaks, you go to the hospital. For Meagan, that's not necessarily the case. Meagan: Well, yeah. I think going back to what you were saying, a lot of providers actually say, “If your water breaks, come right in,” even if labor is not going on. Through my research with Lyla and the VBAC, I realized that I didn't necessarily need to just run right into the hospital. I checked my vitals. All was well. Everything was good, so we stayed and labored at home. Plus, I was waiting for you to get in town. Ric: Yeah, but it kind of prepped me for what the next birth was going to look like. Obviously, that birth ended up in another C-section and you were really disappointed after that one. You worked really, really hard. Meagan: I was, yeah. Ric: Then with the next one, when you were going through options of birth centers, doulas, and midwives, that instance where I flew home in an emergency fashion as quickly as I could and came home to find you in the bathtub realizing, Meagan is going to do what Meagan wants to do. Meagan: Yeah, so when I told you, “Hey, let's birth out of the hospital”, did you feel like, She is going to do whatever she wants to do anyway? Or were you more comfortable with the birth centers? Were you okay with that? Ric: Yeah. It's hard to tell you no, but when we went to the birth center, I did feel significantly better about having a birth there. Meagan: Yeah. What had you heard about or had you heard anything about VBAC just in general? Ric: Nothing. Meagan: So you didn't really hear a ton. Ric: Other than what I heard from you. Meagan: So you didn't hear anything scary. Ric: No. Meagan: Okay, because a lot of dads out there do hear when they say, “Oh, my wife wants to VBAC,” people are like, “Oh my gosh. It's so scary.” I think that can be really hard especially if their partner is saying, “Hey. I want to birth out of hospital.”15:05 Laboring loudlyMeagan: Okay, so my water broke with Webb at 3:00 AM or something like that. Yeah, what do you remember about that? My water broke in the middle of the night. I don't even think I told you until I woke up. 
Do you remember anything about that?Ric: With Web, that was where you labored forever, right?Meagan: Yeah, 42 hours. Ric: I don't remember that first morning. I remember the next night. Meagan: Yeah. Ric: Didn't Hillary– Hillary is her cousin, everyone. Hillary showed up at 6:00 in the morning and you guys went out and walked around the neighborhood. Meagan: Yeah, so the night–Ric: The first night?Meagan: No, that was the second morning, yes. My water broke and again, I had PROM so I was so frustrated. I was 40 weeks and 3 days or 4 days. I had him at 40 weeks and 5 days. We had a visit with Danielle and my water had broken. I was sort of contracting a little bit here and there. I asked if you would come up to Park City with me. We went up to Park City and I went in and I did my regular visit and then she said, “You're going to Christine.” Christine, at the time, was my chiropractor so we went to the chiropractor. You got me a Jamba and we drove back down the mountain and came home. My body just really wasn't going into labor. It was taking its time so I went and I took a nap which is really hard to do when you are in labor because your mind is so excited and you just want to have a baby, but I needed to nap so I went in and I napped. It's weird. I can even picture exactly how our room was set up that day. I took a snooze and woke up and I was sort of starting to contract. I actually went out into the driveway and threw a tantrum. Do you remember me throwing a tantrum in the driveway?Ric: No. Was I working? Was I at home?Meagan: You were at home. I threw a tantrum that my water broke. I was triggered. I was like, “This is going to be the same. I'm going to have another C-section.” I was just so upset. I remember our next-door neighbor had this big pine tree and they were watching me throw this insane tantrum in our driveway. But yeah, so then that night, that's when you said you started remembering. My cousin came over for a little bit and actually, my doula came over and was doing some rebozo work and some things, but then they left and I really wanted to labor in my son's room, in our baby's room. Ric: Yeah, but wasn't Hillary there at that time? Meagan: She was for a little bit, uh-huh. You ended up going to sleep because you were super tired and again, labor wasn't super happening. I had Hillary there. We were just hanging out. That's when you came in with a pillow. Ric: Guys, so I mean, it's not a big house but we've got enough space where you can spread out so you don't have to wake everybody up with your– can I say moaning? Meagan: I was moaning. I was moaning to cope through. At that point, I was contracting. Ric: Yeah, so there were three bedrooms right next to each other, but we had a whole family room on the other side of the house and she could have done that and not woken everybody up, but instead– Meagan: I just woke you up. Ric: You were so loud though. You were so loud and can I make the noise? Can I pretend? Meagan: Oh my gosh, sure. But you are going to be dramatizing it. Ric: No. No. You exaggerate pain so much. Meagan: I don't think so. Ric: You think you are great at handling it but–Meagan: I am. Ric: You obviously are enough, but the way you are great at it is by being really loud. Meagan: Posterior baby, everybody just to let you know. Ric: I don't know what that means. But you were contracting every 5 minutes or so– Meagan: Yeah, every 5-8. Oh my gosh. 
Ric: That's exactly how it was and it was loud and you were in the room right next door to our two little girls and right across the hall from me so I was super frustrated because I was exhausted and I couldn't sleep and of all of the places you decided to labor, it was right next to everyone so I came in with a pillow and threw it in your face and said, “Muffle yourself.” Meagan: Oh my gosh. This was not the brightest moment. Ric: This is why you hire a doula because sometimes dads just don't get it. Meagan: Just don't get it. And you were tired. It was really late. Ric: You don't need to excuse me. I was being a complete jerk. Meagan: But this is why I love that it is from your perspective because in my perspective, I was not that loud. I was moaning for sure. I was coping. Oh my gosh. I had so much back labor, but yeah. It was so funny. 20:23 Relying on the doulasMeagan: You throw the pillow at my face. You walk out and you leave and Hillary, my cousin, was like, “Oh no he didn't.” She was laughing. So we continued. We definitely were just quieter. I don't know. Ric: No, you didn't leave the room. Meagan: No, we didn't. Ric: You were so stubborn. You were so stubborn. You probably were louder after that because you were so mad. Meagan: When you find a space where you want to labor and are coping really well, you stay. Then the next morning came around and one of my doulas was up in the canyon so she was not even getting a ton of messages and didn't have service. She was coming down and obviously the texts were blowing up so she started texting me and said, “Why don't we call the midwife and see? Maybe we should plan on heading there.” Like Ric said, my cousin and I decided to go walk. It was 6:00 in the morning and my cousin and I decided to go walk around the block. Man, my labor totally picked up after walking. We were doing curb walks. You go up and down the curbs. We were just walking and it was such a beautiful morning, absolutely beautiful. The birds were chirping. It was July 1st. It was such a great time of year. We actually had gone to the birth center the night before to go get checked. I don't remember if you remember that and they placed a Foley balloon which is a catheter that they can fill up with saline that pushes pressure on the cervix to try and help dilate so I think it was 1 centimeter or something like that. But it popped on the way, so nothing really happened. The next morning, we went in. It was 9:00 AM and we met everybody there. My cousin had left at this point. Maybe she had stayed for a little bit actually, and then my doulas were there so like Ric said, there were just so many people there. Do you remember arriving and anything about that?Ric: No, I don't actually. The part that I do remember is hanging out outside of the birth center with Robin who is my favorite and just watching her. She just had her hands on your belly and was just calming you down. Meagan: Yeah. Yeah. I'm going to rewind a little bit. We get to the birth center. She does. She did do a cervical exam and she said, “All right. We're going to stay. Let's go upstairs.” So we go upstairs. At that point, she didn't tell me what I was dilated to but I knew I was dilated enough to stay. For me, dilation was a big mental block because I had never made it past 3 before. I had never made it past 70% effaced either. I was told on my op reports. 
I don't know if you remember that day that I got the op reports and I was just crying and so upset, but I was told on those op reports that I was failure to progress and that my pelvis was too small. I was just worried about dilating but at the same time, it gave me some oomph because she said, “Let's go. Let's go upstairs.” So we went upstairs. I later learned that I was 4 centimeters which was huge and yeah. My baby just really was posterior and really having a hard time turning. We did the stairs. We walked up and down the stairs and like Ric said, we went outside and we went underneath this beautiful tree. I sat on a peanut ball or I sat on a ball and my one doula was behind me holding my belly. You were there and then I had another doula keeping me hydrated. It was just a beautiful time. It was a beautiful time. I really liked it. Yeah, then we went in and I feel like that's from the point we went in, it started getting a little bit more serious but you hadn't eaten. It was like, Okay if we are going to take a turn, we need to get Ric food because we are going to have a baby soon. Do you remember that you left for a little while? Do you remember leaving? Ric: I don't. No, I do remember leaving because that's when I came back and everybody had shown up. Everybody had shown up. Meagan: Everybody was there, everybody. Yeah, so you left which was nice that you were able to leave and decompress and maybe reset. Did it feel good to be able to leave? Did you feel nervous leaving?Ric: No, again, the benefit of having Robin there. Robin was kind of the main doula for me. She was always the one who would talk to me and make sure that I was doing okay which I was. Meagan: Which is good to know because I think that hours and hours and hours into labor, you could have easily been freaking out. Ric: Yeah, I don't know why. It was just calming. Meagan: It felt calming. Ric: It just seemed we had a bunch of hands on deck that could have handled any situation that presented itself. So yeah, I remember coming back. Did you move to the room with the bed? Meagan: Mhmm. I had. I was getting counterpressure. Ric: I walked in and there was Courtney, Robin, Hillary, Angie, Danielle– there were five. Yeah. Meagan: You said Courtney, yeah. Ric: There were five women there. Meagan: Surrounding. Ric: I walked in and there was such a relief. I didn't have to do a thing. I was like, I can just sit. Because I think I brought my food. I just sat and ate and watched as you were getting pampered. You were getting attended to by these amazing women. Meagan: Such a princess. Really, there was a point where all of them like you said, all hands were on deck. They were all giving me counterpressure. They were all doing something. After you ate, do you remember when I was like, “I need Ric”?Ric: Yeah, for some reason I've got magic fists. Meagan: You have strength. Ric: I basically punched my wife in the lower back over and over and over again. Just as hard and as much pressure as possible. For some reason, it worked for her. Those women are way stronger than a man. Meagan: They are so incredible. Ric: Yeah, but I remember we would go between there and the bathroom that had the bathtub. I remember for a second we filled up the bathtub. You hung out in the bathtub for a while. Meagan: Yeah. Ric: And just kind of sat there. You obviously kept working yourself up because the progress wasn't quick enough. Baby wasn't coming fast enough. You were obviously uncomfortable. 
Meagan: Yeah, it had been at least 35 hours at this point of being in that tub. Ric: Yeah, so you just kept trying to find the spot where you felt would trigger things for the labor and get the labor going. Meagan: Yeah, I was really trying to get that baby to rotate. I was trying to move. Every five contractions, I would re-position myself in that tub. Eventually, I got out. Ric: Yeah, we went back into the bedroom and that's when Robin pulled me aside– or maybe it was Danielle– I think it was Robin who pulled me aside and she was like, “Hey, you were very much in your own head and starting to doubt yourself.” Meagan: I was, yeah. 28:33 Navigating doubt and transition on the toiletRic: Robin said, “Hey, I think we need to leave.” Meagan: We might need to leave, yeah. Ric: No, no, no, no, no. Meagan: Oh, I don't know. I shouldn't correct you. Ric: She was saying that the girls needed to leave like all of the women needed to leave and it just needed to be me and you. So we hung out for a little bit longer. We went back into the bathroom. Do you remember fainting on the toilet? Meagan: That was after the birth, but yes. Ric: That was after birth. Meagan: So it was just you and I. What happened was you all went out and Danielle and I were in the bathroom and she did an NST on me. She was just checking on the baby to make sure he was doing okay and  he was doing fantastic. Ric: What's an NST? Meagan: A non-stress test. They did a non-stress test on him and he was doing great. Everything was great. We weren't having issues. I didn't have any fever because again, it had been many hours since my water had broken and I'm assuming that's when you were being talked to and then I remember Danielle taking the machine out, going out and you coming in. It was just you and me. I was on the toilet. I was facing backward– the dilation station– and I was really hot. That position is a really good one though. It really opens the hips. It just helps. So I was there and I had a backpack– or not a backpack. I had a pillow. Ric: You had everything. Meagan: Yeah, I had a pillow and then you were keeping me cool with rags and stuff. There were some pictures of you even touching me and just your touch was so amazing and did so much for me. I remember just absolutely loving it. I think that's even more of why I was like, “I need Ric,” for counterpressure. Yes, your counterpressure was incredible, but I just needed your touch too. Anyway, but yeah, we were in the bathroom for a bit. It felt like a little bit. Ric: Yeah, and you really started doubting yourself. Meagan: I really was getting down. She had just done an NST and she said the NST was great, but I was thinking, Whatever. They're going to transfer me. I'm going to have a C-section. Ric: The one lady had come in and said that you should transfer so a midwife who wasn't our midwife who was at the center–Meagan: With another mom.Ric: I think she was frustrated that we were taking so long. Meagan: She was. Ric: But she had mentioned the hospital word and that really set you off. Meagan: That really impacted me. Ric: You immediately started feeling doubt in yourself. Up until this point, I don't think you had. Meagan: In my head, I was like, Oh my gosh. This is taking forever and it's getting really strong but we're not getting anywhere. I was thinking that, but when she said the word– I remember she wasn't very great. Her bedside manner was not very great. She checked me and I was 6 centimeters which was great, but I had been just lagging. 
She was like, “I think it's time to go to the hospital,” or something like that. I think that's when she told the midwife and the midwife came in and did the NST. But we were in there and one of our other doulas came in, Angie. I turned to her and said, “Are they going to transfer me?” She just said honestly which I really appreciated, and I really encourage doulas if you are listening, to be honest with your clients. Honesty is so important. She just said, “They are looking at things. It's one of the things they may consider.” I was like, “Okay. We've got to do something here.” Ric: No, that's not what you did. Meagan: In my head, that was what I was thinking. Ric: You got really down on yourself. Meagan: I did. Ric: This is when I turned into super-Meagan. I was like, “No. You can do this. You've got this. You worked so hard. You've done everything in your power to have the baby here. Let's have the baby here. You keep doing what you are doing and it will happen.” That was the one time when I think I was the one who was pushing more for having the VBAC than you were and was it 5 minutes later when Danielle came in and said, “All right, we're good.” Meagan: Well, yeah. She came in. She had me turn around. Ric: You had been checked. Sorry, let's go back a little bit. Right before it was just you and I in the bathroom, you had been checked and you were like an 8.5 or a 9. Meagan: Oh, yes. I was a 6 when the other midwife checked. She had checked me right before. Ric: Probably a half hour past. Meagan: Yeah. Ric: Then right before we were left alone in the bathroom, Danielle came in and checked you and you were like a 9. I don't know what everything else means, but I don't think that Webb was in a great position though. Meagan: He wasn't. I don't know if you remember, but first of all, I was already having back labor. Now my baby was really low. I was dilated pretty far and I wanted to push. I don't know if you remember. I was trying to push, but they were like, “You're not dilated.”Ric: You thought you were going to go to the bathroom. Meagan: Yeah, so I was living on the toilet then she came in and I think that they had been listening. It really wasn't that long. Yeah. She checked me and what she did was she kind of advanced my cervix. I was 9 centimeters. My baby was posterior and she stretched my cervix over his head. Ric: Yeah. Meagan: She manually brought me to a 10. Ric: She assisted. 34:25 Pushing Webb out in three pushesMeagan: As soon as she did that, it was like, Oh my gosh. This baby is coming. Everybody flooded. Ric: She brought in the stool. Meagan: Yeah, she brought in the stool and everybody flooded in the bathroom. It was insane. There were so many people in this small bathroom. Yeah. I sat on the stool and you were right behind me. I think I put at least one of my feet on someone's shoulder. Ric: Courtney. Meagan: Maybe. Courtney was taking pictures. Ric: I don't know. Meagan: Yeah. I don't know either but yeah. I put my foot on someone and I started pushing. She was like, “Let's have a baby.” I still in that moment was like, No. It's not going to happen. This isn't happening. How am I pushing a baby out now? It was so– I don't know if it's euphoric but it was really weird. Ric: It was exciting. Meagan: It was super exciting but I didn't believe it. I didn't believe that what was happening was happening. Ric: I did. 
I remember they asked me if I wanted to catch the baby and then they asked if you wanted to catch the baby which because of where you were at on the stool, you weren't able to. Meagan: Yeah. Yeah. I pushed and within one push, he made really great progress. He had rotated. He had rotated because I did not give birth to him posterior. He had rotated and yeah. It was one push with major movement. The second push had major movement then I just remember I was sitting there. It was really quiet and there was another mom in the next room also pushing. She was a VBAC and I was like, I'm going to have this baby before her. I made it a competition a little bit. It seemed like we were kind of on and off. When I was pushing, she was not. When she was pushing, I wasn't. With the next contraction, Danielle looked at me. I remember her eyes and I was like, It's going to happen. I felt it. I felt a lot of pressure, a lot of pressure. I pushed him out, pulled him up, put him on my chest, and I don't know. Were you crying? Ric: No. You were. Meagan: I was bawling. Everybody else I feel like was bawling, just all of the women in the room who had just gone through this whole experience with me, not just the labor but the journey of wanting the VBAC and then also as a doula watching me want this VBAC. So anyway, we were all crying and then you'll have to say. I don't know what happened. 37:08 Passing out after birthRic: Yeah, you passed out. I was behind you with my arms around you and the baby. You had been crying and with the emotion, with all of the hard work, you suddenly just went limp. So I had just told one of the doulas, “Hey, can one of you guys grab the kid because Meagan just passed out and we need to wake her up?” They grabbed Webb and–Meagan: Gave him to you, right? Ric: No. Meagan: Oh, really? Ric: No, I hung out with you while they had the baby. Meagan: Oh, I didn't know that. Ric: You came to and did they start? I remember they cut the umbilical cord. Meagan: Yeah, because they took the baby. They cut the umbilical cord. I saw pictures of you holding the baby and me on the ground. Ric: I was just focused on you because you had passed out. Meagan: I just assumed they handed the baby to you. Ric: Eventually. Meagan: Okay, yeah. So yeah. I don't know. I woke up pretty quickly. It was pretty quick it seemed like. Ric: Yeah. Meagan: But yeah, then I was just on the floor and I was just beaming and laughing and just so stinking happy. And then we went into the bedroom and I nursed for a while and was doing really, really well. They were like, “Okay. Let's get you to the bathroom and showered and then you can go home.” What happened?Ric: You passed out again. Meagan: I passed out again. Ric: Yep. You woke up on the floor. You had just sat up on the side of the bed and you passed out. This is when I did have Webb in my hands at this time and you passed out. Luckily, another doula had come so we had a fresh one, Rachel. You woke up laughing. You were like, “Oh, I'm on the floor again.” Meagan: I was like, “Why does this keep happening?” Ric: But you really wanted to go to the bathroom so we went. You and I just went to the bathroom. You sat down on the potty and you passed out again. Meagan: Yeah, and Robin came in. I remember waking up and you and Robin were right there. Ric: Yeah. We had to pick you up so we hung out in the birth center a lot longer than we would have. Meagan: Than normal.Ric: I think you ended up going to sleep. Meagan: I did. 
Ric: Because I was next to you and then Webb was between us. I was super worried about rolling over on him or you rolling over on him, but I think we hung out there for a couple of hours. They checked on him. They checked on you and then I just remember how amazing it was to go home that night. Meagan: Yeah. Ric: I mean, it was later. I think it was 11:00 at night. Meagan: He was born at 5:30 and it was like 11:00 that we were finally stable enough to go home. Ric: It was so odd to be told, “Hey, look. You can go home now.” He didn't have to wait in the nursery. He didn't have to do any of that. We were just able to go home. We came home. We had the crib in our room. We put him in the crib and we slept great that night. Meagan: Yeah, we did. Ric: He did too. He did awesome. I think he woke up once or twice to feed, but he was so calm.
40:37 It takes a village
Ric: From my perspective, seeing you accomplish what you wanted and for those of you who are unaware, I told Meagan unequivocally that this was our last child, so this was her last opportunity. She wouldn't have had another opportunity after this. So it was really fun to see you accomplish what you had wanted to accomplish. It truly did. It took a village. You had so much help. We had so much help. I had no idea what I was doing and it was awesome because I had no idea what I was doing and everybody else who was there knew exactly what they were doing and they did such a good job. Meagan: Yeah, so obviously you would advocate for a doula. Ric: Oh, 100%. When people come up to me and ask what a doula is, I tell them it's what the perfect partner would be and how they would act and how they would treat their partner during birth. Meagan: Mhmm. Ric: So yeah, they were fantastic. Again, being able to leave and come back knowing that you were 100% taken care of– obviously, I had my spot there. I don't feel like I was minimized or my role was minimized at all. There were a bunch of times where you would have me step in when I needed to get in there and help, but I was able to focus on being there for you and they were able to show me, “Hey, look Ric. Here's where she wants you to push.” I remember that. You had showed and I think Robin or Angie said, “Hey, this is the spot where you need to push.” I remember when we were out under the tree, I was able to look at you because Robin was holding you from behind and that was a big deal because I remember Robin was obviously there and it was just serene having her with us, but it very much felt like a moment between just you and I because we were able to just sit there and be with each other and talk to each other. Meagan: Yeah. Yeah. It just helped the connection and the bond and everything. I just love doulas so much. I love you and I am so grateful that we were able to have this journey together.
42:45 Ric's advice to other dads
Meagan: Do you have any advice to a dad who may be in the spot that I put you in? Ric: Yeah, I'm sorry. First and foremost, I apologize to you because that's rough. It's a rough spot to be in. No, honestly, trust your partner. Trust the mom. They know things better than you do and again, for us, it's really easy because you get your way 99% of the time in our marriage but seeing how things ended and how everything happened, it just showed me that yeah, I can trust her and I know that she's listening to her body and she'll know what needs to happen. Meagan: I love that you point out that I was listening to my body. 
I think that can be a hard thing for any dad or partner to understand because there is this weird, innate thing inside of us. It just felt so right to birth vaginally after two C-sections and then it also felt right to birth out of the hospital. So thank you for supporting me through all of that and for being there. I can't believe our baby boy is 8 years old today so happy birthday, Webster. We love you so stinking much.
Closing
Would you like to be a guest on the podcast? Tell us about your experience at thevbaclink.com/share. For more information on all things VBAC including online and in-person VBAC classes, The VBAC Link blog, and Meagan's bio, head over to thevbaclink.com. Congratulations on starting your journey of learning and discovery with The VBAC Link.
Support this podcast at — https://redcircle.com/the-vbac-link/donations
Advertising Inquiries: https://redcircle.com/brands

ROI’s Into the Corner Office Podcast: Powerhouse Middle Market CEOs Telling it Real—Unexpected Career Conversations

Jeff Weinstein grew up in the Detroit Metro area with a big extended family.  His earliest work experience started at 13, answering phones and doing clerical work during tax season for the accounting firm where his father was a partner.  He gained his first leadership experience as a founding member, chapter president and regional vice president of B'nai B'rith Youth Organization in Michigan. He has over 30 years of experience in the food and beverage business, beginning with full service restaurant and cafe work while he studied Philosophy at the University of Michigan.  His knowledge of the business comes from working on the job. Jeff spent 12 years with Peet's Coffee & Tea in the San Francisco Bay Area where he played every role they would allow him, from serving coffee, managing a store, running a district, building an operations services department and creating wholesale programs. After time with Dean & Deluca and Starbucks—both great learning opportunities—he joined Jamba Juice where he led operations services for over 800 domestic and international locations.  In 2015, Jamba sold most of their corporate units to franchisees.  Jeff led operations for Vitaligent—Jamba's largest franchisee with shops in Northern California and Washington—and eventually became CEO of the 96-unit restaurant group. In 2022, Vitaligent was successfully acquired by an even larger restaurant group called Sizzling Platter.  After supporting the transition, Jeff joined Wise Sons Jewish Delicatessen as CEO.  Wise Sons operates six delis in San Francisco and Oakland, in addition to a scratch bakery, commissary and wholesale business.   Following the recent college graduation of their daughter, Jeff and his wife of 30 years moved to the Sonoma Valley where they serve multiple non-profit organizations.  

Emerging Markets Enthusiast
[Operator Stories] Ralf Wenzel (JOKR) on how to achieve profitability in grocery delivery models, value chain integration and what founders and artists have in common

Emerging Markets Enthusiast

Play Episode Listen Later May 28, 2024 40:33


On this episode, Pat sits down with the one and only Ralf Wenzel, Founder & CEO of JOKR. We dive into his entrepreneurial journey from growing up in East Germany to his first entrepreneurial stint with Jamba, conquering Asia with Foodpanda and subsequently embarking on LatAm with grocery delivery platform JOKR. You will learn about:
Why founders and artists are more alike than you would think
How to unlock value in quick commerce and grocery delivery
It is all about integrating the value chain and ensuring a superior customer experience
How to conquer a complex market such as Brazil
You can find Ralf on LinkedIn here. Support the Show.

QSR Magazine's Fast Forward
Cinnabon and Carvel Join Forces, with Kristen Hartman

QSR Magazine's Fast Forward

Play Episode Listen Later May 17, 2024 42:33


Kristen Hartman, the president of Specialty Brands at GoTo Foods (formerly Focus Brands)—the parent company of Auntie Anne's, Carvel, Cinnabon, Jamba, McAlister's Deli, Moe's Southwest Grill, and Schlotzsky's—takes us inside the mashup everybody's talking about. Is co-branding making a comeback? Should we call it something else? We'll get into that and a lot more.

Practical AI
Mamba & Jamba

Practical AI

Play Episode Listen Later Apr 24, 2024 41:15


First there was Mamba… now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ‘ol attention layers. This results in a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21's co-founder Yoav.

Changelog Master Feed
Mamba & Jamba (Practical AI #266)

Changelog Master Feed

Play Episode Listen Later Apr 24, 2024 41:15


First there was Mamba… now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ‘ol attention layers. This results in a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21's co-founder Yoav.

The Nonlinear Library
LW - Ophiology (or, how the Mamba architecture works) by Danielle Ensign

The Nonlinear Library

Play Episode Listen Later Apr 9, 2024 20:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ophiology (or, how the Mamba architecture works), published by Danielle Ensign on April 9, 2024 on LessWrong. The following post was made as part of Danielle's MATS work on doing circuit-based mech interp on Mamba, mentored by Adrià Garriga-Alonso. It's the first in a sequence of posts about finding an IOI circuit in Mamba/applying ACDC to Mamba. This introductory post was also made in collaboration with Gonçalo Paulo.
A new challenger arrives!
Why Mamba?
Promising Scaling
Mamba [1] is a type of recurrent neural network based on state-space models, and is being proposed as an alternative architecture to transformers. It is the result of years of capability research [2] [3] [4] and likely not the final iteration of architectures based on state-space models. In its current form, Mamba has been scaled up to 2.8B parameters on The Pile and on Slimpj, showing scaling laws similar to Llama-like architectures.
[Figure: scaling curves from the Mamba paper, comparing Mamba to Llama (Transformer++), previous state space models (S3++), convolutions (Hyena), and a transformer-inspired RNN (RWKV).]
More recently, AI21 Labs [5] trained a 52B parameter MoE Mamba-Transformer hybrid called Jamba. At inference, this model has 12B active parameters and has benchmark scores comparable to Llama-2 70B and Mixtral.
[Figure: Jamba benchmark scores, from the Jamba paper [5:1].]
Efficient Inference
One advantage of RNNs, and of Mamba in particular, is that the memory required to handle the context is constant: you only need to store the past state of the SSM and of the convolution layers, while memory grows linearly with context length for transformers. The same happens with generation time, where predicting each token scales as O(1) instead of O(context length).
[Figure: Jamba throughput (tokens/second), from the Jamba paper [5:2].]
What are State-space models?
The inspiration for Mamba (and similar models) is an established technique used in control theory called state space models (SSM). SSMs are normally used to represent linear systems that have p inputs, q outputs and n state variables. To keep the notation concise, we will consider an E-dimensional input x(t) ∈ R^E, an E-dimensional output y(t) ∈ R^E and an N-dimensional latent space h ∈ R^N. In the following, we write the dimensions of new variables explicitly (a matrix with X rows and Y columns lives in R^{X×Y}). In particular, in Mamba 2.8b, E = 5120 and N = 16. Specifically, we have the following:
h'(t) = A h(t) + B x(t)
y(t) = C h(t) + D x(t)
where A ∈ R^{N×N}, B ∈ R^{N×E}, C ∈ R^{E×N} and D ∈ R^{E×E}. This is an ordinary differential equation (ODE), where h'(t) is the derivative of h(t) with respect to time, t. This ODE can be solved in various ways, which will be described below. In state space models, A is called the state matrix, B is called the input matrix, C is called the output matrix, and D is called the feedthrough matrix.
Solving the ODE
We can write the ODE from above as a recurrence, using discrete timesteps:
h_t = Ā h_{t-1} + B̄ x_t
y_t = C h_t + D x_t
where Ā and B̄ are our discretization matrices. Different ways of integrating the original ODE will give different Ā and B̄, but will still preserve this overall form. In the above, t corresponds to discrete time. In language modeling, t refers to the token position.
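To make the discrete recurrence above concrete, here is a minimal NumPy sketch (illustrative only, not code from the post; the function name and shapes simply follow the notation above) of scanning an already-discretized SSM over a token sequence:

import numpy as np

def ssm_scan(A_bar, B_bar, C, D, xs):
    # A_bar: (N, N) discretized state matrix
    # B_bar: (N, E) discretized input matrix
    # C:     (E, N) output matrix
    # D:     (E, E) feedthrough matrix
    # xs:    (T, E) one E-dimensional input vector per token position
    N = A_bar.shape[0]
    h = np.zeros(N)                   # initial hidden state
    ys = []
    for x in xs:                      # t = 1, ..., T
        h = A_bar @ h + B_bar @ x     # h_t = Ā h_{t-1} + B̄ x_t
        ys.append(C @ h + D @ x)      # y_t = C h_t + D x_t
    return np.stack(ys)               # (T, E) outputs

Note that only the current state h is kept around, which is exactly why memory stays constant in the context length, as discussed under Efficient Inference above.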
Euler method
The simplest way to numerically integrate an ODE is by using the Euler method, which consists in approximating the derivative by considering the ratio between a small variation in h and a small variation in time, h' = dh/dt ≈ Δh/Δt. This allows us to write:
(h_{t+1} - h_t) / Δt = A h_t + B x_t
h_{t+1} = Δt (A h_t + B x_t) + h_t
where the index t of h_t represents the discretized time. This is the same thing that is done when considering a character's position and velocity in a video game, for instance. If a character has a velocity v and a position x_0, to find the position after Δt time we can do x_1 = Δt v + x_0. In general: x_t = Δt v_t + x_{t-1} x_t = (...
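As a worked sketch of this Euler discretization (again illustrative, assuming the continuous A and B defined earlier), the discretized matrices fall out as Ā = I + Δt·A and B̄ = Δt·B:

import numpy as np

def euler_discretize(A, B, dt):
    # h_{t+1} = h_t + dt * (A h_t + B x_t)
    #         = (I + dt*A) h_t + (dt*B) x_t
    # so A_bar = I + dt*A and B_bar = dt*B
    N = A.shape[0]
    return np.eye(N) + dt * A, dt * B

# The video-game analogy in numbers: velocity v = 2.0, start x0 = 5.0, step dt = 0.1
v, x0, dt = 2.0, 5.0, 0.1
x1 = dt * v + x0   # 5.2, i.e. x_1 = Δt·v + x_0

The returned Ā and B̄ can be fed directly into the ssm_scan sketch above.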

Papers Read on AI
Jamba: A Hybrid Transformer-Mamba Language Model

Papers Read on AI

Play Episode Listen Later Apr 6, 2024 25:58


We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU. Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license. 2024: Opher Lieber, Barak Lenz, Hofit Bata, Gal Cohen, Jhonathan Osin, Itay Dalmedigos, Erez Safahi, S. Meirom, Yonatan Belinkov, Shai Shalev-Shwartz, Omri Abend, Raz Alon, Tomer Asida, Amir Bergman, Roman Glozman, Michael Gokhman, Avashalom Manevich, Nir Ratner, N. Rozen, Erez Shwartz, Mor Zusman, Y. Shoham https://arxiv.org/pdf/2403.19887v1.pdf
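As a rough sketch of the interleaving the abstract describes (purely illustrative: the layer plan below and its default ratios are placeholders, not AI21's released configuration), a Jamba-style stack can be thought of as mostly Mamba layers with an occasional attention layer, and MoE swapped in for some of the MLPs:

def jamba_like_layer_plan(n_layers=32, attention_every=8, moe_every=2):
    # Returns a list of (mixer, mlp) descriptors:
    #   mixer: "attention" or "mamba"  -- an attention layer every `attention_every` layers
    #   mlp:   "moe" or "dense"        -- MoE in every `moe_every`-th layer to add capacity
    #                                     while keeping active parameters manageable
    plan = []
    for i in range(n_layers):
        mixer = "attention" if (i + 1) % attention_every == 0 else "mamba"
        mlp = "moe" if (i + 1) % moe_every == 0 else "dense"
        plan.append((mixer, mlp))
    return plan

# e.g. the first 8 layers of the default plan alternate dense/MoE MLPs,
# with the 8th layer using attention instead of Mamba:
print(jamba_like_layer_plan()[:8])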

AWR Malagasy / Malgache
1- No temptation has overtaken you but such as you can bear 2- An antidote to sin 3- Gervais (Jamba) - Encouragement 4- The war in heaven

AWR Malagasy / Malgache

Play Episode Listen Later Mar 31, 2024 59:00


1- No temptation has overtaken you but such as you can bear 2- An antidote to sin 3- Gervais (Jamba) - Encouragement 4- The war in heaven

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Weekly Rundown (March 23 to March 31 2024) Major AI announcements from OpenAI, Zoom, Microsoft, Anthropic, DeepMind and more.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Mar 31, 2024 9:39


https://youtu.be/TXg3J618JvI
Microsoft and OpenAI to build $100 billion AI supercomputer 'Stargate'
OpenAI unveils voice-cloning tool
Zoom launches all-in-one modern AI collab platform
Stability AI launches instruction-tuned LLM
Stability AI CEO resigns to focus on decentralized AI
Microsoft study reveals the 11 by 11 tipping point for AI adoption
A16z spotlights the rise of generative AI in enterprises
Gaussian Frosting revolutionizes surface reconstruction in 3D modeling
DBRX becomes world's most powerful open-source LLM
Claude 3 Opus crowned the top user-rated chatbot, beating GPT-4
Empathy meets AI: Hume AI's EVI redefines voice interaction
AI21 Labs' Jamba triples AI throughput
Google DeepMind's AI fact-checker outperforms humans
X's Grok gets a major upgrade
Subscribe for weekly updates and deep dives into artificial intelligence innovations.
✅ Don't forget to Like, Comment, and Share this video to support our content.

AWR Malagasy / Malgache
1- Believe, hope, and work 2- Zucchini jam with cinnamon and lemon 3- Stones cast from heaven 4- Tmg Ferdinand (2) (Jamba), Toliary 5- Lift up your hands toward the sanctuary

AWR Malagasy / Malgache

Play Episode Listen Later Mar 17, 2024 59:00


1- Believe, hope, and work 2- Zucchini jam with cinnamon and lemon 3- Stones cast from heaven 4- Tmg Ferdinand (2) (Jamba), Toliary 5- Lift up your hands toward the sanctuary

First Bite
What's the future of the new GoTo Foods?

First Bite

Play Episode Listen Later Feb 26, 2024 14:38


After the Atlanta-based restaurant company formerly known as Focus Brands announced its rebranding to GoTo Foods on Tuesday during the company's annual conference in Las Vegas, CEO Jim Holthouser offered some details on exactly what to expect in the days and years ahead. Although daily operations won't change much for franchisees and employees as the company moves to a more platform-based business model, more brand collaboration will be on the table. The goal is to eventually break down most silos between the seven restaurant brands — including Jamba, Cinnabon, Auntie Anne's and Moe's Southwest Grill — and commonly share resources, personnel and technology. For example, the company is in the midst of moving all seven brands to a single POS platform, Qu, to promote synchronicity among the brands.

AWR Malagasy / Malgache
1- Jesus has already carried what you carry; the victory is yours 2- Papaya with peanuts 3- Because of your word 4- Tmg Ferdinand (Jamba), part two 5- A self-sacrificing shepherd

AWR Malagasy / Malgache

Play Episode Listen Later Feb 25, 2024 59:00


1- Jesus has already carried what you carry; the victory is yours 2- Papaya with peanuts 3- Because of your word 4- Tmg Ferdinand (Jamba), part two 5- A self-sacrificing shepherd

Marketing Happy Hour
BONUS! Where are they now? | Bari Tippett of Auntie Anne's and Jamba

Marketing Happy Hour

Play Episode Listen Later Dec 26, 2023 15:04


Welcome back to our "Where are they now?" bonus series where we'll be catching up with past Marketing Happy Hour podcast guests to see what they're currently up to and the projects they've been working on since we last spoke! Our next "Where are they now?" guest is Bari Tippett of Auntie Anne's and Jamba. Listen in to Bari's first MHH appearance earlier this year: How to be a Confident Marketer in 2023, then join us as we catch up with her this week and hear her 2024 predictions for social media! ____ Say hi! DM us on Instagram and let us know which bonus episodes you're excited for - we can't wait to hear from you! Please also consider rating the show and leaving a review, as that helps us tremendously as we move forward in this Marketing Happy Hour journey and create more content for all of you. Get the latest from MHH, straight to your inbox: Join our email list! Connect with Bari: LinkedIn | Instagram | Twitter Follow Auntie Anne's: LinkedIn | Instagram | Twitter | TikTok Follow Jamba: LinkedIn | Instagram | Twitter | TikTok Connect with Co-Host Erica: LinkedIn | Instagram Connect with Co-Host Cassie: LinkedIn | Instagram Follow MHH on Social: Instagram | LinkedIn | Twitter | TikTok --- Support this podcast: https://podcasters.spotify.com/pod/show/marketinghappyhour/support

The Natural Nurse and Dr. Z
The Natural Nurse and Dr. Z: Medicinal herbs and spices of the Caribbean

The Natural Nurse and Dr. Z

Play Episode Listen Later Nov 28, 2023 53:52


On today's show, co-hosts Ellen Kamhi, PhD, RN, The Natural Nurse, and Dr. Eugene Zampieron, Dr. Z, discuss medicinal herbs and spices of the Caribbean. So many different spices and essential oils have their origin in the Caribbean, and medicinal plants have made the journey, via discovery by ethnobotanists and ethnopharmacologists, into the pantheon of medication used in Western medicine. Dr. Eugene Zampieron has been combing the hills, valleys and rainforests of the Caribbean since 1981 in search of new medicines. His work with a Jamaican bush doctor and shaman, Jamba of the Maroons, was recorded in a new documentary: Eco Tours for Cures, a Retrospective Journey. https://www.youtube.com/watch?v=_G6MjsoabWs&t=1514s   Contacts: www.naturalnurse.com www.drznaturally.com

Medien-KuH
Episode 443: Everything will be fine? The end of "SchleFaZ" and a VIVA documentary

Medien-KuH

Play Episode Listen Later Sep 26, 2023 101:36


Will Tele 5 only show very, very good films from now on? As has now become known, the channel is ending its cult format "SchleFaZ" this year. Körber and Hammes ponder how and where the show might continue. The two also pay tribute to Nina Ruge, who once hosted the ZDF magazine "Leute heute"; the colorful format is being discontinued after 26 years. And ARD is working on a documentary about the former music channel VIVA, which would be celebrating its 30th birthday at the end of this year... We look back at VIVA faces that are still well known today... and at Jamba's farting monkeys.
TELEVISION
00:05:17 | Tele 5 cancels "SchleFaZ"
00:13:58 | ZDF ends the tabloid magazine "Leute heute"
00:24:06 | Media mogul Rupert Murdoch steps down
00:33:24 | 30 years after its launch: ARD is working on a VIVA documentary
00:48:20 | A jungle king becomes a soap actor
00:50:46 | "Unter uns" gets a new supermarket sponsorship
WEIDENGEFLÜSTER
01:00:26 | Viehdback on episode 442
01:12:34 | Thanks for your support, and a note on affiliate links
FILM
01:15:53 | Box-office charts
01:21:14 | Home cinema
01:32:06 | A deal is on the table: the writers' strike is potentially over
01:32:52 | "Star Wars" news of the week
RATINGS TIP
01:36:34 | Last time: "Late Night Berlin" (Tuesday, September 19, 2023, 11:15 p.m., ProSieben)
01:39:23 | This time: "heute-show" (Friday, September 29, 2023, 10:30 p.m., ZDF)
All spoken contributions in this episode are personal opinions (in part satirical) or commentary.

Bald Faced Truth with John Canzano
BFT Show: Anthony Gould, Dan Lanning

Bald Faced Truth with John Canzano

Play Episode Listen Later Sep 22, 2023 101:21


John Canzano reacts to USC's Lincoln Riley reversing course on their "suspension" of a beat reporter. Oregon State receiver Anthony Gould joins for his weekly appearance presented by Jamba as the No. 14 Oregon State Beavers visit the No. 21 Washington State Cougars on Saturday, and Oregon Ducks head coach Dan Lanning calls in as the 10th-ranked Ducks host No. 19 Colorado in Eugene. Subscribe to this podcast for more great content.

Bald Faced Truth with John Canzano
BFT Interview: Anthony Gould

Bald Faced Truth with John Canzano

Play Episode Listen Later Sep 21, 2023 20:28


John Canzano talks to Anthony Gould, Oregon State wide receiver, for his weekly conversation on the Bald Faced Truth presented by Jamba. Gould talks about his 75-yard touchdown against San Diego State, the Beavers learning lessons in the victory, and the upcoming showdown between No. 14 Oregon State and No. 21 Washington State on Saturday at 4:00 p.m. on FOX. Subscribe to this podcast for more great content.

RAD Radio
09.11.23 RAD 01 Jamba Whore Writes Back, Broadway Breakdown & IFSF - Wearing Two Different Teams Gear

RAD Radio

Play Episode Listen Later Sep 11, 2023 21:57


Jamba Whore Writes Back, Broadway Breakdown & IFSF - Wearing Two Different Teams Gear. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

DW em Português para África | Deutsche Welle
September 1, 2023 - Jornal da Noite

DW em Português para África | Deutsche Welle

Play Episode Listen Later Sep 1, 2023 20:00


The Mozambican opposition accuses FRELIMO of collecting voters' cards, but the ruling party refutes the accusations. In Guinea-Bissau, Umaro Sissico Embaló appoints a chief of staff for the presidency, and an analyst speaks of violations of the law. UNITA's leader criticizes the search for human remains in Jamba.

OMR Podcast
Special "OMR Rabbit Hole: Die Samwer-Story"

OMR Podcast

Play Episode Listen Later Jun 26, 2023 39:19


With "OMR Rabbit Hole: Die Samwer-Story" we are launching our first storytelling format under the OMR brand. The podcast tells the story of the Samwer brothers, who are behind companies such as Jamba!, StudiVZ, Zalando and Hellofresh and who have likely built up a fortune worth billions through investments in digital companies. Here you can listen to the first episode exclusively.

South African Border Wars
Episode 106 – Operation Packer/Tumpo 3 and Castro's obsession

South African Border Wars

Play Episode Listen Later May 23, 2023 20:48


More than two decades of conflict in Ovamboland and southern Angola had worn down South African military domination - tactical superiority was no longer certain. The initial approach, which had been innovative and inspirational, fast, seat-of-the-pants and smart, had slumped into attritional, raging-bull, blow-for-blow brutality. It was March 1988, time for one last push by the SADF against their FAPLA enemy and their Cuban and Russian allies. As you heard last episode, Commandant Gerhard Louw and most of the experienced officers of the Border War thought the overall plan to attack the Tumpo Triangle for the third time was a bad idea. Jan Breytenbach called it truly misguided. Cuban president Fidel Castro had made it very clear that he wanted the east bank of the Cuito River held at all costs. As long as the Cubans, Angolans and Russians held the bridgehead, the SADF could not attack the town directly. Not that this was the South Africans' aim - at least not their official aim. The plan was merely to seize the east bank, cross over to the west side, and blow up the bridge, which would put an end to FAPLA's assaults on the UNITA-held towns of Mavinga and Jamba. However, the Angolans thought that Cuito Cuanavale was the main target, and so did many South African troops fighting against FAPLA. I mean, there was the strategic town right in front of them - do you seriously think that had the Angolan army broken and run, the SADF would have stopped at the Cuito River? So with that small diversion as a way of introduction, we rejoin Commandant Gerhard Louw and his ou-manne. It's four pm on Tuesday, 22 March 1988, and the attackers were heading towards FAPLA's well-defended positions on the east bank of the Cuito River. 32 Battalion and Groot Karoo Regiment troops were joined by UNITA's 4th Regular Battalion on the western slope of the Chambinga high ground, sweeping the area and trying to blunt any FAPLA reconnaissance moving east of the Amhara Lipanda flatlands. UNITA spent a lot of time lifting mines, but it wasn't enough; more than 15,000 landmines awaited the SADF, and this was going to lead to a lot of trouble for the Olifant tanks. The Cubans had laid them in layers, doubling up the fields of death with anti-tank mines alongside 130mm shells - when these detonated, the effect would be biblical.

Neil Rogers Show
Neil Rogers Show (February 6, 2003)

Neil Rogers Show

Play Episode Listen Later May 18, 2023 194:57


Neil was in a good mood. "Jamba" called twice. Talked about: the pending Iraq war, the Leafs beating the Panthers 6-0, yesterday's poll re: Michael Jackson (ongoing), the air in the studio smells moldy, a possible satellite radio show for Neil, Greg has a new haircut, Democrats are wimps, the Colin Powell dog and pony show at the UN, the Shuttle Columbia disaster (5 days earlier), terrorism threat up due to the pending Iraq war, Neil reads Jimmy Carter's statement about the Iraq war, Hank is at a golf tournament, a caller suggests nuking Baghdad, airport security is lax, talk about the new poll (take no calls, some calls, all calls, don't care?), Jesse Ventura's new TV show, yesterday's poll results (Michael Jackson is either a fruit cake or a pedophile), Martha Stewart about to be arrested.

Restaurant Business Magazine
Focus Brands CEO Jim Holthouser on the future of the company's concepts

Restaurant Business Magazine

Play Episode Listen Later Mar 29, 2023 29:16


What happens when you mix ice cream and cinnamon rolls? We're about to find out. This week's episode of the Restaurant Business podcast Deeper Dive features Jim Holthouser, CEO of the Atlanta-based brand operator Focus Brands, which operates Cinnabon, Auntie Anne's, Carvel, McAlister's Deli, Schlotzsky's, Jamba and Moe's. He discusses the status of these brands and the company's plans for them. He talks about the future of development in and out of the mall. And he talks about the potential for McAlister's to become a $1 billion brand. But Holthouser also talks about Swirl, a concept Focus is developing that combines Cinnabon and Carvel in a new type of co-branding arrangement. In addition, I talk about the future of restaurant IPOs, franchisee optimism, and why tipping culture will never change.

Keys To The Shop : Equipping the Coffee Retail Professional
400 : The Values of Speed and Service w/ Jaime Denney of Scooters Coffee

Keys To The Shop : Equipping the Coffee Retail Professional

Play Episode Listen Later Mar 20, 2023 61:05


The coffee marketplace has seen rapid expansion of drive-thrus and concepts focused on quick service. There is no doubt that customers appreciate speed when it comes to their coffee, but better than speed is speed plus service. These are the areas today's guest is focused on delivering and continually enhancing at one of the country's fastest-growing fast coffee concepts, Scooter's Coffee. Jaime Denney joined Scooter's Coffee as the Vice President of Franchise Operations in November of 2022 to focus on initiatives aimed at increasing the speed and effectiveness of drive-thru service. Denney is a highly energetic, passionate, results-oriented operations leader with a proven record of achievement in the quick service restaurant industry. She has experience leading large cross-functional teams, delivering business results, and implementing foodservice solutions. Prior to joining Scooter's Coffee, Denney was Vice President of Brand Operations at Jamba, where she worked cross-functionally with teams to drive franchise operational excellence and deliver quality training and support. She helped successfully migrate Jamba to a new POS system and implemented digital enhancements to set franchisees up for success during the COVID-19 pandemic. Denney, who has 15 years of coffee industry experience, also held leadership roles with Starbucks, Tropical Smoothie Café and Aramark. With 613 locations and no sign of slowing, there is a good chance you have a Scooter's Coffee near you. Today we are going to get an inside look into what makes this company tick, the values that drive its drive-thru empire, details about how Scooter's operates, and what we in the independent cafe community can learn from our corporate counterparts. I hope you all enjoy and learn a lot!
We cover:
Choosing markets for expansion
Training managers
Setting goals for speed
Competing in the local marketplace
Managing costs amid inflation
Keeping things simple to help speed and service
The limits of speed as a value
Service and consistency
Serving franchisees
Wages, benefits, and culture
Networking and tools for refinement
Links: www.scooterscoffee.com
Related episodes:
127 : Passion and Curiosity: A conversation w/ Starbucks Global Sr. Project Manager, Major Cohen
Episode 121 : Working from Your Strengths
161 : Founder Friday Drive-Thru Edition w/ Jasmine Diedrich Wilson of Diedrich Espresso
384 : How to Run a Successful Coffee Cart w/ Sarah Naylor of Daybreak Coffee Cart
163 : Lessons from Crush the Rush w/ Josh Little Field
Interested in leveling up your coffee shop or setting up 1:1 coaching? Click here to schedule a free consulting discovery call with KTTS
Click here to book a formal one-on-one consulting call!
Visit our amazing Sponsors!
www.groundcontrol.coffee
www.pacficfoodservice.com
www.coffeefest.com

Doughboys
Munch Madness: Pressed vs Jamba with Zach Cherry

Doughboys

Play Episode Listen Later Mar 9, 2023 131:34


Zach Cherry (Severance, The Great American Baking Show) joins the 'boys to talk healthy bowls and Hot Ones before tackling the Acai region of Munch Madness 2023: The Tournament of Chompions: BOWL! This episode is sponsored by BetterHelp. Give online therapy a try at betterhelp.com/DOUGHBOYS. Sources for this week's intro: https://www.allaboutpalmtrees.com/acai-palm-tree https://www.goodhousekeeping.com/health/diet-nutrition/a47009/what-is-acai/ https://www.thespruceeats.com/what-is-acai-4707708 https://www.jamba.com/newsroom/meet-new-jamba https://pressed.com/our-journey#Our_Journey Want more Doughboys? Check out our Patreon!: https://patreon.com/doughboys See omnystudio.com/listener for privacy information.

The Barron Report
230. Focus Brands Pulling Out the Stops for Growth

The Barron Report

Play Episode Listen Later Feb 17, 2023 35:19


The challenge that operators are dealing with today is the issue with consumer sentiment and its underperformance when sales and menu prices are up. Has the art of hospitality been lost? In today's podcast, I get a chance to break this down with Claiborne Irby, SVP of Strategy & Insight at Focus Brands. Our discussion topics:
Thoughts on new customer growth this year in the wake of consumer pushback on menu pricing - what is the strategy?
Where do you see the loyalty and community landscape going in the future, compared to what Starbucks and brands like Doritos are doing with Web3?
Where do you see your brands developing in terms of growth - new service models, marketing, and products?
What do you see as the largest issues facing the restaurant industry?
The shift of casual dining moving down, with new strategies to match fast casual
About: Atlanta-based Focus Brands® is a leading developer of global multi-channel foodservice brands. Focus Brands, through its affiliate brands, is the franchisor and operator of more than 6,400 restaurants, cafes, ice cream shoppes, and bakeries in the United States, the District of Columbia, Puerto Rico, Guam and over 55 foreign countries under the Auntie Anne's®, Carvel®, Cinnabon®, Jamba®, Moe's Southwest Grill®, McAlister's Deli®, and Schlotzsky's® brand names, as well as the Seattle's Best Coffee® brand on certain military bases and in certain international markets. Please visit www.focusbrands.com to learn more.

Canary Cry News Talk
PANG PAL

Canary Cry News Talk

Play Episode Listen Later Oct 11, 2022 155:21


Canary Cry News Talk #545 - 10.10.2022 - Recorded Live to Tape PANG PAL - Vatican Vandals, Nuke Prep, Micro Nations, New Gods, Sci-Sovereignty  A Podcast that Deconstructs Mainstream Media News from a Biblical Worldview. Harvard: Index of MSM Ownership (Harvard.edu)   SHOW NOTES Canary Cry News Talk #545 - 10.10.2022 - Recorded Live to Tape PANG PAL - Vatican Vandals, Nuke Prep, Micro Nations, New Gods, Sci-Sovereignty  A Podcast that Deconstructs Mainstream Media News from a Biblical Worldview. Harvard: Index of MSM Ownership (Harvard.edu)   Podcast = T - 3:07 (From YT) Timestamps by Jade Bouncerson   HELLO, RUN DOWN 5:59 V / 2:52 P BUILD BACK BETTER 8:05 V / 4:58 P American tourist smashes two sculptures in the Vatican (CNN)    DAY JINGLE/PERSONAL/EXEC. 14:17 V / 11:10 P   FLIPPY 25:16 V / 22:09 P Robots working in pharmacies will prepare prescription drugs for customers (Global Coverage)   NEW WORLD ORDER 31:07 V / 28:00 P U.S. buys radiation sickness drug as part of long-standing program (Reuters)   POLYTICKS 39:28 V / 36:21 P Meet President Baugh, self-proclaimed 'dictator' of a micronation in Nevada (Insider)   MONEY 59:21 V / 56:14 P PayPal Says It Won't Fine $2,500 for ‘Misinformation' after Backlash (Yahoo/National Review) → Harry Legs: Biden Warns Inflation Will Worsen if Republicans Retake Congress (NY Times) → Crypto: Treasury's financial stability watchdog warns crypto threatens U.S. economy (CNBC)   PARTY TIME 1:09:54 V / 1:06:47 P BREAK 1: TREASURE 1:14:20 V / 1:11:13 P   COVID 1:24:46 V / 1:21:39 P Why Dates and Times Seem to Lose Their Meaning (WSJ)   BEAST SYSTEM 1:33:49 V / 1:30:42 P Technology as the new God (MindMatters) → I once fell for the fantasy of uploading ourselves. It's a destructive vision (LA Times) → Metaverse: Why The Future Of The Metaverse May Lie In The FCC's Hands   BREAK 3: TALENT 1:55:36 V / 1:52:29 P   ANTARCTICA 2:04:45 V / 2:01:38 P World-class birder to speak at GC Audubon Society about Antarctica (Hot SR)   BREAK 4: TIME 2:15:58 V / 2:12:51 P END   This Episode was Produced By: Executive Producers Christine S** Sir Sigrah Beast**   Producers SIR MORV Knight of the Burning Chariots, Shield Maiden for Christ, PrgrssNtPrfctn, Sir Darrin Knight of the Hungry Panda's, Sir LX Protocol V2 Knight of the Berrean Protocol, Sir Scott Knight of Truth, Veronica D, Dame Gail M, Sir Casey the Shield Knight   Visual Art Dame Allie of the Skillet Nation Sir Dove Knight of Rusbeltia   CLIP PRODUCER Emsworth, FaeLivrin, Epsilon   TIMESTAPERS Jackie U, Jade Bouncerson, Christine C, Pocojoyo, Joelle S   SOCIAL MEDIA DOERS Dame MissG of the OV and Deep Rivers   LINKS HELP JAM   MICROFICTION Runksmash - “The Rainbow Wizard will be the toughest boss yet!” Thinks Chris, planning his next D&D campaign, but his plans are derailed when he reaches the Jamba and finds Tracy is setting up her replacement, those vacant googley eyes taunting him silently.   ADDITIONAL STORIES Elon Musk proposes China-Taiwan 'solution', days after his Russia-Ukraine 'peace plan poll' (Sky News)  Musk says Beijing doesn't want him to sell Starlink in China (CNBC)  Chinese Police Overseas NY Outpost Can Be Tool for Sabotage in US: Expert (Epoch Times)  Snitches get riches! 
Parents can win £50 Amazon voucher for tracking how many times children head the ball in Under 11s football matches as part of FA study on the banned manoeuvre (DailyMail)  Uvalde school district suspends entire police force, superintendent to retire amid fallout from shooting (abc News)  Greta Thunberg on the climate delusion: ‘We've been greenwashed out of our senses. It's time to stand our ground' (Guardian)  EXCLUSIVE: 'No basis in science or data... just ideology': Critics slam Harvard children's hospital for claiming babies know in the WOMB if they're transgender (DailyMail)  NIH launches the next stage of its ‘human genome project' for the brain (Stat)  The Universe Is Not Locally Real, and the Physics Nobel Prize Winners Proved It (Scientific American)  USDA Now Asking People to Register Their Vegetable Gardens for National Database (Free Thought Project)  Musk says Beijing doesn't want him to sell Starlink in China (CNBC)  This New Pet Robot Looks Straight Out Of A Disney Movie (Wonderful Engineering)

Rich On Tech
If you use a VPN on iOS, make sure to do this first

Rich On Tech

Play Episode Listen Later Aug 19, 2022 56:11 Very Popular


A researcher explains how to make VPNs more private on iOS; Walmart+ adds a Paramount+ perk; Jamba smoothie robot; Visible changes pricing and gets rid of Party Pay; Android 13 notable features. Viewers ask about the advantages of an eSIM for international travel; if I remember "monocle" mode on Yelp; Roku vs Fire TV; switching from a Samsung Galaxy S22 Ultra to an iPhone; unbundling Disney and ESPN; and whether an iPhone really needs a case and screen protector.
Links
Follow Rich on Instagram
Follow Rich on Twitter
Follow Rich on Facebook
VPNs on iOS
Walmart+ Paramount+
Jamba smoothie robot
Visible changes
Android 13
Recommended eSIM
Monocle mode on Yelp
Samsung S22 Ultra review
Disney bundle
iPhone durable commercial
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Learning Leader Show With Ryan Hawk
476: Kat Cole - Pragmatic Optimism, Reflection Questions, Humble Confidence, Building Trust, & The Hot Shot Rule

The Learning Leader Show With Ryan Hawk

Play Episode Listen Later Jun 12, 2022 75:24 Very Popular


Text Hawk to 66866 to become part of "Mindful Monday," and join tens of thousands of Learning Leaders who receive a carefully curated email from me each Monday morning to help you start your week off right... Full show notes at www.LearningLeader.com Twitter/IG: @RyanHawk12    https://twitter.com/RyanHawk12 Kat Cole is the President, COO, and board member at Athletic Greens. She was previously President and COO at Focus Brands, the parent company of Cinnabon, Auntie Anne's, Moe's, Schlotzsky's, McAlister's, Carvel, Seattle's Best Coffee International, and Jamba. She oversaw all businesses, their 6,000 operations globally, and the multi-brand licensing and CPG business with 90,000+ points of retail distribution. She has more than 20 years of operational, brand, and executive leadership experience and has an MBA from Georgia State University and an honorary Doctorate from Johnson and Wales University. This episode was recorded at the Insight Global Headquarters in Atlanta, GA as part of the Women's Leadership Council "Raise Your Hand, Raise Your Voice" event.
Notes:
A pragmatic optimist: When Kat was 9 years old, Kat's mom decided to leave her dad. Her dad was an alcoholic. Kat has two younger sisters. Kat was in multiple car accidents with her dad while he was driving drunk. At the age of 9, Kat looked at her mom and said, "What took you so long?" She learned that "the people who are closest to the action know what to do long before the senior leaders do. But they lack the language to articulate the problem and the solution. And they lack the authority to do something about it." "I learned to stay incredibly close to the people who are close to the action from that moment." "With all that he did, my mother never spoke ill of my father. I remember in all of those years, we were super poor. Taking meat scraps from the butcher. I remember one holiday season we were driving around looking at holiday lights. We went through the fancy neighborhoods and she said, 'isn't that beautiful, they must work so hard.' There are these things I absorbed that I started expecting from leaders. I learned to be grounded in the practical (the pragmatic part), but still optimistic because a whole lot is possible with very little, especially if the leader stays close to the action." "I am a learning leader. Learning is my currency. Oh! I get to do something new and I can help people, and I can make money doing it. And money is freedom because it's independence." "When we left my dad, my mom only had one goal, all she wanted was to raise three independent girls. Our willingness to be independent was her north star." Kat got a job at Hooters and quickly set the record for "close-opens," the shifts where you close the restaurant and open it the next day. She did it 22 straight days. She was then asked to travel to Sydney, Australia and open a new restaurant. She had never left the country and didn't have a passport. She said yes anyway. She went on to open restaurants on four continents before she was twenty.
How to build trust: It's important to lead through action, not just words. "Something as simple as when we get together in person, take time to buy the donuts and coffee or some AG1. Just that effort to find a way to do something that shows you care about their experience. I don't need to say 'I thought of you.' It is obvious." "In my role, my success is your success. Your success comes from me removing friction for you."
Vulnerability - Lead with vulnerability first. Share your story.  
Holding people accountable - A players do not like seeing B players, C players, people who don't give their best, being given equal opportunity. Someone needs to be in control, expectations are communicated and managed, and the leader is keeping us on the tracks. You have to hold people accountable.
Conflict resolution - On Friday night a regular patron would go to Hooters with his friends and order 50 wings... "After finishing the wings, he would call me over and say, 'there was only 40 wings.' He did this 4 weeks in a row." "The 4th Friday, he comes, orders 50 wings, and while they were finishing, before he finished, I, on my own waitress discount, ordered 10 wings, and brought them to him. And winked. And his buddies busted out in laughter, and he said, 'good one' and tipped me 100 bucks." "Don't confuse my kindness for weakness or stupidity. I'm generous. I'm thoughtful. I'm caring. I assume positive intent first, but I'm not going to be taken advantage of."
"Confidence is not an old-school, overly masculine swagger, I know what I'm doing, I've got this. It's a humble confidence. It's not I know what I'm doing, it's I know I can figure this out. My confidence is deeply humble. I have screwed up so many times. I spent 10 years doing humanitarian work on the border of Ethiopia. I know what bad actually looks like. Which keeps western world business bad equally in perspective. That helps me chill. And that translates as ease. And ease translates as calm. And calm translates as both maturity and confidence. But it's actually from perspective." "Confidence is built doing many new things where you are repeatedly uncomfortable." Humble confidence is like from The Mandalorian: "This is the way." "Traditional confidence, that swagger, can be successful. And can drive outcomes, but the teams don't last very long. But the humble confidence is a learning leader. Any leader who suggests they know the way will be wrong at some points. Teams won't last as long if they don't have humble confidence."
Productive achievers: The behaviors of the most successful humans have these four qualities: Courage & Confidence + Curiosity & Humility -- They must be equally balanced.
Speaking up - "If you are speaking up with the expectation of a specific outcome, you will always be disappointed. Period. That may be part of the problem. But if speaking up is about contributing and pushing the conversation forward, you're sort of lowering the expectation of the outcome. So I have very low expectations of the impact I make, but I don't expect one hand raise or one memo to change the world. But I do believe in participation." As a first-time vice president at Hooters, Kat was 26 years old. She was at the table and every one of her peers was in their 50s. They had been in business longer than she had been alive.
Kat's "Hot Shot Rule." The Hotshot Rule is the act of thinking of someone Kat admires, then pausing, reflecting, and asking what they would do in her situation/shoes/role, then answering what that one thing is and acting on it. The answer tends to appear quickly because it seems to be clear when you think about it through someone else's lens. That alone doesn't create change - the trick is taking action on it right away and then telling someone - the person it benefits, the person you envisioned who inspired you, or just someone you know will appreciate the change you've made. "Every time I tell my team, husband, or friend about the one thing I've done differently after the exercise, they say, 'What took you so long?' 
Or 'Finally!'" Kat's Monthly Reflection Questions: What has been the best part of the last 30 days? What has been the worst part of the last 30 days? Tell me one thing that I can do differently to be a better partner/teammate? What has worried you the most in the last 30 days? What is one thing you are most proud of in the last 30 days? What have you been most grateful for?