In this ABCD roundup, we unpack the GENIUS Act's bipartisan momentum as the U.S. moves toward stablecoin regulation. We also examine the record-breaking Bitcoin price and futures open interest hitting $72B amid growing institutional demand. In addition, we break down SoftBank's increased investment in TSMC, signaling a deeper bet on AI chip growth, and Google's latest search overhaul with AI chatbots to take on ChatGPT. Plus, Fortnite scores a major legal win against Apple, setting the stage for renewed platform battles in the app economy. To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
About the Episode: Chris Cunningham is a founding member and Head of Social Marketing at ClickUp, the fast-growing productivity platform now valued at $4 billion. Since shaping ClickUp's brand voice and social presence from 2017, Chris has been instrumental in engineering a content system that regularly generates 200M+ monthly impressions and consistently translates content virality into real leads and customers.

In this workshop episode of Uploading, Chris breaks down ClickUp's journey from early hustle—making videos solo and closing deals by hand—to building a repeatable, scalable content operation with an in-house "writer's room," comedic actors, and a growth strategy spanning multiple platforms.

Chris and host Blaine unpack content pivots, hiring creators, building brand voice, and why entertainment-first content matters for B2B. Chris also gets tactical: how to mix content types across the funnel, the operational playbook for consistent output, leveraging AI tools, success metrics, and what it takes to hit massive growth milestones.

Finally, Chris shares actionable frameworks for solo founders and small teams starting from scratch—plus candid takes on virality, team structure, platform strategy, and what's next for ClickUp's $4B content engine.

Today, we'll cover:
- How ClickUp scaled from low-budget solo content to 200M+ impressions per month
- The "bets" and breakthroughs that defined ClickUp's content playbook
- Building a repeatable system: team, workflow, "writer's room," and actors
- Entertainment vs. product-driven content—and the ideal content mix
- Measuring ROI: turning impressions and brand awareness into real leads and customers
- Frameworks and advice for solo creators and early-stage teams to start content from scratch
- Platform-specific strategies for LinkedIn, Instagram, TikTok, YouTube, and beyond
- Personalization, AI, and creator partnerships: the new wave of B2B content

What You'll Learn
1. Building a Scalable Content Engine
2. Hiring and Leveraging In-house Creators
3. Mixing Entertainment and Product Content
4. Omnipresence across Multiple Social Platforms
5. Testing, Iterating, and Doubling Down on Winners
6. Aligning Content with Business Goals and Funnels
7. Creating Efficient, Repeatable Content Systems

Timestamps
00:00 Meet Chris Cunningham: ClickUp's content architect
02:11 Chris's background: from agency to ClickUp's founding team
08:07 Platform-specific content strategy & goals
11:28 Making content a team priority: systems & scheduling
14:37 Inside ClickUp's Instagram strategy
15:38 The ABCD formula: testing for virality
16:09 Case study: viral skits, trends, & relatable office content
19:29 Operations: writers' room, shooting schedule, & execution
23:23 Starting from scratch: building in public & early tactics
25:47 Frameworks for virality: the anatomy of a viral video
27:41 Winning concepts: relatability, shareability, & emotional triggers
30:55 Scheduling vs manual posting: what works best
32:18 YouTube strategy: current state & future focus
33:36 Platform prioritization: focus, layering, & growth sequence
35:52 Content funnel mix: brand awareness vs product promotion
37:24 Content ratio: top, middle, & bottom of funnel by stage
40:00 Staff vs. actors: who should be in your content?
42:10 Video length: short vs long content & platform preferences
43:35 Looking ahead: 2025 content experiments & new channels
46:19 Where to follow Chris & ClickUp

"We're very big on shots on goal. We want to put as many shots up as possible, but we want to have calculated shots. We want to take them with low budgets… I'll make a bet and I'll start it very cheaply." — Chris Cunningham

"The only way it's really going to scale is if I brought in an expert... I took a bet that all companies would have content creators if they wanted to compete. They'll have some kind of creator that creates content for them consistently." — Chris Cunningham

"Content's just another task, right? Like anyone can make excuses. So if you're just not making content, it means you don't prioritize it. We prioritize it." — Chris Cunningham

"The dividends content rewards with is nuts. The amount of people I've met, the people who DM me and just what I'm learning… There's no reason not to make content." — Chris Cunningham

"If I had to start over and I'm at a new company—we're building in public... No actors, just talking about what we're working on. At the end of the day, I would just ask for like 5-10 minutes of all the early employees: what did you do today? And find a cool, clever way to chop it up. That's exactly what I would do." — Chris Cunningham

"You need to know your ICP. If you're creating content and you don't know who you're creating for, you really just lost the whole goal right there." — Chris Cunningham

Show notes powered by Castmagic

---

Have any questions about the show or topics you'd like us to explore further? Shoot us a DM; we'd love to hear from you.
Want the weekly TL;DR of tips delivered to your mailbox? Check out our newsletter here.
Follow us for content, clips, giveaways, & updates!
Castmagic Instagram | Castmagic Twitter | Castmagic LinkedIn

---

Blaine Bolus - Co-Founder of Castmagic
Ramon Berrios - Co-Founder of Castmagic
Chris Cunningham - Head of Social Marketing at ClickUp
Treating every sales partner the same feels fair, but it holds back your growth. In this episode you'll learn a simple tool for doing it smarter.

Did you take away valuable information today that you didn't already know or that moves you forward? Maybe a new perspective has even opened up for you. If you're ready to question your status quo and take it to the next level, book your free analysis call at: stornofabrik.de

If you had to laugh, cry, or curse at least once, please leave us a 5-star rating and some feedback on iTunes, subscribe to this podcast, and share it with your colleagues. It takes two minutes at most and helps us keep improving the podcast and tailoring the content even better. Thanks in advance for your support! Of course, you're also welcome to leave us constructive feedback or message us on Instagram if you feel addressed or provoked and want to talk to us. We always seek exchange and keep developing through new perspectives!

Outro music: Miami Nights by Chemical Circuits (license and download)
Recruiting Future is the podcast that helps Talent Acquisition teams drive measurable impact by developing strategic capability in Foresight, Influence, Talent, and Technology. Skills shortages in emerging technologies have reached critical levels, with companies all fishing in the same tiny talent pool for experienced professionals. But if these technologies are relatively new, why assume only experienced talent can work with them? In such fast-moving industries, how can companies develop their early career hiring strategies to ensure they get net new talent who can be productive immediately? My guest this week is Tan Moorthy, CEO of Revature. Revature is helping employers build pipelines of entry-level talent by giving high-potential hires the training and development they need to be effective from their first day. In our conversation, Tan gives us an insight into a structured approach to identifying, developing, and deploying new talent, which is transforming how organizations solve their tech talent shortages. In the interview, we discuss: Are employers looking at tech skill shortages through the wrong lens? The ABCD of in-demand skills (AI, big data, cloud, digital) AI Native entry-level talent Critical thinking and problem solving What employers get wrong about upskilling Cohorts, structure, and impact metrics Talent as a C-Suite priority The process to identify, develop, and deploy net new talent The role of technology and data Focusing humans on unique human skills What does the future look like Follow this podcast on Apple Podcasts. Follow this podcast on Spotify.
89% of entrepreneurs report feeling stuck in "mental overdrive"—even when they try to rest. But what if the problem isn't just stress—it's how you're starting your day?

In this powerful and eye-opening episode, George sits down with Dr. Romie Mushtaq, board-certified neurologist, Chief Wellness Officer of Great Wolf Resorts, and bestselling author of The Busy Brain Cure. Together, they explore how brain chemistry—not productivity hacks—is the real key to healing a busy brain, boosting energy, and staying focused.

Dr. Romie shares the neuroscience behind your first 30 minutes of the day, how to recognize if you're in a high or low dopamine state when you wake up, and the exact steps you can take to reset your brain chemistry—before the chaos kicks in.

Plus, they dive into empathetic leadership, hope science, and why emotional intelligence is the real superpower in business today.

What You'll Learn in This Episode
- Why your first 30 minutes of the day rewires your brain—for better or worse
- How to identify if you're starting the day in low dopamine or high dopamine
- The specific rituals that boost your brain chemistry naturally (no caffeine needed!)
- The hidden cost of force-based productivity culture—and how to escape it
- The ABCD model of empathetic leadership you can use in life and business
- How hope science is the new foundation for leadership, culture, and personal fulfillment

Key Takeaways
✔️ Your brain isn't broken—it's busy. And it can be rewired.
✔️ Caffeine first thing in the morning spikes cortisol and sabotages your brain.
✔️ Hydration, music, light exposure, movement, and gratitude are powerful brain boosters.
✔️ Empathetic listening (not logic dumping) is the leadership skill of the future.
✔️ Hope isn't optional—it's a strategic advantage in leadership and business.
✔️ Your rituals create your results. Your energy sets the standard for everything you lead.

Timestamps
[00:00] – 89% of entrepreneurs: stuck in mental overdrive
[02:00] – Welcome back Dr. Romie: bestselling author + wellness leader
[06:00] – Busy Brain basics: why your brain feels stuck
[10:00] – High dopamine vs. low dopamine: how you're starting your day
[14:00] – Ideal morning rituals to rewire your brain chemistry
[20:00] – The dangers of caffeine in the first 30 minutes
[26:00] – How hydration, light, and movement reset your brain
[32:00] – Why your environment dictates your success (set it up!)
[36:00] – How to practice empathetic listening (ABCD model)
[42:00] – Why hope—not hustle—is the future of leadership
[51:00] – Modeling hope and being a hope-holder for others
[58:00] – Final reflections: setting your brain—and your world—up to win

Choose Your Next Steps:
- Audit your first 30 minutes tomorrow morning: no phone, no caffeine.
- Pick two positive rituals (movement, gratitude, hydration, music, prayer).
- Reflect on your dopamine state: are you crashing or sprinting?
- Practice empathetic listening with one person this week—just listen and hold space.
- DM @itsgeorgebryant or Dr. Romie with your biggest breakthrough!

Resources Mentioned
- Want even deeper insights into busy brain, burnout, and brain health? Listen to Dr. Romie's first appearance on The Mind of George Show: Managing the Burnout, Anxiety, & Depression Created By Your "Busy Brain"
- Grab Dr. Romie's Book: The Busy Brain Cure – Learn more here
- Join The Alliance – The Relationship Beats Algorithms™ community for purpose-driven entrepreneurs
- Apply 1:1 Coaching – Scale with clarity, simplicity, and connection
- Live Events – Step into the room where everything changes: mindofgeorge.com/event
Feeling stuck with your fat loss goals?

You've been trying. Maybe even trying hard. Eating better. Moving more. Doing "the right things." But the scale isn't moving. Your momentum's slipping. And that voice in your head is getting louder: "What's the point?"

We're breaking down simple fat loss tactics that actually work—nothing extreme, nothing complicated. Just real-world strategies that help you reset, refocus, and start making progress again. No calorie-counting spreadsheets. No trendy rules. Just four practical moves that can change the way you eat, think, and show up for yourself—especially when things get busy or hard.

And yeah—we'll also talk about the deeper stuff. The mindset. The habits. The quiet ways we sabotage our own success.

*Tried every diet out there and nothing sticks? Imagine having a coach checking in on you EVERY day—keeping you on track and adapting your plan to whatever life throws your way. A personalized, 1:1 coaching program designed just for you can really change the game.
➤ https://www.mybodytutor.com/coaching/weight-loss

Why don't we talk and see if we're a good fit for each other? Let's set up a chat.
➤ https://www.mybodytutor.com/book-a-call

What've you got to lose?
Episode 25:15 ABCD - Is There A Common Cause Behind Alzheimer's, Blood Pressure, Cancer and Diabetes?

Most Americans, as they age, will have to deal with one or more of the following "ABCD" conditions: Alzheimer's, Blood Pressure, Cancer, and Diabetes. Each of these conditions is treated as a separate condition requiring a separate specialist: a Neurologist for Alzheimer's, a Cardiologist for Blood Pressure, an Oncologist for Cancer, and an Endocrinologist for Diabetes.

What if this approach is wrong? That is, what if these conditions have more in common than the medical profession has led us to believe? What if they are simply "different branches of the same tree?" And, what if there are a few root causes that are common to each of these conditions?

On this episode I explore these questions. In addition, I share five simple blood tests that, in my opinion, reveal the root causes behind Alzheimer's, Blood Pressure, Cancer and Diabetes. This is an episode you DON'T want to miss. It's also one you'll want to share with your friends. Thanks!

———————-

Want to learn more? Continue the conversation regarding this episode, and all future episodes, by signing up for our daily emails. Simply visit: GetHealthyAlabama.com Once there, download the "Symptom Survey" and you will automatically be added to our email list.

———————-

Also, if you haven't already, we'd appreciate it if you'd subscribe to the podcast, leave a comment and give us a rating. (Thanks!!!) On Facebook? Connect with us at Facebook.com/GetHealthyAlabama

* This podcast is for informational and educational purposes only. It is not intended to diagnose or treat any disease. Please consult with your health care provider before making any health-related changes.
• Waqf Bill: did 200 hours of consultation really take place? - Nadda in Parliament!
• Sonia Gandhi voices her opposition
• Waqf Bill - AIADMK opposition?
• Waqf Bill in the Rajya Sabha - G.K. Vasan supports, Anbumani boycotts
• "The intent behind bringing the Waqf Bill is wrong and harmful" - Tiruchi Siva
• Tiruchi Siva sings well in Hindi - Nirmala Sitharaman
• "You don't even know the ABCD of Waqf..!" - John Brittas
• Handing Katchatheevu to Sri Lanka was against the Constitution - T.R. Baalu!
• "The banking sector is operating like a low-cost airline" - Karti Chidambaram's demand
• No violence in Manipur for four months - Amit Shah
• Modi meets the Thai Prime Minister?
• PM Modi leaves tomorrow on a three-day state visit to Sri Lanka!
• 25,753 appointments in West Bengal's school education department invalid - Supreme Court
• 'Empuraan' film controversy - debate in the Legislative Assembly!
• Is EPS a historic hero..? Then what am I..?
• AIADMK walks out, claiming it was denied permission to speak!
• "If the Waqf Board amendment is unwanted, then why a Hindu Religious and Charitable Endowments Department?" - Vanathi Srinivasan asks
• NEET exam - all-party meeting on April 9!
• TVK: 'A stain on the Indian Constitution!' - Vijay opposes the Waqf Amendment Bill!
• TVK heading to court against the Bill? - Sibi
• Modi: Will Modi meet Edappadi and Panneer separately on April 6? - Political scene abuzz!
• "I have blown the conch that needed to be blown" - Saidai Duraisamy insists
• AIADMK's IT wing strongly condemns Saidai Duraisamy for saying the AIADMK should unite.
• The Chief Minister consoles Su. Venkatesan!
• "Those who think cracks will appear in the alliance will not see their wish fulfilled" - M.K. Stalin
• "Today's politics is caught in the hands of strategists" - Raju Murugan at the CPIM conference
• "Communist movements must work together with the Periyar and Ambedkar movements" - Solomon Pappaiah
When we end a relationship, grief is inevitable, but there's something no one tells us: it's an active process. It's not just about waiting for the pain to pass, but about taking action to heal and rebuild ourselves. In this episode I share the keys to facing romantic grief in a conscious, transformative way. From Worden's theory to my own experience writing Diario de un duelo, I explain how to make sense of the loss, emotionally relocate your ex, and create space for what's new. Because yes, losing someone hurts, but it can also be the start of a new version of you. Hit play and join me in this honest, colorful conversation!
A message from Francois for the "Jésus dans les évangiles" (Jesus in the Gospels) series, based on the passage where Jesus multiplies the loaves. This passage is key to understanding how to navigate four seasons of life (ABCD)! Enjoy listening!
Gauranga Das discusses the importance of preparing for life beyond mortality, emphasizing that spiritual teachings guide individuals in planning their journey after life. He highlights the significance of association, sacred literature, contemplative practices, and mindful diet in shaping one's consciousness and spiritual growth. The conversation also touches on the necessity of finding one's purpose and the role of discipline in leading a fulfilling life.
Do you need professionalism credits? Look no further and listen to host Jackie Lee discuss experiences with the ABCD for counseling from guests Cathy Quock and Lydia Tolman.
In this roundup, we dive into a packed week of digital assets and cover how markets broke out of five straight weeks of selling, perhaps moving back into buying territory. On the acquisition front, Kraken acquires NinjaTrader, expanding its digital asset trading reach, while SoftBank's $6.5B deal for Ampere Computing may underscore the intensifying race to secure AI infrastructure. We also discuss CoreWeave's $2.7B IPO plans amid demand for AI cloud services, Dubai's ambitious real estate tokenization pilot targeting $16B by 2033, and recent AI funding rounds fueling growth in the compute and infrastructure space. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
Financial cooperativism plays an important role in Brazilian society, offering an alternative to the traditional model of financial institutions and fostering a closer relationship with its members. In today's context, where companies are expanding their platforms and structuring ecosystems to offer integrated solutions, cooperatives are also adopting new approaches and exploring innovation opportunities both organically and through partnerships.

In the fourth episode of the Orquestradores de Ecossistemas (Ecosystem Orchestrators) series, produced by Fintech Talks in partnership with 180 Seguros, hosts Bruno Diniz and Mauro D'Ancona, CEO of 180 Seguros, welcome Moacir Niehues, Executive Director of Sicredi Vale do Piquiri ABCD, to discuss how Sicredi has been adapting to this landscape.

The conversation covers the evolution of cooperativism in Brazil, Sicredi's move into major urban centers, and its strategy for integrating new financial solutions, including insurance, as well as non-financial ones. We also explore the role of intercooperation among cooperatives, the challenges of competing with banks and fintechs, and the innovations Sicredi has adopted, including Embedded Finance, Open Finance, and new AI-based digital interfaces.

Watch now to understand how cooperativism is positioning itself in today's financial market and which strategies Sicredi has adopted to expand its impact!

***This program is brought to you by 180 Seguros, the first tech insurer in Brazil. Offer insurance in a customized way, integrated into your customers' purchase journey, to generate more value for your customers and better results for your business.
On this episode of the ABCD Roundup, Mark is solo today as Xavier is out welcoming baby number two! Mark breaks down the latest Bitcoin volatility, market trends, the impact of the current trade wars, and how global economic shifts could shape the future of finance. We explore China's strategic dominance in AI and chips, the Federal Reserve's interest rate dilemma, and whether Bitcoin will become a strategic reserve asset. Plus, he details how AI agents are changing everything from research to real-world applications. Stay ahead of the curve with insights on this week's news in tech and markets. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
In the first episode of season 2, we speak with Jack Smyth – a freelance designer and illustrator from Ireland. His clients include Penguin Random House, Faber & Faber, HarperCollins, Granta, Daunt Books, Simon & Schuster, The New York Times, The New Yorker, Politico and The Atlantic. He has previously worked in-house at 4th Estate, Simon & Schuster, Little, Brown and Tower Records and holds an MA in Graphic Design from Kingston University. In 2024, he was named Designer of the Year at the British Book Awards. He has received 9 ABCD awards, a BBDPA award and has been featured in Creative Review, It's Nice That, Communication Arts and the 100 Archive. He lives in Dublin with his wife and cat. Cover Meeting was hosted by Steve Leard and produced by James Ede of beheard.org.uk.
This ABCD roundup discusses Trump's executive order establishing a Strategic Bitcoin Reserve and U.S. Digital Asset Stockpile, while the Texas Senate passes a bill to create a statewide Bitcoin reserve. We also touch on the latest China AI developments, analyzing how they fit into the evolving global tech landscape. In addition, we highlight the Bitcoin hammer candle as our Chart of the Week. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
In this ABCD roundup, we dive deep into the shocking $1.4 billion ByBit hack—what happened, what it means for the broader crypto industry, and the lessons investors should take away. We also break down the latest market slumps, analyze trending venture dollars in blockchain-oriented investments in 2025, and highlight our Chart of the Week. Do not miss this discussion on the state of crypto and where the market could be headed next. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
In this episode, we discuss Codeium, an AI coding startup that is eyeing a $3B valuation just months after its last funding round. We also talk about how Apple's iPhone 16e is shifting toward on-device AI processing and Microsoft's Majorana 1 quantum chip breakthrough, which could fast-track fault-tolerant quantum computing. Plus, our Chart of the Week reveals ChatGPT's milestone of reaching 400 million weekly active users, and we reminisce on its historical scaling to the first 100 million users. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/ . To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
In this special Valentine's Day episode, co-host Mark Yusko dials in from his travels in Hong Kong and Singapore to discuss the latest news and trending themes on the ABCD's in Asia with co-host Xavier Segura. They dissect a handful of fresh off the press headlines like Apple & Alibaba's proposed AI partnership, President Trump's call for defense talks with Russia and China, Franklin Templeton's interest in the Solana blockchain, and China's budding crypto landscape. Join us for an insightful conversation on these topics and more! Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/ . To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
In this episode, we dive into the latest high-stakes moves in AI and blockchain. France and the UAE have announced plans for a €30-50 billion AI campus, aiming to position Europe at the forefront of the AI revolution—just as DeepSeek shakes up the landscape with new breakthroughs in cost efficiency and performance. Meanwhile, Amazon is poised to outspend its rivals with a massive $100 billion of projected investments in AI infrastructure, setting the stage for intensified competition among tech giants. On the blockchain front, BlackRock has strengthened its Bitcoin position, increasing its stake in Strategy (formerly MicroStrategy) to 5%, a move that could further solidify institutional confidence in BTC. Join us as we break down these major developments and their potential impact on AI, digital assets, and the future of global finance. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/ . To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
In this ABCD Roundup, Co-host Mark Yusko kicks off the episode solo, discussing recent market trends in crypto and the political moves surrounding digital assets that have transpired during President Trump's first few weeks in office. Co-host Xavier Segura dials in, returning from his travels to the Middle East to round out the episode highlighting DeepSeek's groundbreaking AI advancements, challenging the dominance of U.S. tech giants and reigniting the U.S.-China tech rivalry. From their cost-effective AI models to allegations of espionage and intellectual property theft, DeepSeek is stirring up excitement and controversy. Join Mark and Xavier as they analyze topics such as crypto utility, the market dynamics of meme coins, and open vs. closed source AI models. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/ . To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
Here, we dive deep into effective time management strategies for real estate professionals to help you stay organized, boost productivity, and deliver an outstanding client experience. Learn how to prioritize your tasks using methods like the ABCD prioritization system and the Eisenhower Matrix, delegate efficiently, and leverage AI tools to automate repetitive tasks. We'll explore how time management can positively impact your reputation, making you more reliable, responsive, and available for your clients. Plus, discover the power of simple systems like the “rocks, pebbles, and sand” analogy to prioritize the most important tasks, and learn how consistent time management can help you stand out from the competition. Whether you're struggling with overwhelm or need tips on staying on track, this video has practical solutions to take your real estate business to the next level.
In this week's ABCD Roundup, Xavier and Mark dive into the world of meme coins, from the explosive rise (and fall) of Trump Coin to the broader crypto assets market. The hosts break down what they believe separates true innovation from the hype-fueled chaos of potential pump-and-dump schemes. They also explore the evolving role of AI in reshaping global labor markets and the massive $500 billion push toward AI data centers in the U.S. Tune in for insights, conversations, and recent news on the digital ecosystem. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
This ABCD roundup episode examines the recent surge in Ripple (XRP). We also discuss President Biden's executive order to accelerate sustainable AI data centers while new export controls on AI chips spark innovation debates. In blockchain, Senator Lummis questions the FDIC, and we discuss asset class returns in our chart of the week. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or a solicitation for the sale of any security, advisory, or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
ABCD is for Down: No Limit Noah by Cindy Lasi

ABCD is for Down is an inspiring, joyful children's book featuring a cool cartoon character who just so happens to have been born with Down Syndrome. The hero of this alphabet story book is called No Limit Noah. NLN was written in the spirit of inclusion and integration. This book was written for all children, sharing positive messages through the fun, imaginative antics of NLN.

ABCD is for Down, through its fun, bright, colourful pages, teaches the meaning of acceptance and inclusion, regardless of differences or disabilities. This heartfelt book reflects the pure positive spirit of NLN. It is the author's desire to bring joy and inspiration to parents, families, caregivers, and all children living with challenges today. No Limit Noah brings fun and adventure to life! Showing that when you live with kindness and joy in your heart… there really are no limits!

Cindy Lasi is a first-time author and former nurse, and spent nearly 10 years as a manager for a global not-for-profit organization. Advocacy for those who may not have the means to speak up for themselves has always been important to her. Cindy grew up in a town of 32,000 residents in the southernmost part of Ontario, Canada. Living in such a small, uneventful area sparked Cindy's imagination, which would allow her to dream big and begin a life of service. This passion and compassion for people led her to careers that allowed her the opportunity to use her natural gifts and talents in service to others. Cindy is a faithful Christian who believes we can all offer big or small gifts of kindness and compassion, and that a loving heart can inspire and empower those who may not know how to advocate for themselves.

https://www.amazon.com/ABCD-Down-Cindy-Lasi/dp/B0DNVNJRDW/ref=sr_1_1?crid=2OE9J090J50O&dib=eyJ2IjoiMSJ9.thRVMTlrU-hIoYQnphFdvQ.v0w-LxL8icmC4LX2IPtmGjzSVUHA9Wa4NkTTHck65ag&dib_tag=se&keywords=ABCD+is+for+Down&qid=1733876526&s=books&sprefix=abcd+is+for+down%2Cstripbooks-intl-ship%2C277&sr=1-1
https://nolimitnoahbooks.com/
https://gothambooksinc.com/
http://www.bluefunkbroadcasting.com/root/twia/11625gb1.mp3
Learn a systematic approach to management of wide complex tachycardia that doesn't rely on exact rhythm identification or scoring systems with guest Dr. Kevin Butler. By understanding the physiology rather than memorizing a score or algorithm you can use Kevin's approach and ABCD mnemonic to help differentiate between ventricular tachycardia, SVT with aberrancy, and other arrhythmias. Learn how not to miss the vital mimics with his SPAM filter. And finally by understanding which treatments are safe and which treatments potentially dangerous you can understand how to move forward even when you're not exactly sure what the arrhythmia is. Whether you're an EMT, paramedic, or emergency physician, this episode provides the essential principles and stepwise strategies to ensure effective and safe patient management. 00:00 Introduction to Arrhythmia Management 00:29 Personal Anecdote: Lessons from Paramedic School 02:04 Introducing Dr. Kevin Butler 02:16 Sponsor Message: The Recess Tailor 03:35 Welcome to EMS Cast 04:04 Understanding Wide Complex Tachycardia 05:37 Approach to Arrhythmia Management 07:02 Principles of Managing Wide Complex Tachycardia 08:39 Pathophysiology of Wide Complex Tachycardia 18:30 Identifying and Treating Mimics 26:55 Regular vs. Irregular Wide Complex Tachycardia 29:59 Understanding Torsades and Its Management 30:25 Identifying Atrial Fibrillation with Pre-Excitation 32:14 Treatment Considerations for AFib with WPW 35:02 Distinguishing Between VT and SVT with Aberrancy 37:09 Physiological Approach to ECG Interpretation 40:24 ABCD Mnemonic for VT Diagnosis 47:49 EMS Treatment Protocols for Wide Complex Tachycardia 51:54 Pre-Hospital Cardioversion Tips 55:22 Emergency Department Approach to Wide Complex Tachycardia 58:00 Final Thoughts and Key Takeaways Resources The Resus Tailor Episode Blog Post with EKG examples Survey- Help us learn what content you want Gear We Like Good Stethoscope - https://amzn.to/3YJJrf2 Good Shears - https://amzn.to/40FROuF or https://amzn.to/3ChZ4Tn Notepad for taking notes on calls - https://amzn.to/3Z1X21J Sunglasses - https://frontline-optics.com/discount/EMSCAST15 Books we recommend - The Dichotomy of Leadership - https://amzn.to/4fiCAjN Extreme Ownership - https://amzn.to/3O1FWfa Managing the Unexpected: Sustained Performance in a Complex World - https://amzn.to/3V7BwYf Thinking Fast and Slow - https://amzn.to/4fiJG85 A Thousand Naked Strangers: A Paramedic's Wild Ride to the Edge and Back - https://amzn.to/3YJJrf2 Guest/Cast/Crew information- Guest- Dr. Kevin Butler, Emergency Physician, Lead Instructor for DHREM's EKG didactic curriculum Host- Ross Orpet, Will Berry Catch up with us after the show Instagram- @emscast Twitter- @ems_cast Website- www.emspodcast.com
Exam Study Expert: study tips and psychology hacks to learn effectively and get top grades
Today, Nkechi L Ifediora joins us to break down her ABCD principles of smarter living:
Assess / acceptance
Boundaries
Community
Decision making

Nkechi is a doctor, director, Mum, executive coach, writer and influencer, who's built a large following through her practical and compassionate advice on living smart.

Join Nkechi's 50,000+ followers on Instagram: https://www.instagram.com/nkechilifedioraofficial/
And learn about her work and her book on her website: https://nkechilifediora.com/

The Exam Study Expert podcast is hosted by William Wadsworth, memory psychologist, independent researcher and study skills coach. I help ambitious students to study smarter, not harder, so they can ace their exams with less work and less stress.

BOOK 1:1 COACHING to supercharge your exam success: https://examstudyexpert.com/workwithme/

Get a copy of Outsmart Your Exams, my award-winning exam technique book, at https://geni.us/exams

*** As an Amazon Associate, I earn from qualifying purchases on suggested books.

Questions? Comments? Requests? Or just want to say "thanks" - send me a text message (I read them all!).
Digital Currents hosts Mark Yusko and Xavier Segura are back together to highlight major headlines in Tech this week on the ABCD Roundup. The episode explores AI's leap toward agentic intelligence, NVIDIA's dominance in chip innovation, and China's growing influence in semiconductors. They also discuss major themes at CES 2025 and the potential impact of digital adoption on global markets in the year ahead. Tune in for an engaging conversation about volatility, opportunity, and the forces shaping the digital economy in 2025. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com. Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or as a solicitation for the sale of any security or any advisory or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
Digital Currents host Mark Yusko leads 2025's inaugural episode of the ABCD Roundup. Mark highlights topics like Bitcoin's Sweet 16, insights into Ethereum's trajectory in the new year, Solana's resurgence, and the growing influence of DeFi. We dive into the transformative tech trends shaping the year to come, from AI's evolution into agentic intelligence to groundbreaking advancements in chip technology like photonics and fully homomorphic encryption. We also explore the shifting tides of global markets, wealth transfers, and digital adoption as traditional finance faces generational change. Join us for a thoughtful discussion on innovation, volatility, and progress in the digital frontier. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com Legal Disclaimer This podcast is for informational purposes only and should not be construed as investment advice or as a solicitation for the sale of any security or any advisory or other service. Investments related to the themes and ideas discussed may be owned by funds managed by the host and podcast guests. Any conflicts mentioned by the host are subject to change. Listeners should consult their personal financial advisors before making any investment decisions.
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for these models: "efficient models", "retentive networks", "subquadratic attention" or "linear attention", but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture. So, for lack of a better term, we decided to call this segment "the State of Post-Transformers" and fortunately everyone rolled with it.

We are fortunate to have two powerful friends of the pod to give us an update here:

* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year
* Recursal AI: with CEO Eugene Cheah, who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers.

We were looking to host a debate between our speakers, but given that both of them were working on post-transformer alternatives, the session ended up as a joint presentation instead.

Full Talk on Youtube

Please like and subscribe!

Links

All the models and papers they picked:

* Earlier Cited Work
* Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
* Hungry hungry hippos: Towards language modeling with state space models
* Hyena hierarchy: Towards larger convolutional language models
* Mamba: Linear-Time Sequence Modeling with Selective State Spaces
* S4: Efficiently Modeling Long Sequences with Structured State Spaces
* Just Read Twice (Arora et al)
* Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference.
However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. * To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.* Jamba: A 52B Hybrid Transformer-Mamba Language Model* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. * This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. 
Core designs include: * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. * (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. * As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. * RWKV: Reinventing RNNs for the Transformer Era* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability. * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference. * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.* LoLCATs: On Low-Rank Linearizing of Large Language Models* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs. * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute. * We base these steps on two findings. * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA). * LoLCATs significantly improves linearizing quality, training efficiency, and scalability. 
* We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU. * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens. * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work). * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.

Timestamps
* [00:02:27] Intros
* [00:03:16] Why Scale Context Lengths? or work on Efficient Models
* [00:06:07] The Story of SSMs
* [00:09:33] Idea 1: Approximation -> Principled Modeling
* [00:12:14] Idea 3: Selection
* [00:15:07] Just Read Twice
* [00:16:51] Idea 4: Test Time Compute
* [00:17:32] Idea 2: Hardware & Kernel Support
* [00:19:49] RWKV vs SSMs
* [00:24:24] RWKV Arch
* [00:26:15] QRWKV6 launch
* [00:30:00] What's next
* [00:33:21] Hot Takes - does anyone really need long context?

Transcript

[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks! Our next keynote covers the State of Transformers alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Chia of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Vipul Ved Prakash introducing them.[00:00:49] AI Charlie: And CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents,[00:01:15] AI Charlie: BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch.[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model [00:02:00] modified with RWKV linear attention layers. Eugene has also written the single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts, on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.[00:02:27] Intros[00:02:27] Dan Fu: Yeah, so thanks so much for having us.
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the RWKV team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in post-transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI o1 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little DALL-E 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, but for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction.
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
It actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators where not only were you not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called state space models. So here the seminal work is, is the S4 work in 2022.[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.[00:09:33] Idea 1: Approximation -> Principled Modeling[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example, a transformer-like next token prediction architecture. So some of those early state space model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principled theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality. And when this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like Long Range Arena, some long sequence evaluation benchmarks, there was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that's so influential about these state space models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't parallelize as well as attention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures.
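To make the S4-era trick Dan describes concrete, here is a minimal, scalar-state sketch in PyTorch (with illustrative names and a toy parameterization, not the actual S4 code): the linear recurrence is unrolled into a long convolution whose kernel is built from powers of the state transition, and that convolution is evaluated in O(n log n) with the FFT.

```python
# Toy, scalar-state illustration of the S4-era idea: a linear recurrence can be
# unrolled into a long convolution and computed with the FFT. Shapes, names,
# and the scalar state are simplifications, not the real S4 parameterization.
import torch

def ssm_recurrent(a, b, c, x):
    """Reference O(n) sequential scan: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t."""
    h, ys = 0.0, []
    for x_t in x:
        h = a * h + b * x_t
        ys.append(c * h)
    return torch.stack(ys)

def ssm_fft(a, b, c, x):
    """Same map as a convolution y = K * x with K_j = c * a^j * b,
    evaluated in O(n log n) via FFT (zero-padded to avoid circular wraparound)."""
    n = x.shape[0]
    K = c * (a ** torch.arange(n, dtype=x.dtype)) * b   # unrolled convolution kernel
    L = 2 * n                                            # pad so the circular conv acts like a linear one
    y = torch.fft.irfft(torch.fft.rfft(x, L) * torch.fft.rfft(K, L), L)[:n]
    return y

x = torch.randn(512)
a, b, c = 0.9, 0.5, 1.3
print(torch.allclose(ssm_recurrent(a, b, c, x), ssm_fft(a, b, c, x), atol=1e-3))  # True
```

The only point of the comparison is that both functions compute the same sequence map; real S4-style layers use structured, higher-dimensional state matrices and numerically careful kernel computations.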
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like FlashFFTConv, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, we are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because language is so core to what we do in sequence modeling these days, the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these Hyena models was that one way you can do this is by just adding some simple element-wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make the ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back.
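As a rough illustration of the selection idea, the sketch below wraps a stand-in sequence-mixing core with a data-dependent, element-wise sigmoid gate, which is the general shape of the gating in H3/Hyena-style layers. It is not any of those papers' actual modules, and Mamba goes further by making the state space parameters themselves input-dependent rather than only gating the output.

```python
# Toy sketch of "selection via gating": wrap a sequence-mixing core (here a
# causal depthwise convolution as a stand-in black box) with data-dependent,
# element-wise gates so the model can choose what to read out of its state.
# Illustrative only; not the actual H3, Hyena, or Mamba layers.
import torch
import torch.nn as nn

class GatedSequenceMixer(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_model)
        self.gate_proj = nn.Linear(d_model, d_model)   # gate is computed from the input itself
        self.core = nn.Conv1d(d_model, d_model, kernel_size=3, padding=2, groups=d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):                               # x: (batch, seq_len, d_model)
        u = self.in_proj(x)
        mixed = self.core(u.transpose(1, 2))[..., : x.shape[1]].transpose(1, 2)  # causal mixing stand-in
        gate = torch.sigmoid(self.gate_proj(x))         # data-dependent, element-wise
        return self.out_proj(gate * mixed)              # the gate selects what flows onward

layer = GatedSequenceMixer(64)
print(layer(torch.randn(2, 128, 64)).shape)             # torch.Size([2, 128, 64])
```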
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. 
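Tying the earlier linear attention discussion to BASED, here is a hedged, single-head sketch of feature-map linear attention with a second-order Taylor approximation of exp(q·k): running sums over phi(k) and phi(k)·v make the cost linear in sequence length. Normalization is simplified and the sliding-window half of BASED is omitted, so treat it as an illustration of the idea rather than the paper's implementation.

```python
# Linear attention with a 2nd-order Taylor feature map, roughly the idea behind
# BASED: exp(q.k) ~ 1 + q.k + (q.k)^2/2, realized so that phi(q).phi(k) equals
# that approximation. Accumulating phi(k_t) and phi(k_t) v_t^T as running state
# avoids the n x n attention matrix. Single head, simplified normalization, no
# sliding window or chunking; not the paper's actual code.
import torch

def taylor_feature_map(x):                      # x: (seq, d) -> (seq, 1 + d + d*d)
    d = x.shape[-1]
    x = x / (d ** 0.25)                         # scale so the 2nd-order term stays tame
    quad = (x.unsqueeze(-1) * x.unsqueeze(-2)).flatten(-2) / (2 ** 0.5)  # all pairwise products
    return torch.cat([torch.ones(*x.shape[:-1], 1), x, quad], dim=-1)

def causal_linear_attention(q, k, v):           # all (seq, d); returns (seq, d_v)
    fq, fk = taylor_feature_map(q), taylor_feature_map(k)
    kv_state = torch.zeros(fq.shape[-1], v.shape[-1])   # running sum of phi(k_t) v_t^T
    norm_state = torch.zeros(fq.shape[-1])               # running sum of phi(k_t)
    out = []
    for t in range(q.shape[0]):                 # O(n) in sequence length
        kv_state = kv_state + fk[t].unsqueeze(-1) * v[t].unsqueeze(0)
        norm_state = norm_state + fk[t]
        out.append((fq[t] @ kv_state) / (fq[t] @ norm_state + 1e-6))
    return torch.stack(out)

q, k, v = (torch.randn(256, 16) for _ in range(3))
print(causal_linear_attention(q, k, v).shape)   # torch.Size([256, 16])
```

The recurrent form above is also why these models pair naturally with the fixed-state-size discussion later in the talk: everything the model can recall has to fit in kv_state.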
So, so the four key ideas are instead of just doing a simple linear attention approximation, instead take ideas that we know from other fields like signal processing, do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want hardware and kernel support from day one. So, so even if your model is theoretically more efficient if somebody goes and runs it and it's two times slower one of the things that, that we've learned is that if, if you're in that situation, it's, it's just gonna be dead on arrival.[00:17:49] Dan Fu: So you want to be designing your architectures one of the key, key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state and, and really focus on that as a key decider of quality. And finally, I think one of the, the, the emerging new, new things for, for this line of work and something that's quite interesting is, What are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to relative to what you might do for a standard transformer? I'll briefly end this section. So I've labeled this slide where we are yesterday because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of the, these efficient alternative models were so AI21 trained this hybrid MoE called Jamba.[00:18:40] Dan Fu: That, that, that seems, that is currently the state of the art for these non transformer architectures. There's this NVIDIA and MIT put out this new diffusion model called SANA recently that one of their key key observations is that you can take a standard diffusion transformer diffusion model, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger much larger images, much, much larger sequences more efficiently.[00:19:07] Dan Fu: And and one thing that I don't think anybody would have called when a few years ago is that one of those gated SSM, gated state space models ended up on the cover of Science because a great group of folks went and trained some DNA models. So that's Michael Poli, Eric Nguyen from, from Stanford and the Arc Institute.[00:19:26] Dan Fu: So it's, we're really at an exciting time in 2024 where these non transformer, post transformer architectures are showing promise across a wide range. Across a wide range of, of modalities, of applications, and, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. Yeah. So, I think one common question that we tend to get asked, right, is what's the difference between [00:20:00] RWKV and state space? So I think one of the key things to really understand, right the difference between the two groups, right, is that we are actually more like an open source, random internet meets academia kind of situation.[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we, we basically look at RNNs and linear attention when Attention is All You Need came out, and then we decided to like, hey there is a quadratic scaling problem. Why don't we try fixing that instead?
So, so, so we end up developing our own branch, but we end up sharing ideas back and forth.[00:20:30] Eugene Cheah: So, and, and we do all this actively in Discord, GitHub, etc. This was so bad for a few years, right, that basically, the average group's H index was so close to zero, right, EleutherAI actually came in and helped us write our first paper. Great, now our H index is now three, apparently. So, so, so, but, but the thing is, like, a lot of these experiments led to results, and, and, essentially, essentially, we we took the same ideas from linear attention, [00:21:00] and we built on it.[00:21:01] Eugene Cheah: So, to take a step back into, like, how does RWKV handle its own attention mechanic and achieve the same goals of, like, O(N) compute, respectively, and in focus of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute, that's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train to even 200 languages to cover all languages in the world. But at the same time, we work on this architecture, to lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of LSTM token flow? Because I think to understand architecture, right, it's probably easier to understand it from the RNN lens.[00:21:46] Eugene Cheah: Because that's where we built on. Whereas, whereas state space kind of like tried to, tried to start anew and took lessons from that. So there's a little bit of divergence there. And, AKA, this is our version of linear attention. So to take a step back [00:22:00] all foundation models, be it transformers or non transformers at a very high level, right?[00:22:05] Eugene Cheah: Pumps in the tokens, I mean text, turns things into embeddings and goes through a lot of layers. Generates a lot of states, whether the QKV cache, or be it RNN states or RWKV states. And outputs an embedding. They are not the same thing. And we just take more layers and more embeddings. And somehow that magically works.[00:22:23] Eugene Cheah: So, if you, if you remember your ancient RNN lessons which we, which we, which we we call best learning these days the general idea is that you have the embedding information flowing all the way up, and when, and you take that information and you flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: So, this is how it generally works. Karpathy is quoted saying that RNNs are actually unreasonably effective. The problem is this is not scalable. To start doing work on the second token, you need to wait for the first token. And then you need to, and likewise for the third token and fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So, so, so, you [00:23:00] can have an H100 and you can't even use 1 percent of it. So, so that's kind of why RNNs didn't really take off in the direction that we wanted, like, billions of parameters when it comes to training. So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for RNN. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then we were like, hey, then no one cared because the loss was crap, but how do we improve that?
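A toy contrast of the bottleneck Eugene is describing: in a classic RNN, token t cannot be processed until token t-1's hidden state exists, while a recurrence whose per-step update is a simple fixed elementwise decay can be rewritten as one cumulative operation over the whole sequence. This is a cartoon of the general point, not RWKV's actual time-mix, and the naive rescaling below would overflow on long sequences; real implementations use chunked or associative scans.

```python
# Cartoon of the sequential-dependency bottleneck, not RWKV's real time-mix.
# sequential_rnn: token t blocks on token t-1's hidden state (GPU mostly idle).
# parallel_decay_scan: if the update is h_t = w * h_{t-1} + x_t with a fixed
# elementwise decay w, the same result is one rescaled cumulative sum, so the
# tokens of a layer no longer wait on each other.
import torch

def sequential_rnn(x, w):                        # x: (seq, d), w: (d,) decay in (0, 1)
    h, hs = torch.zeros_like(x[0]), []
    for t in range(x.shape[0]):                  # strictly one token at a time
        h = w * h + x[t]
        hs.append(h)
    return torch.stack(hs)

def parallel_decay_scan(x, w):
    t = torch.arange(x.shape[0]).unsqueeze(-1)   # (seq, 1)
    scaled = x * w ** (-t)                       # x_j / w^j  (overflows for long seqs; toy only)
    return w ** t * torch.cumsum(scaled, dim=0)  # w^t * sum_j x_j / w^j == sum_j w^(t-j) x_j

x = torch.randn(256, 8, dtype=torch.float64)
w = torch.full((8,), 0.9, dtype=torch.float64)
print(torch.allclose(sequential_rnn(x, w), parallel_decay_scan(x, w)))  # True
```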
And that's essentially where we move forward, because if you see this kind of flow, right, you can actually get your GPU saturated quickly, where it essentially cascades respectively.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. So it's like, once you get your first layer, your token, to be computed finished, you start to cascade your compute all the way until you are, Hey, I'm using 100 percent of the GPU. So we, we worked on it, and we started going along the principle of that as long as we keep this general architecture [00:24:00] where, where we can cascade and, and be highly efficient with our architecture, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, you ask us, if you ask me to explain some things in the paper, right, officially in the paper, I'll say we had this idea and we wrote it this way. The reality is someone came with a code, we tested it, it worked, and then we rationalized later. So, so the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: The idea behind RWKV is that we generally have two major blocks that we do.[00:24:30] Eugene Cheah: We call time mix and channel mix. And time mix generally handles handles long term memory states, where essentially, where essentially where we apply the matrix multiplication and SiLU activation functions into processing an input embedding and an output embedding. I'm oversimplifying it because this, this calculation changed every version and we have, like, version 7 right now.[00:24:50] Eugene Cheah: ChannelMix is similar to Base in the sense that it does shorter term attention, where it just looks at the sister token, or the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers themselves, because, like, we do have three papers on this.[00:25:09] Eugene Cheah: Basically, RWKV: Reinventing RNNs for the Transformer Era; and Eagle and Finch: RWKV with Matrix-Valued States. This is the updated version 5, version 6. And GoldFinch is our, is, is, is, is our hybrid model respectively. We are writing the paper already for V7, which is, which is for RWKV v7, named Goose. Our architectures are named after birds.[00:25:30] Eugene Cheah: And, I'm going to cover as well, QRWKV, and mama100k, and RWKV, and where did that lead to? Great! Because we are all GPU poor and to be clear, like, most of this research is done, like, only on a handful of H100s, which, I had one Google researcher tell me, was, like, his experiment budget for a single researcher.[00:25:48] Eugene Cheah: So, our entire organization has less compute than a single researcher in Google. So we, we, one of the things that we explored was how do we convert transformer models instead? Because [00:26:00] someone already paid that billion dollars, a million dollars on training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And, and to, I believe, Together AI worked on LoLCATs for, for the Llama side of things, and, and we took some ideas from there as well, and we essentially did that for RWKV.[00:26:15] QRWKV6 launch[00:26:15] Eugene Cheah: And that led to QRWKV6, which we just dropped today, a 32B instruct preview model, where we took the Qwen 32B Instruct model, freeze the feedforward layer, remove the QKV attention layer, and replace it with RWKV linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the RWKV channel mix layer, we only have the time mix layer.
But but once we do that, we train the RWKV layer. Important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer, and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly, and, to be honest, to the frustration of the [00:27:00] RWKV MoE team, which ended up releasing the model on the same day, was that, with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original Qwen 32B model. So, in fact, when the first run, right, that completely confused us, it was like, and I was telling Daniel Goldstein, Smirky, who kind of leads most of our research coordination, when you pitched me this idea, you told me at best you'll get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the ARC Challenge score and Winogrande score will shoot up. I don't know what's happening there. But it did. MMLU score dropping, that was expected. Because if you think about it, when we were training all the layers, right, we were essentially, like, Frankensteining this thing, and we did brain damage to the feedforward network layer too with the new RWKV layers.[00:27:47] Eugene Cheah: But, 76%, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because we are already now in the process of converting the 70B. We are now, this is actually extremely compute efficient to test our attention mechanic.[00:28:10] Eugene Cheah: It's like, it becomes a shortcut. We can, we are already planning to do our version 7 and our hybrid architecture for it. Because we don't need to train from scratch. And we get a really good model out of it. And the other thing that is uncomfortable to say is that because we are doing right now on the 70B is that if this scales correctly to 128k context length, I'm not even talking about a million, 128k, the majority of enterprise workload today is just on 70B at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmark matches it, it means we can replace the vast majority of current AI workload, unless you want super long context. And then sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially, [00:29:00] we are excited about this to just push it further.[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think it's going to be exclusive to RWKV. It probably will work for Mamba as well, I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids, or Yeah, like, one of the weirdest things that I wanted to say outright, and I confirmed this with the Black Mamba team and the Jamba team, which because we did the GoldFinch hybrid model, is that none of us understand why a hard hybrid with a state based model, be it [00:29:28] Eugene Cheah: RWKV or state space, and transformer performs better when, than the baseline of both. It's like, it's like when you train one, you expect, and then you replace, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both.
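Schematically, the conversion recipe Eugene outlines (freeze the feedforward blocks while the newly swapped-in layers learn to stand in for attention, then unfreeze everything for a short joint finetune) looks roughly like the sketch below, which is similar in spirit to LoLCATs-style attention transfer. The `swap_attention_for_linear_mixers` helper, the batch keys, the learning rates, and the loss are placeholders, not the team's actual code or schedule.

```python
# Schematic of the two-stage conversion recipe described above. Everything here
# is illustrative: swap_attention_for_linear_mixers is a hypothetical helper,
# and the real recipe, data, and learning rate schedule are the teams' own.
import torch
from torch.optim import AdamW

def convert_in_two_stages(model, swap_attention_for_linear_mixers, train_loader, loss_fn):
    # Replace each softmax-attention block with a freshly initialized
    # linear/RWKV-style mixer, returning handles to the new modules.
    new_mixers = swap_attention_for_linear_mixers(model)

    # Stage 1: freeze everything (feedforward blocks included) except the new
    # mixers, so they alone learn to stand in for the removed attention.
    for p in model.parameters():
        p.requires_grad = False
    for m in new_mixers:
        for p in m.parameters():
            p.requires_grad = True
    opt = AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-4)
    for batch in train_loader:
        loss = loss_fn(model(batch["input_ids"]), batch["labels"])
        loss.backward()
        opt.step()
        opt.zero_grad()

    # Stage 2: unfreeze the whole network and briefly train all layers together,
    # with a smaller learning rate so the pretrained weights are not wrecked.
    for p in model.parameters():
        p.requires_grad = True
    opt = AdamW(model.parameters(), lr=2e-5)
    for batch in train_loader:
        loss = loss_fn(model(batch["input_ids"]), batch["labels"])
        loss.backward()
        opt.step()
        opt.zero_grad()
    return model
```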
And that's like one area of emulation that, like, we only have four experiments, plus four teams, that a lot more needs to be done.[00:29:51] Eugene Cheah: But, but these are things that excite me, essentially, because that is what it's potentially we can move ahead for. Which brings us to what comes next.[00:30:00] What's next[00:30:00] [00:30:00][00:30:00] Dan Fu: So, this part is kind of just some, where we'll talk a little bit about stuff that, that we're excited about. Maybe have some wild speculation on, on what, what's, what's coming next.[00:30:12] Dan Fu: And, of course this is also the part that will be more open to questions. So, a couple things that, that I'm excited about is continued hardware model co design for, for these models. So one of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.[00:30:29] Dan Fu: And one of the things that, that we found frustrating is every time that we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land, like writing these, these new efficient things. And. If we decided to change one thing in PyTorch, like one line of PyTorch code is like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with, with a library like Thunderkitten, so we, we just broke down what are the key principles, what are the key hardware things what are the key, Compute pieces that you get from the hardware. So for example on [00:31:00] H100 everything is really revolves around a warp group matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix, matrix multiply operations. So like multiplying two 64 by 64 matrices, for example. And so if you know that ahead of time when you're designing your model, that probably gives you you know, some information about how you set the state sizes, how you set the update, how you set the update function.[00:31:27] Dan Fu: So with Thunderkittens we basically built a whole library just around this basic idea that all your basic compute primitives should not be a float, but it should be a matrix, and everything should just be matrix compute. And we've been using that to, to try to both re implement some existing architectures, and also start to design code.[00:31:44] Dan Fu: Some new ones that are really designed with this core with a tensor core primitive in mind. Another thing that that we're, that at least I'm excited about is we, over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter there's been a bunch of new next generation models that are coming out.[00:32:04] Dan Fu: So there, there are. So, video generation models that can run real time, that are supported by your mouse and your keyboard, that I'm told if you play with them that, you know, that they only have a few seconds of memory. Can we take that model, can we give it a very long context length so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe, maybe use some of these new models, or some of these new video generation models that came out. So Sora came out I don't know, two days ago now. 
But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the, at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question is like, is RAG relevant? In the case of, like, the future of, like, state based models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have, I'll say I found it was a little bit challenging to do research on it because we had this experience over and over again, where you could have any, an embedding model of any quality, so you could have a really, really bad embedding model, or you could have a really, really [00:34:00] good one, by any measure of good.[00:34:03] Dan Fu: And for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are like, extremely excited about the idea of RWKV or state space potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or you, or as it's previously covered, you need to test the model differently. So, think of it more along the lines of the human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll say. And And we humans are not quadratic transformers. If we did, if let's say we increased our brain size for every second we live, we would have exploded by the time we are 5 years old or something like that. And, and I think, I think basically fundamentally for us, right, be it whether we, regardless of whether RWKV, state space, xLSTM, [00:35:00] etc, our general idea is that instead of that expanding state, that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big of a limit is a question, like, we, like, RWKV is running at 40 megabytes for, for its state. Its future version might run into 400 megabytes. That is like millions of tokens in, if you're talking about mathematically, the maximum possibility.[00:35:29] Eugene Cheah: It's just that I guess we were all more inefficient about it, so maybe we hit 100,000. And that's kind of like the work we are doing, trying to like push it and maximize it. And that's where the models will start differing, because it will choose to forget things, it will choose to remember things.
And that's why I think that there might be some element of RAG, but it may not be the same RAG.[00:35:49] Eugene Cheah: It may be the model learns things, and it's like, hmm, I can't remember that, that article. Let me do a database search, to search. Just like us humans, when we can't remember the article in the company. We do a search on Notion. [00:36:00][00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are, so right now, the one intuition about language models is that all those parameters are around just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or kind of has like the, the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is it'll just be a lot less factual about things that it knows or that it can do.[00:36:32] Dan Fu: But that points to all those weights that we're spending, all that SGD that we're spending to train these models are just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can can look at that that maybe is you know, has has some sort of gradient descent in it, but but would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president so that it doesn't, it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When, when 405B state space model, RAG exists, no one does long context, who's throwing in 2 million token questions, hot takes?[00:37:24] Dan Fu: The, the who's throwing in 2 million token question, I think, is, is a really good question. So I actually, I was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but you know, what, what's the point of doing research if you can't, you know, play both sides.[00:37:40] Dan Fu: But I think one of the, so I think for both of us, the reason that we first got into this was just from the first principled questions of there's this quadratic thing. Clearly intelligence doesn't need to be quadratic. What is going on? Can we understand it better? You know, since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting in a two million context prompt into these models. And, and, you know, if they are, maybe we can go, go, you know, design a better model to do that particular thing. Yeah, what do you think about that? So you've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right? Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV.
I use it a lot.[00:38:41] Eugene Cheah: So, for some people that have used it, and I think, I think that's the, that's might be, like, this is where my opinion starts to differ, because I think the big labs may have a bigger role in this, because, like, even for RWKV, even when we train long contexts, the reason why I say VRAM is a problem is that because when we did the, we need to backprop [00:39:00] against the states, we actually need to maintain the state in between the tokens by the token length.[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training 1 million. Which is the same for transformers, actually, but it just means we don't magically reuse the VRAM consumption in the training time space. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, right, is that I think o1 style reasoning might be actually pushing that direction downwards. In my opinion, this is my partial hot take is that if, let's say you have a super big model, and let's say you have a 70B model that may take double the tokens, but gets the same result.[00:39:51] Eugene Cheah: Strictly speaking, a 70B, and this is even for transformer or non transformer, right? It, it will take less, less resources than that 400B [00:40:00] model, even if it did double the amount of thinking. And if that's the case, and we are still all trying to figure this out, maybe the direction for us is really getting the sub 200B to be as fast and as efficient as possible.[00:40:11] Eugene Cheah: With a very efficient architecture that some folks happen to be working on, to, to just reason it out over larger and larger context.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a, it's a great question. So I think when I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models in theory you could just run it forever. You could just run it forever. And at the very least it won't, it won't like error out on you or crash.[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, so one place where probably the research and architectures ran faster than, than other research is actually the benchmarks for long context. So you turn it on forever. You want to do everything or watch everything.[00:41:16] Dan Fu: What is it that you actually wanted to do? Can we actually build some benchmarks for that? Then measure what's happening. And then ask the question, can the models do it? Is there something else that they need? Yeah, I think that if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been actually get some long context benchmarks out at the same time as we started pushing context length on all these models.[00:41:41] Eugene Cheah: I will also say the use case. So like, I think we both agree that there's no infinite memory and the model needs to be able to learn and decide.
I think what we have observed for, I think this also fits the state space model, is that one of the key advantages of this alternate attention mechanic that is not based on token position is that the model doesn't suddenly become crazy when you go past the [00:42:00] 8k training context length, or a million context length.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory. Some of these things are still somewhat there. That's the whole point of why reading twice works. Things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers where they use this architecture for time series data.[00:42:26] Eugene Cheah: Weather modeling. So, you are not asking what was the weather five days ago. You're asking what's the weather tomorrow based on the infinite length that we, as long as this Earth and the computer will keep running. So, so, and they found that it is like, better than existing, like, transformer or existing architecture in modeling this weather data.[00:42:47] Eugene Cheah: Controlled for the param size and stuff. I'm quite sure there are people with larger models. So, so there are things that, that in this case, right, there are future applications if your question is just what's next and not what's 10 years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe
Take a trip down memory lane with this special episode that recounts the major stories shaping 2024. From the SEC's historic approval of a Bitcoin ETF and Ethereum's Dencun upgrade to Nvidia's staggering climb to a $3 trillion valuation. We explore SBF's day in court and the repositioning of crypto with the Bitcoin Act. Further, we recount the year's key records, spanning from the $1.1 billion Bridge acquisition to Groq's big raise and Willow's quantum leap. Join us as we bring you the ABCD advancements and global financial shifts that defined 2024. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
In this episode, we explore the transformative power of mindfulness and meditation to support drinking less alcohol with Simonette Vaja, a trauma-informed psychologist and mindfulness teacher. Simonette introduces her ABCD model—Attention, Breathing, Compassion, and Detox—a practical framework for fostering emotional resilience and supporting an alcohol-free lifestyle. From enhancing decision-making to cultivating inner peace, her approach offers profound insights into how mindfulness can be a cornerstone in the journey to sobriety. Simonette introduces us to techniques like vagus nerve breathing, which activates the body's relaxation response, and guided meditations to nurture self-compassion and overcome challenges such as perfectionism and anxiety. By addressing not only physical detox but also mental and emotional renewal, her approach provides a holistic path toward personal growth and emotional healing. This inspiring episode highlights the profound impact of practices like Loving-Kindness Meditation, encouraging listeners to embrace self-love and deepen their emotional connections. Whether you're navigating early recovery or seeking greater balance in life, Simonette's insights offer a roadmap to a mindful, alcohol-free life. Join us to discover how mindfulness can transform your journey toward freedom and well-being. LEARN MORE ABOUT SIMONETTE VAJA'S WORKWeb: https://www.artistryofwellbeing.com/Insight Timer: https://insighttimer.com/simonettevaja MEGMegan Webb: https://glassfulfilled.com.auInstagram: @glassfulfilledUnwined Bookclub: https://www.alcoholfreedom.com.au/unwinedbookclubFacebook UpsideAF: https://www.facebook.com/groups/1168716054214678 Small group coaching: https://www.elizaparkinson.com/groupcoaching BELLAWeb: https://isabellaferguson.com.auInsta: @alcoholandstresswithisabellaBi-Yearly 6-Week Small Group Challenges: Learn more: https://www.isabellaferguson.com.au/feb-2025-challengeFree Do I Have A Drinking Problem 3 x Video Series: https://resources.isabellaferguson.com.au/offers/JTFFgjJL/checkoutFree HOW DO I STOP DRINKING SO MUCH Masterclass: https://resources.isabellaferguson.com.au/offers/7fvkb3FF/checkoutOnline Alcohol Self-...
With so many families already struggling to cover monthly expenses, it can be even more complicated to try and stretch the paycheck to ensure kids get some special presents for the holidays. Action for Boston Community Development, a non-profit helping people in the Greater Boston area connect to resources to get them out of poverty, says it's received thousands of requests for help with toys, but they have been struggling to meet demand. They are also collecting donations for their Winter Fund to keep families warm with winter clothing, heating assistance, and home winterization. President and CEO Sharon Scott-Chandler talks with Nichole about their efforts and shares important information about how you can help, or access assistance if necessary.
In this episode, we discuss Google's breakthrough quantum chip, Willow, and its potential implications for scalable quantum computing. We also examine Riot Platforms' $525M Bitcoin strategy, Microsoft's rejection of a Bitcoin reserve proposal, and Tether's approval in Abu Dhabi's financial ecosystem. Finally, don't miss our Chart of the Week, which analyzes the CEX-DEX volume dynamics. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital/. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
On the 100th episode of What is a Good Life? podcast, I am delighted to introduce our guest, Cormac Russell. Cormac is a social explorer, an author and a much sought-after speaker. He is the Founding Director of Nurture Development and a member of the Asset-Based Community Development (ABCD) Institute, at DePaul University, Chicago. Over the last 25 years, Cormac's work has demonstrated an enduring impact in 35 countries around the world. He has trained communities, agencies, NGOs and governments in ABCD and other community-based approaches in Africa, Asia, Australia/Oceania, Europe and North America.His most recent books are The Connected Community- Discovering the Health, Wealth, and Power of Neighborhoods (Coauthor John McKnight) and Rekindling Democracy – A Professional's Guide to Working in Citizen Space. Cormac's TEDx talk is really beautiful and can be viewed here. In this glorious conversation, Cormac shares his journey into community development, marked by a focus on being over doing. We discuss the thresholds and limits of self-help, the problem with trying to fix others, and ways of making sense of the world. Cormac suggests that the world is not getting worse, but becoming clearer, and emphasises how to channel our emotions into meaningful action.I found Cormac to be deeply insightful and rooted in his being. This conversation offers perspectives to help make sense of the world while encouraging us to reflect on our roles—not just in relation to humans, but as members of an interconnected, living, breathing Earth.For further content and information check out the following:- Cormac's LinkedIn: https://www.linkedin.com/in/cormacrussell/ - Nurture Development website: https://www.nurturedevelopment.org/who-we-are/cormac-russell/- His most recent book: https://wipfandstock.com/9781725253636/rekindling-democracy/ - For the What is a Good Life? podcast's YouTube page: https://www.youtube.com/@whatisagoodlife/videos- My newsletter: https://www.whatisagood.life/- My LinkedIn: https://www.linkedin.com/in/mark-mccartney-14b0161b4/Contact me at mark@whatisagood.life if you'd like to explore your own lines of self-inquiry through 1-on-1 coaching, take part in my weekly free silent conversations, discuss experiences I create to stimulate greater trust, communication, and connection, amongst your leadership teams, or you simply want to get in touch.00:00 Introduction03:45 Exploring vocations08:25 Focus on being rather than doing15:18 Being both rooted and free18:35 Attraction to people helping others22:25 What is help?26:45 Thresholds within self-help and helping31:32 Hidden persuaders37:06 The path of making sense in the world42:15 Doing life together in mutual solidarity47:35 The problem with trying to help or fix52:45 Getting a sense of our intentions57:45 The world is getting clearer not worse1:06:45 What is a good life for Cormac?
This episode explores the transformative forces shaping global technology and geopolitics. We discuss Bitcoin's historic surge to $100K, driven by U.S. demand and optimism around crypto-friendly policies, alongside MicroStrategy's continued BTC accumulation. Shifting to AI, we examine the rise of Chinese open-source models, their remarkable performance in complex tasks like coding and reasoning, and the controversies surrounding censorship and geopolitical implications. Additionally, we highlight the Baltic undersea cable suspected sabotage, a stark reminder of hybrid warfare risks amid rising global tensions. Remember to Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
In this episode, we're joined by researcher and podcast host Aoife O'Brien to explore imposter syndrome - one of the most common challenges facing leaders at all levels. Drawing on her extensive research, Aoife reveals five distinct imposter identities and shares practical strategies for recognizing and managing imposter syndrome. Whether you're experiencing it now or supporting team members who might be, this episode provides evidence-based insights and actionable advice.Key points from this episodeWhy imposter syndrome differs from general self-doubt and how to spot the signsThe surprising gender differences in how imposter syndrome manifests throughout careersUnderstanding the five imposter identitiesHow leaders can create an environment where it's safe to discuss these feelingsPractical steps for managing imposter syndrome using the ABCD methodology00:00 Introduction and Welcome01:58 Aoife's Background and Journey into Researching Imposter Syndrome05:37 Defining Imposter Syndrome vs Self-Doubt08:45 Common Triggers and How It Shows Up13:24 The Identity Gap in Leadership Roles19:23 Gender Differences in Imposter Syndrome Experience24:28 The Five Imposter Identities Explained36:52 Practical Steps for Managing Imposter Syndrome41:15 The Importance of Speaking Up Despite Fears43:28 Where to Find More Resources and Connect with AoifeUseful LinksConnect with Aoife on LinkedIn https://www.linkedin.com/in/aoifemobrienListen to the (excellent) Happier at Work podcast Take Aoife's Imposter Identity AssessmentInterested in working with us? Get in touch about career or leadership development, outplacement workshops or recruitment support via the Catalyst Careers website Mentioned in this episode:Catalyst Career Club for £50k+ Leaders & Managers Moving up the career ladder needs an inside edge - strategies that allow you to unlock your full potential and position yourself as a true leadership talent. And that's exactly what the Catalyst Career Club for 50k+ Leaders provides. No fluff, no corporate jargon. Just a down to earth, purposeful injection of oomph for your career from Pamela & Jacqui Use the code PODCAST to get your first month for £1 https://www.pamelalangan.com/catalystcareerclub
This week, we discuss BTC's continuing march to $100k and the impact of leverage on price. We also take a deep dive into what has made MicroStrategy one of the best-performing stocks since 2020. In addition, we chat about the latest $50 billion valuation of Elon Musk's xAI, achieved in just 16 months. Lastly, we cover Mastercard and JPMorgan's integration of payment networks using blockchain and spotlight the top ten crypto movers post-election in the chart of the week. Remember To Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
In this ABCD Roundup, we explore the crypto industry's optimism following the incoming U.S. administration and Red Hat's acquisition of Neural Magic for hybrid AI optimization. Lastly, we analyze the BITCOIN Act proposing a U.S. Bitcoin reserve, highlighting its potential economic impact and the financial complexities surrounding its implementation. Remember To Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
In this week's ABCD Roundup, we unpack the 2024 U.S. election's impact on crypto and AI, with a strategic Bitcoin reserve, a crypto council, and U.S. mining investments all being proposed. We also shed light on how big tech is beginning to respond. In addition, we delve into Google's newest AI center in Saudi Arabia, Coatue Management's $1 billion fund for AI, and Anthropic's partnership with Palantir and AWS to use Claude AI in secure government environments, underscoring AI's potential role in national security. Tune in as we explore the potential implications of these advancements on technology, security, and the economy. Remember To Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
This week's ABCD Roundup covers the largest crypto acquisition in history, with Stripe reportedly set to acquire Bridge for $1.1 billion; Meta and Microsoft's advancing AI infrastructure objectives amid investor caution; KKR and Energy Capital Partners funding data centers to meet AI's energy demands; and Abu Dhabi firms launching a tokenized U.S. Treasuries fund. Further, we discuss Galaxy Digital's Q3 crypto report highlighting venture investment. Remember To Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
In this ABCD roundup, we discuss the expanded partnership between OpenAI and Bain & Co., which aims to tailor AI solutions for industries like retail and life sciences. Next, we cover Microsoft's UAE AI National Skills Initiative, a large-scale effort to upskill 100,000 government employees, supporting the UAE's ambitions to become a global AI hub. On the blockchain front, we dive into Ethereum's future with co-founder Vitalik Buterin's plan to hit 100,000 TPS using Layer 2 rollups, alongside the upcoming "The Verge" upgrade. This upgrade aims to make Ethereum more secure and accessible by enabling nodes to run on smaller devices, such as phones, through "stateless verification." Finally, we examine IBM's Q3 2024 results, highlighting its $15 billion in revenue and strong performance in generative AI and software growth. Remember To Stay Current! To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital. To speak to a team member or sign up for additional content, please email mcdigital@morgancreekcap.com
Limitless Possibilities - Overcoming Massive Challenges To Live Your Best Life
In this episode, Anamika sits down with Sarika, a first-generation Indian-American doctor, to discuss overcoming challenges, balancing cultural identities, and navigating life with an autoimmune condition. Sarika shares her journey from fulfilling her parents' dream of becoming a doctor to finding her own path in the pharmaceutical industry. She opens up about dealing with chronic pain, embracing resilience, and rejecting stereotypes that often come with being part of two worlds. Her story is a testament to the strength it takes to overcome obstacles and build a life of purpose and balance. All views expressed in this episode are either the host's or the guest's own; they do not represent any company or firm.
Send me a message. The inclusion of Father Radcliffe, O.P., on the list of new cardinals is the most notable and controversial. Radcliffe was Master of the Dominican Order from 1992 to 2001, but his name is perhaps best known for his prominent and persistent promotion of ABCD ideology, in contradiction with Catholic teaching. Click here to watch the video of the program. Support the show.
It's the Stro Show this week on Knuckleheads! Stromile Swift joins Q and D as they take a trip down memory lane. The guys look back at their star-studded Nike and Adidas camps growing up, the 2000 NBA Draft, and Stromile's worldwide pro career that started in Vancouver and finished in China. Don't miss!
50 Cent's Humor and Harmony Weekend, growing up in Louisiana (3:10)
ABCD camp, Boo Williams, McDonald's All-American game (11:50)
Committing to LSU, John Thompson recruitment (36:20)
Draft night, rookie year in Vancouver, his signature dunk celebration (44:50)
2001 dunk contest, Grizzlies move to Memphis, Dikembe Mutombo (1:01:35)
'06 Grizzlies with Kyle Lowry, Rudy Gay, Damon Stoudamire and Pau Gasol (1:13:35)
About Our Hosts: NBA veterans Quentin Richardson and Darius Miles are lifelong friends and bona fide truth-tellers. Listen as they invite special guests, high-profile athletes, musicians and entertainers to get brutally honest about everything from current events to untold stories from the golden era of sports and culture. Named for the on-court celebration they made wildly popular, this unfiltered, hilarious and surprising podcast is like playing NBA 2K with no fouls.
Other places to find Knuckleheads: Subscribe on YouTube, follow on Instagram, follow on Facebook
Help sustain this apostolate. Send a WhatsApp message to +52 33 2813 6085 saying "Quiero apoyar" ("I want to support"), and we will tell you how you can get your contribution to us. To receive notice of new videos or to join the Red de Familias Conservando la Fe, send a WhatsApp message to +52 33 2813 6085. Help us grow on YouTube by clicking the "subscribe" button (it's free, it costs you nothing); also give this video a like by clicking the icon