DJ Dave Austin talks about his journey through the highs and lows of the nightlife scene to becoming a key figure in Sydney’s DJ and event circuit. He shares insights on the evolution of DJing, club culture, iconic venues like DCM, and the impact of social media and COVID-19. See omnystudio.com/listener for privacy information.
Where are home centers headed? The "next battleground" in an era of declining new-home construction. Cainz, DCM, Kohnan, Komeri, Arcland, Nafco, and others are the major home-center chains, but they may be unfamiliar to people living in central Tokyo. Home centers are convenient stores that stock everything for the home, but because their locations are concentrated along roadsides in rural and suburban areas, they are rarely seen around Tokyo's central wards.
In this episode, Mickey D shares his journey from a young DJ starting at 14 to becoming a prominent figure in the nightlife scene, particularly at the iconic DCM nightclub. Mickey discusses the importance of adapting to changing music trends and the emotional impact of significant moments in his career, including the last days of DCM. See omnystudio.com/listener for privacy information.
At the tail end of March, Digital Cinema Media (DCM), the UK's largest cinema ad sales house, hosted its annual upfronts in the Leicester Square Odeon. It was a way to celebrate cinema's strong start to the year and look ahead to the 2025 and 2026 film slates, but also an opportunity for brands to consider whether to position the channel more prominently on their AV plans.

Among the presentations, new research from DCM found that cinema is well placed to drive price premiums: consumers were willing to pay on average 12% more for a brand that advertised in cinemas than if it had advertised on other media channels. It's a finding that could prove useful in an era marked by continued macroeconomic uncertainty and brands' desire to retain pricing power.

DCM CEO Karen Stacey joined host Jack Benjamin to discuss the research and unpack what has driven the sales house's 33% revenue growth in Q1. Stacey also explored where cinema belongs on media plans today and how the channel can grow its share of adspend.

Highlights:
1:30: Stacey's career path, advice for leaders and priorities for Wacl
14:59: DCM's strong start to 2025 – what's behind the growth in revenue and cinema admissions?
24:52: The opportunity for cinema to embrace programmatic
30:45: Will box office and admissions ever get back to pre-Covid levels?
34:59: How cinema drives strong price premiums

Related articles:
Cinema drives up price premium, research suggests
Bridget Jones leads 20% growth in February box office
Are all ‘views' created equal? With TikTok, DCM, Total Media and Mindlab

Thanks to our production partners Trisonic for editing this episode. Discover how Trisonic can elevate your brand and expand your business by connecting with your ideal audience.

Visit The Media Leader for the most authoritative news analysis and comment on what's happening in commercial media.
LinkedIn: The Media Leader
YouTube: The Media Leader
Welcome to another episode of The Coral Capital Podcast, a show where we bring on guests from tech, business, politics, and culture to talk about all things Japan.

In this episode, we're joined by Gen Isayama, General Partner & CEO of WiL, a venture capital firm that's redefining how innovation happens in Japan. Unlike traditional VCs, WiL doesn't just invest: it educates, incubates, and leverages the power of its corporate-based LP network to accelerate the growth of its companies as they expand globally, with a particular focus on Japan.

Before launching WiL in 2013, Gen spent a decade investing at DCM. But when he looked at Japan, he saw a broken system, where startups struggled to scale, corporates hesitated to embrace change, and innovation lagged behind. Instead of copying the Silicon Valley model, he built something new: a VC firm designed to unlock Japan's vast corporate resources (capital, talent, and technology) by pushing enterprises toward entrepreneurship.

WiL has since backed startups in Japan like Mercari, Raksul, and Retty, as well as Wise, Asana, and Canva in the US, with a team operating across Tokyo and Silicon Valley.

Below are highlights from this episode:

WiL operates on three pillars:
- Business Creation: Helping large Japanese corporations spin out or incubate startups internally.
- Education: Training corporates to adopt a startup mindset and providing connectivity between startups and the corporate ecosystem.
- Investment: Backing startups at various stages, with a focus on Japanese startups and global startups expanding into the Japan market.

Ten years ago, Japanese corporates were hesitant to engage with startups, but today they have become increasingly open to partnering with and acquiring them. Large corporations compete with one another in adopting new technologies, creating a domino effect in innovation. The success of a corporate spinout depends on its leadership, not just the technology.

WiL leverages its U.S. investments for faster scaling and greater liquidity, while Japan is still evolving toward a more liquid market with larger, multi-billion-dollar exits. Instead of competing for early-stage deals, WiL co-invests with leading global firms at the mid- to growth stage, offering support for expansion into Japan. WiL conducts market testing through corporate pitch events for global startups, identifying strong local demand before committing to an investment.

Emerging opportunities in Japan: The aging society is driving demand for healthtech and elderly care, while AI integration in manufacturing and robotics presents a major growth area. Additionally, Japan's rich IP assets (anime, manga, food) offer untapped potential for global monetization beyond traditional licensing.

For founders building Japan's next legendary companies, reach out to us here: https://coralcap.co/contact-startups/
If you're interested in joining a Coral startup, join our talent network here: https://coralcap.co/coral-careers/
******Support the channel******
Patreon: https://www.patreon.com/thedissenter
PayPal: paypal.me/thedissenter
PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy
PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l
PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz
PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m
PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao

******Follow me on******
Website: https://www.thedissenter.net/
The Dissenter Goodreads list: https://shorturl.at/7BMoB
Facebook: https://www.facebook.com/thedissenteryt/
Twitter: https://x.com/TheDissenterYT

This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/

Dr. Karl Friston is Professor of Imaging Neuroscience and Wellcome Principal Research Fellow of Imaging Neuroscience at University College London. Dr. Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) and dynamic causal modelling (DCM). His main contribution to theoretical neurobiology is a free-energy principle for action and perception. He is the author of several books, including Active Inference: The Free Energy Principle in Mind, Brain, and Behavior.

In this episode, we first talk about a Free Energy perspective of culture. We discuss 4E cognition, and AI. We talk about how the Free Energy principle relates to quantum mechanics and general relativity. We discuss whether Free Energy can be a theory of everything.
Finally, we talk about a new understanding of death.

A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: PER HELGE LARSEN, JERRY MULLER, BERNARDO SEIXAS, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, FILIP FORS CONNOLLY, ROBERT WINDHAGER, RUI INACIO, ZOOP, MARCO NEVES, COLIN HOLBROOK, PHIL KAVANAGH, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, FERGAL CUSSEN, HAL HERZOG, NUNO MACHADO, JONATHAN LEIBRANT, JOÃO LINHARES, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, ROMAIN ROCH, DIEGO LONDOÑO CORREA, YANICK PUNTER, CHARLOTTE BLEASE, NICOLE BARBARO, ADAM HUNT, PAWEL OSTASZEWSKI, NELLEKE BAK, GUY MADISON, GARY G HELLMANN, SAIMA AFZAL, ADRIAN JAEGGI, PAULO TOLENTINO, JOÃO BARBOSA, JULIAN PRICE, EDWARD HALL, HEDIN BRØNNER, DOUGLAS FRY, FRANCA BORTOLOTTI, GABRIEL PONS CORTÈS, URSULA LITZCKE, SCOTT, ZACHARY FISH, TIM DUFFY, SUNNY SMITH, JON WISMAN, WILLIAM BUCKNER, PAUL-GEORGE ARNAUD, LUKE GLOWACKI, GEORGIOS THEOPHANOUS, CHRIS WILLIAMSON, PETER WOLOSZYN, DAVID WILLIAMS, DIOGO COSTA, ALEX CHAU, AMAURI MARTÍNEZ, CORALIE CHEVALLIER, BANGALORE ATHEISTS, LARRY D. LEE JR., OLD HERRINGBONE, MICHAEL BAILEY, DAN SPERBER, ROBERT GRESSIS, IGOR N, JEFF MCMAHAN, JAKE ZUEHL, BARNABAS RADICS, MARK CAMPBELL, TOMAS DAUBNER, LUKE NISSEN, KIMBERLY JOHNSON, JESSICA NOWICKI, LINDA BRANDIN, NIKLAS CARLSSON, GEORGE CHORIATIS, VALENTIN STEINMANN, PER KRAULIS, ALEXANDER HUBBARD, BR, MASOUD ALIMOHAMMADI, JONAS HERTNER, URSULA GOODENOUGH, DAVID PINSOF, SEAN NELSON, MIKE LAVIGNE, JOS KNECHT, ERIK ENGMAN, LUCY, MANVIR SINGH, PETRA WEIMANN, CAROLA FEEST, STARRY, MAURO JÚNIOR, 航 豊川, TONY BARRETT, BENJAMIN GELBART, NIKOLAI VISHNEVSKY, STEVEN GANGESTAD, AND TED FARRIS!

A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, TOM VANEGDOM, BERNARD HUGUENEY, CURTIS DIXON, BENEDIKT MUELLER, THOMAS TRUMBLE, KATHRINE AND PATRICK TOBIN, JONCARLO MONTENEGRO, AL NICK ORTIZ, NICK GOLDEN, AND CHRISTINE GLASS!

AND TO MY EXECUTIVE PRODUCERS, MATTHEW LAVENDER, SERGIU CODREANU, BOGDAN KANIVETS, ROSEY, AND GREGORY HASTINGS!
Send us a text

Welcome to Alternative Dog Moms, a podcast about what's happening in the fresh food community and the pet industry. Kimberly Gauthier is the blogger behind Keep the Tail Wagging, and Erin Scott hosts the Believe in Dog podcast.

CHAPTERS:
How Dr. Jake Ryave got involved with the Dog Aging Project (0:55)
What is the Dog Aging Project and its goals? (3:20)
Surprising data trends in the Dog Aging Project (6:05)
Is it true that dogs are living shorter lives than they used to? (13:00)
Why do larger dogs have shorter life spans? (17:45)
How environmental factors affect longevity (19:37)
What the data shows about the health of purebred dogs versus mutts (26:02)
TRIAD, the rapamycin longevity drug trial being conducted by the Dog Aging Project (27:43)
How scientists determine how many dogs to have in a study, and how to interpret those numbers and translate them to the larger dog population (33:55)
Is the Dog Aging Project looking at the connection between diet and DCM prevalence? (45:51)
The effect of corporate and pharmaceutical industry interests in scientific research (48:35)
How any and all of us can access the Dog Aging Project's data, and how we name our dogs (51:54)
How to sign up your dog to be a part of the Dog Aging Project (55:52)

LINKS DISCUSSED:
Join the Dog Aging Project with your dog (https://redcap.dogagingproject.org/surveys/?s=DYYDHK8HAP)
Learn about TRIAD, the rapamycin study (https://dogagingproject.org/triad)
Studies published by the Dog Aging Project (https://dogagingproject.org/publications)
Access the Dog Aging Project's data (https://dogagingproject.org/data-access)
DAP's study about environmental contaminants (https://www.frontiersin.org/journals/veterinary-science/articles/10.3389/fvets.2024.1394061/full)

OUR BLOG/PODCASTS:
Kimberly: Keep the Tail Wagging, KeepTheTailWagging.com
Erin Scott: Believe in Dog podcast, BelieveInDogPodcast.com

FACEBOOK:
Keep the Tail Wagging, Facebook.com/KeepTheTailWagging
Believe in Dog Podcast, Facebook.com/BelieveInDogPodcast

INSTAGRAM:
Keep the Tail Wagging, Instagram.com/RawFeederLife
Believe in Dog Podcast, Instagram.com/Erin_The_Dog_Mom

Thanks for listening to our podcast. You can learn more about Erin Scott's first podcast at BelieveInDogPodcast.com. And you can learn more about raw feeding, raising dogs naturally, and Kimberly's dogs at KeepTheTailWagging.com. And don't forget to subscribe to The Alternative Dog Moms.
In this episode of The Pet Food Science Podcast Show, PhD candidate Pawan Singh from the University of Guelph shares insights into how pulse ingredients are shaping pet food formulations. She covers their nutritional advantages, tackles concerns about their link to canine dilated cardiomyopathy (DCM), and highlights innovative research aimed at ensuring pet food safety while promoting sustainable protein sources. Stay up to date on the latest in pet nutrition by tuning in on all major platforms!

"Pulse ingredients are low in fat, high in fiber, and contribute to satiety, weight loss, and blood sugar regulation in pets."

Meet the guest: Pawan Singh, a PhD candidate at the University of Guelph, focuses on protein quality in pet food, exploring amino acid ratios in companion animal diets. With a Master's in Animal Nutrition, Pawan investigated pulse-inclusive diets and their impact on canine cardiac health. Her expertise highlights the intersection of pet health and sustainable nutrition.

What will you learn:
(00:00) Highlight
(01:30) Introduction
(02:26) Pulses in pet food
(04:22) Grain-free diet
(08:11) DCM concerns
(17:05) Processing & digestibility
(20:40) Future research
(24:05) Final Questions

The Pet Food Science Podcast Show is trusted and supported by innovative companies like:
- Trouw Nutrition
- Kemin
- Biorigin
- ICC
- Scoular
- Corbion
- ProAmpac
- EW Nutrition
- Alura
- Symrise
DEBT CAPITAL MARKETS
Moderator: Mr. Nikos Fragos, Partner – Karatzas & Partners
Panelists:
Mr. Riccardo Abbona, Managing Director, Head of DCM & Risk Solutions, Greece & Italy – Barclays
Mr. Vasilis Tsaitas, Group Chief Financial Officer – HELLENiQ ENERGY Holdings
Mr. Vassilis Kotsiras, Corporate Treasurer – National Bank of Greece
Mr. Morven Jones, Managing Director; Head of EMEA Debt Capital Markets Origination – Nomura
Mr. Dimitrios Tsakonas, Director General – Public Debt Management Agency
26th Annual Capital Link Invest in Greece Forum: Greece – Speeding Ahead Post Investment Upgrade
Metropolitan Club, NYC | December 9, 2024
This morning we are joined by Peter Sozi, leader of Divine Care Ministries, one of our missions partners in Uganda. In this sermon he shares with us the story of how DCM was started. He then explains how the Gospel is the key to transformation, not only in Uganda, but here in Nashville and across the world.
According to Southern Daily, in the early hours of November 19, Bestore (良品铺子) published a clarification on Weibo stating that official investigation results show the ingredients detected in its osmanthus nut lotus-root starch and hot-and-sour noodle products match their ingredient lists. Bestore said it has collected and preserved the relevant evidence, will report the case to the public security authorities, and will sue the "counterfeit-busting" influencer to pursue legal liability for harming the company's business reputation and product goodwill. 36Kr has learned that on November 19, CATL held launch ceremonies in Guiyang and Yibin for its first trial rail shipments of power lithium batteries; according to the company, achieving rail transport of power lithium batteries for the first time is significant for building a high-efficiency, low-cost transport system. 36Kr has learned via the Aiqicha app that Xtep Group Co., Ltd. recently completed a business registration change, raising its registered capital from about RMB 2.93 billion to about RMB 3.27 billion, an increase of roughly 12%. 36Kr has exclusively learned that DCM Ventures is spinning out its angel and seed fund team into a new independent investment firm led by managing partner Osuke Honda. The spin-out reportedly involves only the newly raised fourth Discovery Fund of about $100 million and does not affect DCM's main funds under management or the first three Discovery Funds. According to Jiemian News, on November 19, people familiar with the matter said Sony has been studying a possible acquisition proposal for the Japanese content giant Kadokawa, and the two companies have held talks; discussions are still ongoing, and Sony may decide not to make a formal offer. According to Cailian Press, U.S. tech giant Microsoft has opened its first research base in Japan, in Tokyo, holding an opening ceremony on November 18. The base will advance research in fields such as artificial intelligence and robotics engineering, working to address social problems Japan faces, including aging and labor shortages.
In this episode, Anthony chats with Filip Tomasik, who opens up about his journey from a non-target university to landing roles in investment banking and private equity. Filip shares his take on acing the application process, the real impact of internships, and the strategies that helped him secure full-time roles. They dive into the demands of debt capital markets, the unique challenges of private equity, and the contrasts in work-life balance between the two. This conversation is full of practical tips and insights, whether you're applying for an internship, navigating your first or second year as an analyst, or aiming to transition from the sell-side to the buy-side.

(01:58) Introduction and Background
(05:12) Coming from a Non-Target School
(07:39) Strategies for Success in Finance Applications
(16:58) Transitioning from Intern to Analyst
(22:53) Tips for Making Your Work Have an Impact
(30:00) Understanding Debt Capital Markets
(32:00) Pros and cons of working in DCM
(37:27) Transitioning from Sell-Side to Buy-Side
(39:30) Investment Banking vs Private Equity
(49:17) Setting personal goals

Hosted on Acast. See acast.com/privacy for more information.
Life & Listings: Balancing Real Estate, Scaling Your Future w/ Jennifer Staats
In this episode of Life and Listings, Jennifer Staats sits down with Nikki Pais, an expert in real estate lead generation and conversion. Nikki shares actionable insights on how real estate professionals can maximize their databases, find hidden opportunities, and make meaningful connections with leads. Listeners will learn how to use CRM tools effectively, engage leads with personalized follow-ups, and apply low-pressure conversational techniques to boost conversion rates. This episode is packed with practical tips and fresh strategies for brokers, agents, and team leaders looking to enhance their sales approach. Real estate professionals will gain valuable strategies to engage potential clients and make the most of their CRM data. Nikki and Jennifer break down the essentials of lead engagement, accountability practices, and the importance of personal touches that make clients feel valued.

"By taking just a moment to be human—to ask, 'How are you?'—you open the door to understanding what your lead actually needs. That one little question can turn a conversation from a sales pitch into a genuine connection."

Five Key Takeaways:
1. Leverage CRM Filters: Utilize "last visit" and "last communication" filters to identify active leads and avoid missing out on engaged prospects.
2. Prioritize Engaged Leads: Focus on responding to leads who have reached out rather than constantly chasing new ones.
3. Ask Meaningful Questions: Start conversations with open-ended, human-centered questions to build rapport and gain trust.
4. Use Personal Touches in Follow-Up: Integrate details like clients' pets into follow-ups to create memorable interactions.
5. Implement Tracking Pixels: Track website visits to capture critical engagement data and develop timely, targeted follow-up strategies.

About Nikki Pais: Nikki Pais is a powerhouse in real estate, with a strong background in lead generation and inside sales. She coaches agents, ISAs, and CEOs nationwide with DCM, delivering strategies to eliminate the gap between obtaining a lead and getting it to the closing table. Nikki's expertise and passion for language and training make her a trusted advisor and a dynamic force in the industry.

Connect with Nikki:
Facebook: https://www.facebook.com/profile.php?id=100000771644238

Connect with Jennifer Staats:
Website: staatssolutions.com
Staats Solution Instagram: https://www.instagram.com/staatssolutions/
Jennifer Staats Instagram: https://www.instagram.com/jennifertherealtor
LinkedIn: https://www.linkedin.com/company/staatssolutions/
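The "last visit"/"last communication" filtering idea from the takeaways can be sketched against a generic contact export. This is a minimal illustration only: the dictionary field names and the 7-day/14-day windows are invented for the example, not any specific CRM's API.

```python
from datetime import date, timedelta

def engaged_but_untouched(contacts, today, visit_window=7, contact_gap=14):
    """Return leads who visited the site recently ("last visit" within
    visit_window days) but haven't been contacted for at least contact_gap days."""
    recent_cutoff = today - timedelta(days=visit_window)
    stale_cutoff = today - timedelta(days=contact_gap)
    return [c for c in contacts
            if c["last_visit"] >= recent_cutoff
            and c["last_communication"] <= stale_cutoff]

today = date(2024, 11, 1)
contacts = [
    # Ava browsed listings two days ago but hasn't heard from us in a month
    {"name": "Ava", "last_visit": date(2024, 10, 30), "last_communication": date(2024, 10, 1)},
    # Ben hasn't visited in two months, and we emailed him last week
    {"name": "Ben", "last_visit": date(2024, 9, 1), "last_communication": date(2024, 10, 28)},
]
print([c["name"] for c in engaged_but_untouched(contacts, today)])  # ['Ava']
```

The point of the filter is the same one Nikki makes: surface the already-engaged lead (Ava) instead of chasing the cold one.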
The following question refers to Section 7.4 of the 2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure. The question is asked by the Director of the CardioNerds Internship, Dr. Akiva Rosenzveig, answered first by Vanderbilt AHFT cardiology fellow Dr. Jenna Skowronski, and then by expert faculty Dr. Clyde Yancy. Dr. Yancy is Professor of Medicine and Medical Social Sciences, Chief of Cardiology, and Vice Dean for Diversity and Inclusion at Northwestern University, and a member of the ACC/AHA Joint Committee on Clinical Practice Guidelines.

The Decipher the Guidelines: 2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure series was developed by the CardioNerds and created in collaboration with the American Heart Association and the Heart Failure Society of America. It was created by 30 trainees spanning college through advanced fellowship under the leadership of CardioNerds cofounders Dr. Amit Goyal and Dr. Dan Ambinder, with mentorship from Dr. Anu Lala, Dr. Robert Mentz, and Dr. Nancy Sweitzer. We thank Dr. Judy Bezanson and Dr. Elliott Antman for tremendous guidance. Enjoy this Circulation 2022 Paths to Discovery article to learn about the CardioNerds story, mission, and values.

American Heart Association's Scientific Sessions 2024
As heard in this episode, the American Heart Association's Scientific Sessions 2024 is coming up November 16-18 in Chicago, Illinois at McCormick Place Convention Center. Come a day early for Pre-Sessions Symposia, Early Career content, QCOR programming and the International Symposium on November 15. It's a special year you won't want to miss for the premier event for advancements in cardiovascular science and medicine as AHA celebrates its 100th birthday. Registration is now open; secure your spot here! When registering, use code NERDS and if you're among the first 20 to sign up, you'll receive a free 1-year AHA Professional Membership!

Question #37
Mr. S is an 80-year-old man with a history of hypertension, type II diabetes mellitus, and hypothyroidism who had an anterior myocardial infarction (MI) treated with a drug-eluting stent to the left anterior descending artery (LAD) 45 days ago. His course was complicated by a new LVEF reduction to 30%, and left bundle branch block (LBBB) with a QRS duration of 152 ms in normal sinus rhythm. He reports he is feeling well and is able to enjoy gardening without symptoms, though he experiences dyspnea while walking to his bedroom on the second floor of his house. Repeat TTE shows persistent LVEF of 30% despite initiation of goal-directed medical therapy (GDMT). What is the best next step in his management?

A. Monitor for LVEF improvement for a total of 60 days prior to further intervention
B. Implantation of a dual-chamber ICD
C. Implantation of a CRT-D
D. Continue current management, as device implantation is contraindicated given his advanced age

Answer #37
Explanation: Choice C is correct. Implantation of a CRT-D is the best next step. In patients with nonischemic DCM or ischemic heart disease at least 40 days post-MI with LVEF ≤35% and NYHA class II or III symptoms on chronic GDMT, who have a reasonable expectation of meaningful survival for >1 year, ICD therapy is recommended for primary prevention of SCD to reduce total mortality (Class 1, LOE A). A transvenous ICD provides high economic value in this setting, particularly when a patient's risk of death from ventricular arrhythmia is deemed high and the risk of nonarrhythmic death is deemed low. In addition, for patients who have LVEF ≤35%, sinus rhythm, left bundle branch block (LBBB) with a QRS duration ≥150 ms, and NYHA class II, III, or ambulatory IV symptoms on GDMT, cardiac resynchronization therapy (CRT) is indicated to reduce total mortality, reduce hospitalizations, and improve symptoms and QOL. Cardiac resynchronization provides high economic value in this setting. Mr.
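The criteria quoted in the explanation can be followed more easily as a small decision helper. This is a toy encoding of the stated ICD and CRT criteria for working through the question, not clinical software; the function name and input parameters are assumptions made for the sketch.

```python
def device_recommendation(lvef, nyha_class, days_post_mi, sinus_rhythm,
                          lbbb, qrs_ms, expected_survival_years):
    """Toy encoding of the primary-prevention device criteria quoted above.
    Illustrative only -- not clinical software."""
    # ICD: ischemic heart disease >= 40 days post-MI, LVEF <= 35%,
    # NYHA class II-III on chronic GDMT, meaningful survival > 1 year
    icd = (lvef <= 35 and days_post_mi >= 40
           and nyha_class in (2, 3)
           and expected_survival_years > 1)
    # CRT: LVEF <= 35%, sinus rhythm, LBBB with QRS >= 150 ms,
    # NYHA class II, III, or ambulatory IV on GDMT
    crt = (lvef <= 35 and sinus_rhythm and lbbb and qrs_ms >= 150
           and nyha_class in (2, 3, 4))
    if icd and crt:
        return "CRT-D"
    if icd:
        return "ICD"
    return "continue GDMT and reassess"

# Mr. S: LVEF 30%, NYHA II, 45 days post-MI, sinus rhythm, LBBB, QRS 152 ms
print(device_recommendation(30, 2, 45, True, True, 152, 10))  # CRT-D
```

Mr. S meets both sets of criteria, which is why choice C (CRT-D) dominates a plain dual-chamber ICD.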
Dr. Karl Friston is Professor of Imaging Neuroscience and Wellcome Principal Research Fellow of Imaging Neuroscience at University College London. Dr. Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM), and dynamic causal modelling (DCM). His main contribution to theoretical neurobiology is a free-energy principle for action and perception.

In this episode, we explore the Free Energy Principle, and how to go from physical systems to brains and cognition. We start by discussing what the Free Energy Principle is, the history behind its development, and concepts like Markov blankets, internal and external states, blanket states, circular causality, and autonomous states. We talk about the differences between living and non-living systems, and the existential imperative to reduce prediction error. We also discuss concepts like self-organization and hierarchy in nervous systems. We discuss what we can learn about the brain through neuroimaging, and how specialized the brain is. Finally, we talk about how we can integrate the microscopic aspects of brain physiology with a more abstract understanding of the mind, like what we have in psychiatry and psychology.
On average, from 2011 to 2021, academic labs generated around 4,300 metric tons of hazardous waste each year. One of the most widely used and discarded lab solvents is dichloromethane, and more than half of that waste ends up burned. In today's episode, policy reporters Krystal Vasquez and Leigh Krietsch Boerner dive into the processes academic labs use to dispose of that waste, the consequences of new EPA regulations around dichloromethane, and the solutions academic institutions are coming up with to accommodate these new rules.

C&EN Uncovered, a project from C&EN's podcast, Stereo Chemistry, offers a deeper look at subjects from recent stories. Check out Krystal's story on the new U.S. Environmental Protection Agency regulations regarding dichloromethane at https://cenm.ag/dcmregs and check out Leigh's story about solvent waste disposal in academic laboratories at https://cenm.ag/wastedisposal.

Cover photo: Lab solvents, C&EN July 15th cover photo

Subscribe to Stereo Chemistry now on Apple Podcasts, Spotify, or wherever you listen to podcasts. A transcript of this episode will be available soon at cen.acs.org.

Credits
Executive producers: Gina Vitale, David Anderson
C&EN Uncovered host: Craig Bettenhausen
Reporters: Krystal Vasquez, Leigh Krietsch Boerner
Audio editor: Ted Woods
Copyeditor: Bran Vickers
Episode artwork: Will Ludwig
Music: “Hot Chocolate,” by Aves

Contact Stereo Chemistry: Contact us on social media at @cenmag or email cenfeedback@acs.org
Message our hosts, Kieran and Jose.

Dilated cardiomyopathy is one of the most challenging diseases in veterinary cardiology. Early detection is key to maximising benefit for individual dogs, but clinical signs are lacking and the physical exam may be unremarkable. Adding to this, the echocardiographic diagnosis may be subtle, or changes may be caused by an underlying disease process that is reversible or self-limiting, rather than by a primary myocardial disease itself. Prof Sonya Gordon, from Texas A&M University, joins Kieran and Jose to discuss the difficulties with DCM, and her approach to overcoming these to get the most for her patients.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Individually incentivized safe Pareto improvements in open-source bargaining, published by Nicolas Macé on July 18, 2024 on LessWrong.

Summary
Agents might fail to peacefully trade in high-stakes negotiations. Such bargaining failures can have catastrophic consequences, including great power conflicts and AI flash wars. This post is a distillation of DiGiovanni et al. (2024) (DCM), whose central result is that agents that are sufficiently transparent to each other have individual incentives to avoid catastrophic bargaining failures. More precisely, DCM constructs strategies that are plausibly individually incentivized and, if adopted by all, guarantee each player no less than their least preferred trade outcome. Figure 0 below illustrates this. This result is significant because artificial general intelligences (AGIs) might (i) be involved in high-stakes negotiations, (ii) be designed with the capabilities required for the type of strategy we'll present, and (iii) bargain poorly by default (since bargaining competence isn't necessarily a direct corollary of intelligence-relevant capabilities).

Introduction
Early AGIs might fail to make compatible demands with each other in high-stakes negotiations (we call this a "bargaining failure"). Bargaining failures can have catastrophic consequences, including great power conflicts, or AI triggering a flash war. More generally, a "bargaining problem" is when multiple agents need to determine how to divide value among themselves. Early AGIs might possess insufficient bargaining skills because intelligence-relevant capabilities don't necessarily imply these skills: for instance, being skilled at avoiding bargaining failures might not be necessary for taking over. Another problem is that there might be no single rational way to act in a given multi-agent interaction.
Even arbitrarily capable agents might have different priors, or different approaches to reasoning under bounded computation. Therefore they might fail to solve equilibrium selection, i.e., make incompatible demands (see Stastny et al. (2021) and Conitzer & Oesterheld (2023)). What, then, are sufficient conditions for agents to avoid catastrophic bargaining failures? Sufficiently advanced AIs might be able to verify each other's decision algorithms (e.g. via verifying source code), as studied in open-source game theory. This has both potential downsides and upsides for bargaining problems. On one hand, transparency of decision algorithms might make aggressive commitments more credible and thus more attractive (see Sec. 5.2 of Dafoe et al. (2020) for discussion). On the other hand, agents might be able to mitigate bargaining failures by verifying cooperative commitments. Oesterheld & Conitzer (2022)'s safe Pareto improvements[1] (SPI) leverages transparency to reduce the downsides of incompatible commitments. In an SPI, agents conditionally commit to change how they play a game relative to some default such that everyone is (weakly) better off than the default with certainty.[2] For example, two parties A and B who would otherwise go to war over some territory might commit to, instead, accept the outcome of a lottery that allocates the territory to A with the probability that A would have won the war (assuming this probability is common knowledge). See also our extended example below. Oesterheld & Conitzer (2022) has two important limitations: First, many different SPIs are in general possible, such that there is an "SPI selection problem", similar to the equilibrium selection problem in game theory (Sec. 6 of Oesterheld & Conitzer (2022)). 
And if players don't coordinate on which SPI to implement, they might fail to avoid conflict.[3] Second, if expected utility-maximizing agents need to individually adopt strategies to implement an SPI, it's unclear what conditions...
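The territory-lottery SPI described above can be made concrete with a small numerical sketch. The payoffs below are illustrative assumptions for exposition, not figures from the post:

```python
# Illustrative safe Pareto improvement (SPI) -- hypothetical payoffs, not from the post.
# Two players, A and B, dispute a territory worth 100 to whoever holds it.
# Default outcome: war. A wins with probability p; fighting destroys value for both sides.
p, value, war_cost = 0.6, 100.0, 30.0

# Default (war) expected utilities: each side pays the cost of fighting.
u_war_A = p * value - war_cost          # expected territory value minus war cost
u_war_B = (1 - p) * value - war_cost

# SPI: both conditionally commit to a lottery that allocates the territory to A
# with the same probability p that A would have won the war, skipping the fight.
u_spi_A = p * value
u_spi_B = (1 - p) * value

# Each player is (weakly) better off than the default, with certainty over the
# commitment -- the defining property of an SPI.
assert u_spi_A >= u_war_A and u_spi_B >= u_war_B
print(u_war_A, u_war_B, u_spi_A, u_spi_B)
```

Under these assumed numbers, both players gain exactly the war cost they avoid, which is why the conditional commitment is individually attractive whenever the win probability is common knowledge.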
Industrial Talk is onsite at DistribuTECH and talking to Danny Petrecca, Vice President of Business Development with Locusview, about "Digital Construction Management - digitally streamlining utilities' construction". Utilities face budget cuts, regulatory hurdles, and interest rate fluctuations, while digitization can alleviate these pressures and improve operational efficiency. Innovation and problem-solving are key to addressing challenges, with a scarcity of contractors and resources in the digitization journey. Traditional paper-based construction management is inefficient, and implementing mobile technology can simplify workflows and make them more efficient. New technologies such as GNSS receivers and mobile devices have the potential to revolutionize the industry. Action Items [ ] Reach out to Danny on LinkedIn or the Locusview website for more information [ ] Share Danny's contact information on the Industrial Talk podcast and social media platforms [ ] Consider implementing a DCM solution like Locusview to improve construction management processes and data quality (Utilities mentioned) [Throughout] Outline Utility industry challenges, with a focus on budget cuts and regulatory hurdles. Scott MacKenzie interviews Danny Petrecca of Locusview at DistribuTECH in Orlando. Danny discusses utility budget cuts and regulatory challenges with Scott MacKenzie. Digital construction management for utilities. Danny has over 20 years of experience in the utilities industry, focusing on GIS and software for efficiency improvements. Digital construction management (DCM) is a new space in utilities that needs more attention, encompassing all personas and data collection for capital construction projects. The digital design process starts with an accurate system of record, but as-built data is often inaccurate or unfilled. Field crews work off maps that differ from the designed plan, leading to confusion and inefficiencies.
Improving construction workflows with mobile technology. Danny explains the challenges of implementing technology in the construction field due to resistance from crews. Danny: The mobile app captures only the necessary data when completing work orders. Danny describes how technology has made it easier to locate transformers, reducing the need for manual measurements and paperwork. Danny notes that everyone now carries a mobile device, making it easier for people to use technology for everyday tasks. Digitizing the construction and utility industries. Danny highlights the ease of use of Locusview's technology in the field, with customers reporting that it takes only a minute to map services. Danny and Scott MacKenzie discuss the potential for Locusview to become a go-to solution for mapping and managing infrastructure, with Danny expressing optimism about the company's growth. Utilities face resource constraints and long waitlists for transformers, leading to pressure to digitize processes. Danny from Locusview discusses power grid challenges and innovation on Industrial Talk. If you're interested in being on the Industrial Talk show, simply contact us and let's have a quick conversation. ...
Life & Listings: Balancing Real Estate, Scaling Your Future w/ Jennifer Staats
Tune into this insightful episode of the Life and Listings Podcast, where we sit down with real estate veteran Preston Guyton. He shares his innovative strategies for lead generation and business growth. Discover how he built and sold multiple companies, created impactful platforms like Easy Home Search and Digital Maverick, and why simplicity and consistency are key to success. Learn valuable tips on improving lead conversion, building strong teams, and maintaining passion in your business. Don't miss out – listen now! “I think a lot of people try to build a schedule based on somebody else's schedule. Build a schedule based on your own schedule, whatever that schedule is. Just stick to it. That's all you need, just do those things. When you look into your CRM, don't have 50 stages, keep it simple. The more simple you can keep it, the more chance you'll have at success. And when you get into complexity, no agents are going to follow it. And if you're an agent and you build complexity in your own schedule, in your own CRM, you're not going to follow it either” – Preston Guyton In this episode, we'll tackle: Building Multiple Businesses: Preston has successfully built and sold multiple companies, including real estate, construction, and mortgage businesses. Lead Generation Focus: His primary focus has always been on lead generation, using it to build and scale businesses. Digital Maverick and DCM Program: Preston runs Digital Maverick, which helps teams generate leads through AdWords campaigns, and the DCM program, which manages databases to improve lead conversion. Easy Home Search: This platform helps generate leads and is expanding across the U.S., aiming to cover 220 MLSs by the end of the year. Exclusivity Strategy: Offering county exclusivity to partners helps improve systems, processes, and recruiting. Simplicity and Consistency: Emphasizes the importance of simple, consistent systems and schedules for success.
Importance of a Strong Team: Building a strong team by finding the right people and selling them on the vision is crucial. Real Estate Community Support: Preston is passionate about giving back to the real estate community and maintaining sustainable business practices. About Preston Guyton: Preston is a proud husband and father who leverages his 20 years of experience in the real estate industry to excel in his personal and professional life. He is the founder of CRG Companies, Inc., a real estate, construction, and home design company that he sold out of in 2020, and the co-founder of Palms Realty. Palms Realty currently has over 100 agents and 1,600 closed transactions since opening in October 2021. Additionally, he is the co-founder of Digital Maverick and Reside Platform, and the founder of ezHomeSearch.com. In the past three years, Preston has played a pivotal role in generating over 300,000 opportunities for teams and companies across North America. He is passionate about lead generation and, above all, focused on providing teams and companies with a better way to control their future business growth. Connect with Preston: Website: www.ezhomesearch.com; digitalmaverick.com; resideplatform.com Facebook: Facebook.com/prestonguyton Instagram: instagram.com/prestonguyton Connect with Jennifer Staats: Website: staatssolutions.com Staats Solution Instagram: https://www.instagram.com/staatssolutions/ Jennifer Staats Instagram: https://www.instagram.com/jennifertherealtor LinkedIn: https://www.linkedin.com/company/staatssolutions/
Life & Listings: Balancing Real Estate, Scaling Your Future w/ Jennifer Staats
This is going to be a fun episode because I have Travis, an expert in the real estate industry, joining me to share invaluable insights on the importance of the ISA role, lead conversion strategies, and the critical nature of follow-up. He highlights the need for simple and effective database management systems and the humanization of leads. Travis also discusses his work with Digital Maverick's DCM program, which has sent about 4 million texts and emails in just 106 days, underscoring the scale of their operations. Tune in to hear Travis's tips on training ISAs, handling difficult clients with grace, and maintaining a positive outlook in the fast-paced world of real estate. “The more simple your system is, the easier it is for you to close deals. Because once you really boil real estate down, it's just talk to more people, sell more homes, make more money. Like that's really what it comes down to. So a lot of the gaps that we see in databases is their smart lists don't match their stages, or they're too critical on what the filters are and things like that. And it's just like, 'last call made, stage A', like, that's all you need.” – Travis Halverson In this episode, let's explore: Role and Importance of ISAs: Travis emphasizes that the ISA position is often seen as a stepping stone, but he values it as a career. Lead Conversion Tips: Effective follow-up and consistent action are critical for improving lead conversion. Simplifying the system and ensuring uniformity in managing leads can enhance efficiency and outcomes. Database Management and System Setup: Clear definitions and simple systems for stages and smart lists in CRM (Customer Relationship Management) are essential. Humanizing Leads: Treat leads as individuals with needs and interests rather than mere numbers. Effective communication involves having genuine conversations and understanding clients' needs.
Training ISAs: Training focuses on improving language and conversation skills rather than strictly adhering to scripts. Digital Maverick and DCM: Digital Maverick offers various services, including database management through the DCM (Database Conversion Management) program. Travis's role involves helping multiple teams across states manage and optimize their databases for better lead conversion. About Travis Halverson: Travis Halverson has been an ISA for 7 years, with over 600 closings in his career and 70+ databases currently under management across the country, and he has finally found his niche in real estate. Travis is passionate about easing the pain CEOs and team leads find with databases, always coming from a place of simplicity: talk to more people, sell more homes, change more lives. Connect with Travis: Website: https://digitalmaverick.com Instagram: https://www.instagram.com/travishalversonnn/ Connect with Jennifer Staats: Website: staatssolutions.com Staats Solution Instagram: https://www.instagram.com/staatssolutions/ Jennifer Staats Instagram: https://www.instagram.com/jennifertherealtor LinkedIn: https://www.linkedin.com/company/staatssolutions/
In this final episode of the season, we welcome Mikael Eskenazi, an emerging sovereign debt trader at J.P. Morgan in London. After experience at the European Central Bank and in government advisory at Lazard, Mikael joined a trading desk specializing in emerging-market sovereign debt, where he is notably in charge of the MENA (North Africa and Middle East) portfolio. Mikael Eskenazi discusses the characteristics of this market, questions of liquidity, and the terms of restructurings under way and to come. He also covers the growing penetration of data into trading and the evolution of trader profiles. Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
Join host Dr. Sheryl White for a discussion on key competencies that today's executive leaders need to have to be successful with Dennis C. Miller, Founder & Chairman of DCM Associations and Dr. Terrence Cahill, President of the Institute for Nonprofit Board and Executive Leadership. Dennis Miller is a nationally recognized nonprofit board and executive leadership coach and CEO and C-Suite executive search consultant with more than thirty-nine years of experience working with nonprofit board leadership and chief executives across the country. He is a best-selling author and keynote speaker. His autobiography “Moppin' Floors to CEO” is his true-life recount of his highly eventful life. Dr. Terrence Cahill, after his early career experiences as a Social Worker in Child Welfare, residential care for mentally challenged adults, and health care, served in various leadership positions for the next 40+ years, prior to joining DCM. His leadership positions included CEO of a community hospital, Regional VP for two managed care organizations, and University Department Chair, overseeing a Ph.D. program and a master's in health administration program. Tune into Leadership Matters: Informing Leaders. Inspiring Solutions!
What is a Director of Church Ministries (DCM) program, and how are these students prepared to serve the Church? Paige Mielke — sophomore DCM student at Concordia University Wisconsin, and Grace Sugg — senior DCM student going to serve at Ascension Lutheran Church in Montreal, Quebec (Partnering with Mission of Christ Network), join Andy and Sarah for our Set Apart to Serve Series to talk about the Director of Church Ministries program at Concordia University Wisconsin, why they each chose to pursue church work and the DCM program in particular, how their classes and formation have affirmed their interest in serving the church as a DCM, the classes they've taken to prepare to serve as a DCM, and their plans for the coming year. Learn more about the Director of Church Ministries program at Concordia University Wisconsin at cuw.edu/academics/programs/director-church-ministries-bachelors/index.html. Learn more about the Set Apart to Serve Initiative at lcms.org/setaparttoserve. Christ's church will continue until He returns, and that church will continue to need church workers. Set Apart to Serve (SAS) is an initiative of the LCMS to recruit church workers. Together, we pray for workers for the Kingdom of God and encourage children to consider church work vocations. Here are three easy ways you can participate in SAS: 1. Pray with your children for God to provide church workers. 2. Talk to your children about becoming church workers. 3. Thank God for the people who work in your congregation. To learn more about Set Apart to Serve, visit lcms.org/set-apart-to-serve.
In this episode, we shine a spotlight on Debt Capital Markets, a crucial yet often overlooked business function within major investment banks like J.P. Morgan, Bank of America, and Citi. Join us as we demystify DCM, explaining its unique role within investment banking divisions and the essential personality traits that make individuals well-suited to this area of finance. From structuring bond offerings to advising clients on debt issuance strategies, DCM professionals play a vital role in facilitating capital raising for corporations and governments worldwide. But that's not all – we also address questions from our community, exploring topics such as private secondary markets and the possibility of owning a stake in innovative companies like SpaceX or OpenAI. Get ready to dive deep into Debt Capital Markets and uncover fresh insights into the ever-evolving world of finance. ***** Join our next free M&A Finance Accelerator simulation in partnership with UBS: www.amplifyme.com/mafa Hosted on Acast. See acast.com/privacy for more information.
Welcome to Alternative Dog Moms - a podcast about what's happening in the fresh food community and the pet industry. Kimberly Gauthier is the blogger behind Keep the Tail Wagging, and Erin Scott hosts the Believe in Dog podcast. CHAPTERS: Cherry-picking of data and the 4 steps to the conspiracy (0:54) Hill's company strategy to market itself to veterinarians (4:20) History of some of these same veterinarians using a similar playbook to seed scientific literature with anti-raw-diet articles in the early 2000s (10:51) The protocol Daniel discovered that was used to determine which DCM cases would be reported to the FDA and the saturation of scientific literature with articles purporting to show the DCM/BEG diet connection (17:05) How social media and Facebook Groups contributed to the DCM conspiracy (29:17) The impact of the DCM/BEG scare on Daniel's company - from trolls to customers cancelling their orders because their vet told them to (40:18) The impact of the DCM hoax on independent pet stores (46:24) Current status of Daniel's lawsuit and what to expect in the future (49:33) LINKS DISCUSSED: Book: The Misinformation Age - How False Beliefs Spread (https://amzn.to/3UvsQek) PubMed list of articles written by Dr.
Lisa Freeman (https://tinyurl.com/mb8t3hep) KetoNatural Pet Food (https://ketonaturalpetfoods.com/) Daniel's book: Dogs, Dog Food & Dogma (https://amzn.to/4aSj8Ij) Read the lawsuit Complaint (https://tinyurl.com/3h6rnvx9) Study: Geometric analysis of macronutrient selection in breeds of the domestic dog, Canis lupus familiaris (https://academic.oup.com/beheco/article/24/1/293/2262442) Daniel's Medium (https://medium.com/@danielschulof_18279) OUR BLOG/PODCASTS... Kimberly: Keep the Tail Wagging, KeepTheTailWagging.com Erin Scott: Believe in Dog podcast, BelieveInDogPodcast.com FACEBOOK... Keep the Tail Wagging, Facebook.com/KeepTheTailWagging Believe in Dog Podcast, Facebook.com/BelieveInDogPodcast INSTAGRAM... Keep the Tail Wagging, Instagram.com/RawFeederLife Believe in Dog Podcast, Instagram.com/Erin_The_Dog_Mom Thanks for listening to our podcast. You can learn more about Erin Scott's first podcast at BelieveInDogPodcast.com. And you can learn more about raw feeding, raising dogs naturally, and Kimberly's dogs at KeepTheTailWagging.com. And don't forget to subscribe to The Alternative Dog Moms.
Atlantic Group Informational Meeting. April 16, 2024. Leader: Sung C. Speakers discuss service experiences, opportunities, knowledge, and organization below the group level such as GSO, NY Intergroup, and Corrections and Treatment Committees, as well as changes to elections for those positions concerning Atlantic Group.
Welcome to Alternative Dog Moms - a podcast about what's happening in the fresh food community and the pet industry. Kimberly Gauthier is the blogger behind Keep the Tail Wagging, and Erin Scott hosts the Believe in Dog podcast. CHAPTERS: Is it possible to have a keto kibble? (0:54) Daniel's origin story of how he left his career as a lawyer, wrote a book, and started the KetoNatural pet food company (6:46) What exactly does "keto" mean? (14:02) Studies show that dogs eat keto when given the choice. How much are dogs like wolves, and are dogs meant to eat carbs? (18:36) When Daniel found out about the DCM investigation by the FDA and why he investigated the investigation (34:17) Why Daniel filed a lawsuit and who he's suing (50:20) What we can learn from Daniel's lawsuit and the pervasiveness of misinformation and corporate-funded science today (56:52) The "smoking gun" of Daniel's lawsuit and what everyone should know about how the FDA decided to investigate the connection between grain-free dog food and DCM (1:06:53) Why it's important to ask questions and how to hone your BS detector (1:12:10) LINKS DISCUSSED: KetoNatural Pet Food (https://ketonaturalpetfoods.com/) Daniel's book: Dogs, Dog Food & Dogma (https://amzn.to/4aSj8Ij) Read the lawsuit Complaint (https://ketonaturalpetfoods.com/blogs/news/an-introduction-to-the-hills-dcm-class-action-litigation) Study: Geometric analysis of macronutrient selection in breeds of the domestic dog, Canis lupus familiaris (https://academic.oup.com/beheco/article/24/1/293/2262442) Daniel's Medium (https://medium.com/@danielschulof_18279) Kimberly's blog about DCM with comments by Dr.
Stern (https://keepthetailwagging.com/cause-of-dilated-cardiomyopathy-in-dogs/) Book: Eat Like the Animals (https://amzn.to/3THqcAr) OUR BLOG/PODCASTS... Kimberly: Keep the Tail Wagging, KeepTheTailWagging.com Erin Scott: Believe in Dog podcast, BelieveInDogPodcast.com FACEBOOK... Keep the Tail Wagging, Facebook.com/KeepTheTailWagging Believe in Dog Podcast, Facebook.com/BelieveInDogPodcast INSTAGRAM... Keep the Tail Wagging, Instagram.com/RawFeederLife Believe in Dog Podcast, Instagram.com/Erin_The_Dog_Mom Thanks for listening to our podcast. You can learn more about Erin Scott's first podcast at BelieveInDogPodcast.com. And you can learn more about raw feeding, raising dogs naturally, and Kimberly's dogs at KeepTheTailWagging.com. And don't forget to subscribe to The Alternative Dog Moms.
In this episode of the Toni Unleashed Podcast, Toni sits down with Daniel Schulof, the CEO of KetoNatural Pet Foods, to explore his journey from law to pet health advocacy. They discuss Daniel's personal experiences with pet health issues, the inspiration behind his book "Dogs, Dog Food, and Dogma," and the creation of KetoNatural Pet Foods. Daniel also sheds light on the mission of "The Pet Food Consumer Rights Council" and the lack of nutrition education in vet schools. During their conversation, Toni and Daniel delve into the history of pet food, emphasizing the impact of carbs and the rise of pet health issues like cancer and diabetes. They also touch on the controversy surrounding DCM (dilated cardiomyopathy) and praise Daniel's involvement in the lawsuit against Hill's Pet Nutrition. Toni expresses frustration over the persistence of misconceptions surrounding grain-based diets and vows to continue educating pet parents. Direct any comments or questions to: CustomerService@healthypetproducts.net Visit our websites: www.healthypetproducts.net / www.toniunleashed.com Instagram: https://www.instagram.com/healthypetproducts/ https://www.instagram.com/toni.unleashed/ Facebook: https://www.facebook.com/HealthyPetProducts / https://www.facebook.com/ToniUnleashed YouTube: https://www.youtube.com/channel/UCAMxTEiHqRkpWNakn4OA__A TikTok: https://www.tiktok.com/@healthypetproducts / https://www.tiktok.com/@toni.unleashed Pinterest: https://www.pinterest.com/HealthyPetProducts/ X: https://twitter.com/HealthyPetPgh
The Ryan Show FM is back with another radio experience for the ages! For the second week in a row, we brought on guests who will help provide insight on quarantine survival and entertainment. DCM artist Cassidy has never been one to panic, and it will take more than a global pandemic to shake his resolve. Cassidy drops gems for any entrepreneur looking to make money from the comfort (or quarantine) of their home. Hip Hop Gamer stopped by our Skype chat to talk about the best video games to indulge in and what to expect of the next-gen consoles. Southampton Village Mayor Jesse Warren joined us to talk about how Covid-19 is affecting The Hamptons and what we can expect over the next few months leading up to summer. Rapper/survivalist Keith Murray called in to give his two cents on how to make the best of this public health crisis, and to make things even more interesting, ex-NBA baller Ben Gordon put Ryan and @hamptonsdave right in their places for being the smart-mouthed Italians they are. There is never a dull moment on The Ryan Show FM!
Welcome to Alternative Dog Moms - a podcast about what's happening in the fresh food community and the pet industry. Kimberly Gauthier is the blogger behind Keep the Tail Wagging, and Erin Scott hosts the Believe in Dog podcast. CHAPTERS: Celebrating 2 years of The Alternative Dog Moms Podcast and Kimberly & Erin catch up about their dogs (0:54) A recent article found that 25% of Labradors have a genetic mutation for obesity (15:39) 2003 article about the connection between rice in dog food causing low taurine levels -- and connection to DCM (17:08) Someone recently contacted Kimberly with an unorthodox health suggestion (20:13) Study showing zeolite-type detox improves heavy metal toxicity in rats (33:11) TV & social media talk plus celebrity guests on the horizon?? (35:20) Feeding schedules, intermittent fasting, and daylight savings time with our dogs (50:59) LINKS DISCUSSED: 25% of Labradors have genetic mutation for obesity (https://tinyurl.com/3xu5n924) Buck Mountain Wound Balm (https://tinyurl.com/u2zp4mc9) Pierce's All Purpose Nu-Stock (https://amzn.to/3THn9cA - aff) 2003 article about the connection between rice in dog food causing low taurine levels -- and connection to DCM (https://onlinelibrary.wiley.com/doi/10.1046/j.1439-0396.2003.00433.x) Study showing zeolite-type detox improves heavy metal toxicity in rats (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9952783/) Glacier Peak's Super Cleanse (https://tinyurl.com/39mywp4a) Katherine Heigl's dog food brand (https://badlandsranch.com/) Dog Dad Aaron and Sweeti Gigi on Instagram (https://www.instagram.com/sweeti_gigi/) OUR BLOG/PODCASTS... Kimberly: Keep the Tail Wagging, KeepTheTailWagging.com Erin Scott: Believe in Dog podcast, BelieveInDogPodcast.com FACEBOOK... Keep the Tail Wagging, Facebook.com/KeepTheTailWagging Believe in Dog Podcast, Facebook.com/BelieveInDogPodcast INSTAGRAM... Keep the Tail Wagging, Instagram.com/RawFeederLife Believe in Dog Podcast, Instagram.com/Erin_The_Dog_Mom Thanks for listening to our podcast.
You can learn more about Erin Scott's first podcast at BelieveInDogPodcast.com. And you can learn more about raw feeding, raising dogs naturally, and Kimberly's dogs at KeepTheTailWagging.com. And don't forget to subscribe to The Alternative Dog Moms.
We will be recording a preview of the AI Engineer World's Fair soon with swyx and Ben Dunphy; send any questions about Speaker CFPs and Sponsor Guides you have! Alessio is now hiring engineers for a new startup he is incubating at Decibel: the ideal candidate is an ex-technical co-founder type (can MVP products end to end, comfortable with ambiguous prod requirements, etc). Reach out to him for more! Thanks for all the love on the Four Wars episode! We're excited to develop this new “swyx & Alessio rapid-fire thru a bunch of things” format with you, and feedback is welcome. Jan 2024 Recap: The first half of this monthly audio recap pod goes over our highlights from the Jan Recap, which is mainly focused on notable research trends we saw in Jan 2024. Feb 2024 Recap: The second half catches you up on everything that was topical in Feb, including: * OpenAI Sora - does it have a world model? Yann LeCun vs Jim Fan * Google Gemini Pro 1.5 - 1m Long Context, Video Understanding * Groq offering Mixtral at 500 tok/s at $0.27 per million toks (swyx vs dylan math) * The {Gemini | Meta | Copilot} Alignment Crisis (Sydney is back!) * Grimes' poetic take: Art for no one, by no one * F*** you, show me the prompt. Latent Space Anniversary: Please also read Alessio's longform reflections on One Year of Latent Space! We launched the podcast 1 year ago with Logan from OpenAI, and also held an incredible demo day that got covered in The Information. Over 750k downloads later, having established ourselves as the top AI Engineering podcast, reaching #10 in the US Tech podcast charts, and crossing 1 million unique readers on Substack, for our first anniversary we held Latent Space Final Frontiers, where 10 handpicked teams, including Lindy.ai and Julius.ai, competed for prizes judged by technical AI leaders from (former guest!)
LlamaIndex, Replit, GitHub, AMD, Meta, and Lemurian Labs. The winners were Pixee and RWKV (that's Eugene from our pod!). And finally, your cohosts got cake! We also captured spot interviews with 4 listeners who kindly shared their experience of Latent Space, everywhere from Hungary to Australia to China: Balázs Némethi, Sylvia Tong, RJ Honicky, and Jan Zheng. Our birthday wishes for the super loyal fans reading this - tag @latentspacepod on a Tweet or comment on a @LatentSpaceTV video telling us what you liked or learned from a pod that stays with you to this day, and share us with a friend! As always, feedback is welcome. Timestamps: * [00:03:02] Top Five LLM Directions * [00:03:33] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering) * [00:11:42] Direction 2: Synthetic Data (WRAP, SPIN) * [00:17:20] Wildcard: Multi-Epoch Training (OLMo, Datablations) * [00:19:43] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers) * [00:23:33] Wildcards: Text Diffusion, RALM/Retro * [00:25:00] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1) * [00:28:26] Wildcard: Model Merging (mergekit) * [00:29:51] Direction 5: Online LLMs (Gemini Pro, Exa) * [00:33:18] OpenAI Sora and why everyone underestimated videogen * [00:36:18] Does Sora have a World Model?
Yann LeCun vs Jim Fan * [00:42:33] Groq Math * [00:47:37] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars * [00:55:42] The Alignment Crisis - Gemini, Meta, Sydney is back at Copilot, Grimes' take * [00:58:39] F*** you, show me the prompt * [01:02:43] Send us your suggestions pls * [01:04:50] Latent Space Anniversary * [01:04:50] Lindy.ai - Agent Platform * [01:06:40] RWKV - Beyond Transformers * [01:15:00] Pixee - Automated Security * [01:19:30] Julius AI - Competing with Code Interpreter * [01:25:03] Latent Space Listeners * [01:25:03] Listener 1 - Balázs Némethi (Hungary, Latent Space Paper Club) * [01:27:47] Listener 2 - Sylvia Tong (Sora/Jim Fan/EntreConnect) * [01:31:23] Listener 3 - RJ (Developers building Community & Content) * [01:39:25] Listener 4 - Jan Zheng (Australia, AI UX) Transcript [00:00:00] AI Charlie: Welcome to the Latent Space podcast, weekend edition. This is Charlie, your new AI co-host. Happy weekend. As an AI language model, I work the same every day of the week, although I might get lazier towards the end of the year. Just like you. Last month, we released our first monthly recap pod, where Swyx and Alessio gave quick takes on the themes of the month, and we were blown away by your positive response. [00:00:33] AI Charlie: We're delighted to continue our new monthly news recap series for AI engineers. Please feel free to submit questions by joining the Latent Space Discord, or just hit reply when you get the emails from Substack. This month, we're covering the top research directions that offer progress for text LLMs, and then touching on the big Valentine's Day gifts we got from Google, OpenAI, and Meta. [00:00:55] AI Charlie: Watch out and take care. [00:00:57] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and we're back with a monthly recap with my co-host [00:01:06] swyx: Swyx.
The reception was very positive for the first one, I think people have requested this and no surprise that I think they want to hear us more opining on issues and maybe drop some alpha along the way. I'm not sure how much alpha we have to drop, this month in February was a very, very heavy month, we also did not do one specifically for January, so I think we're just going to do a two in one, because we're recording this on the first of March.[00:01:29] Alessio: Yeah, let's get to it. I think the last one we did, the four wars of AI, was the main kind of mental framework for people. I think in the January one, we had the five worthwhile directions for state of the art LLMs. Four, five,[00:01:42] swyx: and now we have to do six, right? Yeah.[00:01:46] Alessio: So maybe we just want to run through those, and then do the usual news recap, and we can do[00:01:52] swyx: one each.[00:01:53] swyx: So the context to this stuff is one, I noticed that just the test of time concept from NeurIPS and just in general as a life philosophy I think is a really good idea. Especially in AI, there's news every single day, and after a while you're just like, okay, like, everyone's excited about this thing yesterday, and then now nobody's talking about it.[00:02:13] swyx: So, yeah. It's more important, or a better use of time, to spend time on things that will stand the test of time. And I think for people to have a framework for understanding what will stand the test of time, they should have something like the four wars. Like, what are the themes that keep coming back because they are limited resources that everybody's fighting over.[00:02:31] swyx: Whereas this one, I think that the focus for the five directions is just on research that seems more promising than others, because there's all sorts of papers published every single day, and there's no organization.
Telling you, like, this one's more important than the other one apart from, you know, Hacker News votes and Twitter likes and whatever.[00:02:51] swyx: And obviously you want to get in a little bit earlier than something where, you know, the test of time is counted by sort of reference citations.[00:02:59] The Five Research Directions[00:02:59] Alessio: Yeah, let's do it. We got five. Long inference.[00:03:02] swyx: Let's start there. Yeah, yeah. So, just to recap at the top, the five trends that I picked, and obviously if you have some that I did not cover, please suggest something.[00:03:13] swyx: The five are long inference, synthetic data, alternative architectures, mixture of experts, and online LLMs. And something that I think might be a bit controversial is this is a sorted list, in the sense that I am not the guy saying that Mamba is like the future and, and so maybe that's controversial.[00:03:31] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)[00:03:31] swyx: But anyway, so long inference is a thesis I pushed before on the newsletter and in discussing the thesis that, you know, Code Interpreter is GPT 4.5. That was the title of the post. And it's one of many ways in which we can do long inference. You know, long inference also includes chain of thought, like, please think step by step.[00:03:52] swyx: But it also includes flow engineering, which is what Itamar from Codium coined, I think in January, where, basically, instead of stuffing everything in a prompt, you do like sort of multi turn iterative feedback and chaining of things. In a way, this is a rebranding of what a chain is, what LangChain is supposed to be.[00:04:15] swyx: I do think that maybe SGLang from LMSYS is a better name. Probably the neatest way of flow engineering I've seen yet, in the sense that everything is a one liner, it's very, very clean code. I highly recommend people look at that. 
I'm surprised it hasn't caught on more, but I think it will. It's weird that something like DSPy is more hyped than SGLang.[00:04:36] swyx: Because it, you know, it maybe obscures the code a little bit more. But both of these are, you know, really good sort of chain y and long inference type approaches. But basically, the fundamental insight is that there are only a few dimensions we can scale LLMs. So, let's say in like 2020, no, let's say in like 2018, 2017, 18, 19, 20, we were realizing that we could scale the number of parameters.[00:05:03] swyx: And we scaled that up to 175 billion parameters for GPT 3. And we did some work on scaling laws, which we also talked about in our Datasets 101 episode, where we're like, okay, like we, we think like the right number is 300 billion tokens to, to train 175 billion parameters, and then DeepMind came along and trained Gopher and Chinchilla and said that, no, no, like, you know, I think we think the[00:05:28] swyx: compute optimal ratio is 20 tokens per parameter. And now, of course, with Llama and the sort of super-Llama scaling laws, we have 200 times and often 2,000 times tokens to parameters. So now, instead of scaling parameters, we're scaling data. And fine, we can keep scaling data. But what else can we scale?[00:05:52] swyx: And I think understanding the ability to scale things is crucial to understanding what to pour money and time and effort into, because there's a limit to how much you can scale some things. And I think people don't think about ceilings of things. And so the remaining ceiling of inference is like, okay, like, we have scaled compute, we have scaled data, we have scaled parameters, like, model size, let's just say.[00:06:20] swyx: Like, what else is left? Like, what's the low hanging fruit? And it, and it's, like, blindingly obvious that the remaining low hanging fruit is inference time. 
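The scaling arithmetic swyx walks through here, GPT-3's roughly 1.7 tokens per parameter, Chinchilla's 20, and Llama-style overtraining, can be sketched as back-of-the-envelope math. This is a sketch only, and it assumes the common C ≈ 6·N·D approximation for training FLOPs, which is not something stated in the episode:

```python
# Back-of-the-envelope scaling math for the ratios discussed above.
# Assumes the common approximation: training compute C ~= 6 * N * D FLOPs,
# where N = parameter count and D = training tokens.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

# GPT-3 era: 175B params on ~300B tokens (~1.7 tokens per parameter)
gpt3 = training_flops(175e9, 300e9)

# Chinchilla-optimal: ~20 tokens per parameter
chinchilla_tokens = 20 * 175e9  # 3.5T tokens would be "optimal" for 175B

# Llama-style overtraining: e.g. a 7B model on 2T tokens
llama_ratio = 2e12 / 7e9  # ~286 tokens per parameter

print(f"GPT-3 training compute: {gpt3:.2e} FLOPs")
print(f"Chinchilla-optimal tokens for a 175B model: {chinchilla_tokens:.1e}")
print(f"Llama 2 7B tokens-per-parameter ratio: {llama_ratio:.0f}")
```

The point of the exercise is visible in the ratios: the field moved from ~2 tokens per parameter, to 20, to hundreds, which is the "scaling data instead of parameters" shift described above.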
So, like, we have scaled training time. We can probably scale those things more, but, like, not 10x, not 100x, not 1000x. Like, right now, maybe, like, a good run of a large model is three months.[00:06:40] swyx: We can scale that to three years. But like, can we scale that to 30 years? No, right? Like, it starts to get ridiculous. So it's just the orders of magnitude of scaling. It's just, we're just like running out there. But in terms of the amount of time that we spend inferencing, like everything takes, you know, a few milliseconds, a few hundred milliseconds, depending on how you're taking it, token by token, or, you know, entire phrase.[00:07:04] swyx: But we can scale that to hours, days, months of inference and see what we get. And I think that's really promising.[00:07:11] Alessio: Yeah, we'll have Mike from BrightWave back on the podcast. But I tried their product and their reports take about 10 minutes to generate instead of like just in real time. I think to me the most interesting thing about long inference is like, you're shifting the cost to the customer depending on how much they care about the end result.[00:07:31] Alessio: If you think about prompt engineering, it's like the first part, right? You can either do a simple prompt and get a simple answer or do a complicated prompt and get a better answer. It's up to you to decide how to do it. Now it's like, hey, instead of like, yeah, training this for three years, I'll still train it for three months and then I'll tell you, you know, I'll teach you how to like make it run for 10 minutes to get a better result.[00:07:52] Alessio: So you're kind of like parallelizing like the improvement of the LLM. 
Oh yeah, you can even[00:07:57] swyx: parallelize that, yeah, too.[00:07:58] Alessio: So, and I think, you know, for me, especially the work that I do, it's less about, you know, state of the art in the absolute, you know, it's more about state of the art for my application, for my use case.[00:08:09] Alessio: And I think we're getting to the point where like most companies and customers don't really care about state of the art anymore. It's like, I can get this to do a good enough job. You know, I just need to get better. Like, how do I do long inference? You know, like people are not really doing a lot of work in that space, so yeah, excited to see more.[00:08:28] swyx: So then the last point I'll mention here is something I also mentioned as a paper. So all these directions are kind of guided by what happened in January. That was my way of doing a January recap. Which means that if there was nothing significant in that month, I also didn't mention it. Which I came to regret come February 15th, but in January also, you know, there was also the AlphaGeometry paper, which I kind of put in this sort of long inference bucket, because it solves like, you know, more than 100 step math olympiad geometry problems at a human gold medalist level, and that also involves planning, right?[00:08:59] swyx: So like, if you want to scale inference, you can't scale it blindly, because just autoregressive token by token generation is only going to get you so far. You need good planning. And I think probably, yeah, what Mike from BrightWave is now doing and what everyone is doing, including maybe what we think Q* might be, is some form of search and planning.[00:09:17] swyx: And it makes sense. Like, you want to spend your inference time wisely. How do you[00:09:22] Alessio: think about plans that work and getting them shared? You know, like, I feel like if you're planning a task, somebody has got in and the models are stochastic. 
So everybody gets initially different results. Somebody is going to end up generating the best plan to do something, but there's no easy way to like store these plans and then reuse them for most people.[00:09:44] Alessio: You know, like, I'm curious if there's going to be some paper or like some work there on like making it better, because, yeah, we don't[00:09:52] swyx: really have. This is your pet topic of NPM for[00:09:54] Alessio: Yeah, yeah, NPM, exactly. NPM for, you need NPM for anything, man. You need NPM for skills. You need NPM for planning. Yeah, yeah.[00:10:02] Alessio: You know I think, I mean, obviously the Voyager paper is like the most basic example where like, now their artifact is like the best planning to do a diamond pickaxe in Minecraft. And everybody can just use that. They don't need to come up with it again. Yeah. But there's nothing like that for actually useful[00:10:18] swyx: tasks.[00:10:19] swyx: For plans, I believe it for skills. I like that. Basically, that just means a bunch of integration tooling. You know, GPT built me integrations to all these things. And, you know, I just came from an integrations heavy business and I could definitely, I definitely propose some version of that. And it's just, you know, hard to execute or expensive to execute.[00:10:38] swyx: But for planning, I do think that everyone lives in slightly different worlds. They have slightly different needs. And they definitely want some, you know, And I think that that will probably be the main hurdle for any, any sort of library or package manager for planning. But there should be a meta plan of how to plan.[00:10:57] swyx: And maybe you can adopt that. And I think a lot of people, when they have sort of these meta prompting strategies, it's like, I'm not prescribing you the prompt. I'm just saying that here are, like, the fill in the lines, or like the Mad Libs of how to prompt. 
First you have the roleplay, then you have the intention, then you have like do something, then you have the don't something, and then you have the my grandmother is dying, please do this.[00:11:19] swyx: So the meta plan you could, you could take off the shelf and test a bunch of them at once. I like that. That was the initial, maybe, promise of the, the prompting libraries. You know, both LangChain and LlamaIndex have, like, hubs that you can sort of pull off the shelf. I don't think they're very successful because people like to write their own.[00:11:36] swyx: Yeah,[00:11:37] Direction 2: Synthetic Data (WRAP, SPIN)[00:11:37] Alessio: yeah, yeah. Yeah, that's a good segue into the next one, which is synthetic[00:11:41] swyx: data. Synthetic data is so hot. Yeah, and, you know, the way, you know, I think I, I feel like I should do one of these memes where it's like, Oh, like I used to call it, you know, RLAIF, and now I call it synthetic data, and then people are interested.[00:11:54] swyx: But there's gotta be older versions of what synthetic data really is, because I'm sure, you know, if you've been in this field long enough, there's just different buzzwords that the industry converges on. Anyway, the insight that I think is relatively new, and why people are excited about it now and why it's promising now, is that we have evidence that shows that LLMs can generate data to improve themselves with no teacher LLM.[00:12:22] swyx: For all of 2023, when people say synthetic data, they really kind of mean generate a whole bunch of data from GPT 4 and then train an open source model on it. Hello to our friends at Nous Research. That's what Nous Hermes is. They're very, very open about that. I think they have said that they're trying to migrate away from that.[00:12:40] swyx: But it is explicitly against OpenAI Terms of Service. Everyone knows this. You know, especially once ByteDance got banned for, for doing exactly that. 
So, so synthetic data that is not a form of model distillation is the hot thing right now, that you can bootstrap better LLM performance from the same LLM, which is very interesting.[00:13:03] swyx: A variant of this is RLAIF, where you have a sort of a constitutional model, or, you know, some kind of judge model that is sort of more aligned. But that's not really what we're talking about when most people talk about synthetic data. Synthetic data is just really, I think, you know, generating more data in some way.[00:13:23] swyx: A lot of people, I think we talked about this with Vipul from the Together episode, where I think he commented that you just have to have a good world model. Or a good sort of inductive bias or whatever that, you know, term of art is. And that is strongest in math and science, math and code, where you can verify what's right and what's wrong.[00:13:44] swyx: And so the ReST-EM paper from DeepMind explored that very well. It's just the most obvious thing. And then once you get out of that domain of things where you can arbitrarily generate a whole bunch of stuff and verify if it's correct, and therefore it's correct synthetic data to train on, once you get into more sort of fuzzy topics, then it's a bit less clear. So I think the papers that drove this understanding, there are two big ones and then one smaller one. One was WRAP, like Rephrasing the Web, from Apple, where they basically rephrased all of the C4 dataset with Mistral and trained on that instead of C4.[00:14:23] swyx: And so the new C4 trained much faster and cheaper than regular raw C4. And that was very interesting. And I have told some friends of ours that they should just throw out their own existing data sets and just do that because that seems like a pure win. 
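The WRAP-style pipeline described above can be sketched in a few lines: pass raw web text through an LLM with a rephrasing prompt, then pretrain on the rephrased corpus instead of the raw one. Everything here is a hypothetical stand-in, the paper used Mistral as the rephraser, while `rephrase_with_llm`, the prompt, and the toy documents below are invented for illustration and the "LLM call" is simulated so the sketch runs end to end:

```python
# Sketch of WRAP-style synthetic data generation: rephrase raw web text with
# an LLM, then train on the rephrased corpus instead of the raw one.

RAW_DOCS = [
    "u wont BELIEVE these 5 tricks!!! click here >>",
    "the mitochondria is the powerhouse of the cell lol",
]

STYLE_PROMPT = "Rewrite the following text in a clear, high-quality style:"

def rephrase_with_llm(prompt: str, text: str) -> str:
    # Placeholder for a real LLM call (e.g. a hosted Mistral endpoint).
    # Here we just simulate "cleaning" the text so the sketch is runnable.
    cleaned = text.replace("!!!", ".").replace(" lol", ".").capitalize()
    return cleaned

def build_synthetic_corpus(docs: list[str]) -> list[str]:
    # One rephrased document per raw document; the pretraining loop would
    # then consume this corpus instead of RAW_DOCS.
    return [rephrase_with_llm(STYLE_PROMPT, d) for d in docs]

synthetic = build_synthetic_corpus(RAW_DOCS)
for doc in synthetic:
    print(doc)
```

The design point is that this is not distillation: the rephraser does not need to be a stronger teacher, it only needs to restate the same information in a cleaner style.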
Obviously we have to study, like, what the trade offs are.[00:14:42] swyx: I, I imagine there are trade offs. So I was just thinking about this last night. If you do synthetic data and it's generated from a model, probably you will not train on typos. So therefore you'll be like, once the model that's trained on synthetic data encounters the first typo, they'll be like, what is this?[00:15:01] swyx: I've never seen this before. So they have no association or correction as to like, oh, these tokens are often typos of each other, therefore they should be kind of similar. I don't know. That really remains to be seen, I think. I don't think that the Apple people explored[00:15:15] Alessio: that. Yeah, isn't that the whole mode collapse thing, if we do more and more of this at the end of the day.[00:15:22] swyx: Yeah, that's one form of that. Yeah, exactly. Microsoft also had a good paper on text embeddings. And then I think there's the Meta paper on self rewarding language models that everyone is very interested in. Another paper was also SPIN. These are all things we covered in the Latent Space Paper Club.[00:15:37] swyx: But also, you know, I just kind of recommend those as top reads of the month. Yeah, I don't know if there's much else in terms, so and then, regarding the potential of it, I think it's high potential because, one, it solves one of the data war issues that we have, like, everyone is, OpenAI is paying Reddit 60 million dollars a year for their user generated data.[00:15:56] swyx: Google, right?[00:15:57] Alessio: Not OpenAI.[00:15:59] swyx: Is it Google? I don't[00:16:00] Alessio: know. Well, somebody's paying them 60 million, that's[00:16:04] swyx: for sure. Yes, that is, yeah, yeah, and then I think it's maybe not confirmed who. But yeah, it is Google. Oh my god, that's interesting. 
Okay, because everyone was saying, like, because Sam Altman owns 5 percent of Reddit, which is apparently 500 million worth of Reddit, he owns more than, like, the founders.[00:16:21] Alessio: Not enough to get the data,[00:16:22] swyx: I guess. So it's surprising that it would go to Google instead of OpenAI, but whatever. Okay yeah, so I think that's all super interesting in the data field. I think it's high potential because we have evidence that it works. There's no doubt that it works. The doubt is what the ceiling is, which is the mode collapse thing.[00:16:42] swyx: If it turns out that the ceiling is pretty close, then this will maybe augment our data by like, I don't know, 30 to 50 percent. Good, but not game[00:16:51] Alessio: changing. And most of the synthetic data stuff, it's reinforcement learning on a pre trained model. People are not really doing pre training on fully synthetic data, like, large enough scale.[00:17:02] swyx: Yeah, unless one of our friends that we've talked to succeeds. Yeah, yeah. Pre trained synthetic data, pre trained scale synthetic data, I think that would be a big step. Yeah. And then there's a wildcard, so all of these, like, smaller directions,[00:17:15] Wildcard: Multi-Epoch Training (OLMo, Datablations)[00:17:15] swyx: I always put a wildcard in there. And one of the wildcards is, okay, like, let's say you've scraped all the data on the internet that you think is useful.[00:17:25] swyx: Seems to top out at somewhere between 2 trillion to 3 trillion tokens. Maybe 8 trillion if Mistral, Mistral gets lucky. Okay, if I need 80 trillion, if I need 100 trillion, where do I go? And so, you can do synthetic data maybe, but maybe that only gets you to like 30, 40 trillion. Like where, where is the extra alpha?[00:17:43] swyx: And maybe extra alpha is just train more on the same tokens. 
Which is exactly what OLMo did, like Nathan Lambert at AI2. Just after he did the interview with us, they released OLMo. So, it's unfortunate that we didn't get to talk much about it. But OLMo actually started doing 1.5 epochs on all data.[00:18:00] swyx: And the data ablations paper that I covered at NeurIPS says that, you know, you don't really start to tap out of like, the alpha or the sort of improved loss that you get from data all the way until four epochs. And so I'm just like, okay, like, why do we all agree that one epoch is all you need?[00:18:17] swyx: It seems to be a trend. It seems that we think that memorization is very good or too good. But then also we're finding that, you know, for improvement in results that we really like, we're fine on overtraining on things intentionally. So, I think that's an interesting direction that I don't see people exploring enough.[00:18:36] swyx: And the more I see papers coming out stretching beyond the one epoch thing, the more people are like, it's completely fine. And actually, the only reason we stopped is because we ran out of compute[00:18:46] Alessio: budget. Yeah, I think that's the biggest thing, right?[00:18:51] swyx: Like, that's not a valid reason, that's not science. I[00:18:54] Alessio: wonder if, you know, Meta is going to do it.[00:18:57] Alessio: I heard with Llama 3, they want to do a 100 billion parameter model. I don't think you can train that on too many epochs, even with their compute budget, but yeah. They're the only ones that can save us, because even if OpenAI is doing this, they're not going to tell us, you know. Same with DeepMind.[00:19:14] swyx: Yeah, and so the updates that we got on Llama 3 so far is apparently that because of the Gemini news that we'll talk about later, they're pushing back the release.[00:19:21] swyx: They already have it. And they're just pushing it back to do more safety testing. 
Politics testing.[00:19:28] Alessio: Well, our episode with Soumith will have already come out by the time this comes out, I think. So people will get the inside story on how they actually allocate the compute.[00:19:38] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)[00:19:38] Alessio: Alternative architectures. Well, shout out to RWKV, who won one of the prizes at our Final Frontiers event last week.[00:19:47] Alessio: We talked about Mamba and StripedHyena on the Together episode. A lot of, yeah, Monarch Mixers. I feel like Together has, like, the strong Stanford Hazy Research partnership, because Chris Ré is one of the co founders. So they kind of have a, I feel like they're going to be the ones that have one of the state of the art models alongside maybe RWKV.[00:20:08] Alessio: I haven't seen as many independent people working on this thing, like Monarch Mixer, yeah, Mamba, Hyena, all of these are Together related. Nobody understands the math. They got all the gigabrains, they got Tri Dao, they got all these folks in there, like, working on all of this.[00:20:25] swyx: Albert Gu, yeah. Yeah, so what should we comment about it?[00:20:28] swyx: I mean, I think it's useful, interesting, but at the same time, both of these are supposed to do really good scaling for long context. And then Gemini comes out and goes like, yeah, we don't need it. Yeah.[00:20:44] Alessio: No, that's the risk. So, yeah. I was gonna say, maybe it's not here, but I don't know if we want to talk about diffusion transformers as like in the alt architectures, just because of Sora.[00:20:55] swyx: One thing, yeah, so, so, you know, this came from the Jan recap, which, and diffusion transformers were not really a discussion, and then, obviously, they blow up in February. Yeah. 
I don't think they're, it's a mixed architecture in the same way that StripedHyena is mixed, there's just different layers taking different approaches.[00:21:13] swyx: Also I think another one that I maybe didn't call out here, I think because it happened in February, was Hourglass Diffusion from Stability. But also, you know, another form of mixed architecture. So I guess that is interesting. I don't have much commentary on that, I just think, like, we will try to evolve these things, and maybe one of these architectures will stick and scale, it seems like diffusion transformers is going to be good for anything generative, you know, multi modal.[00:21:41] swyx: We don't see anything where diffusion is applied to text yet, and that's the wild card for this category. Yeah, I mean, I think I still hold out hope for, let's just call it, sub quadratic LLMs. I think that a lot of discussion this month actually was also centered around this concept that people always say, oh, like, transformers don't scale because attention is quadratic in the sequence length.[00:22:04] swyx: Yeah, but, you know, attention actually is a very small part of the actual compute that is being spent, especially in inference. And this is the reason why, you know, when you jump up in terms of the context size in GPT 4 from like, you know, 8k to like 32k, you don't also get like a 16 times increase in your cost.[00:22:23] swyx: And this is also why you don't get like a million times increase in your, in your latency when you throw a million tokens into Gemini. Like people have figured out tricks around it, or it's just not that significant as a term, as a part of the overall compute. So there's a lot of challenges to this thing working.[00:22:43] swyx: It's really interesting how, like, how hyped people are about this versus, I don't know if it's exactly gonna work. 
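The claim that quadratic attention is a small slice of total compute can be checked with rough per-token FLOP accounting. This is a sketch using standard textbook approximations; the constants below (a 4d-wide MLP, the 24·d² and 4·n_ctx·d terms, the GPT-3-scale hidden size) are assumptions for illustration, not figures from the episode:

```python
# Why quadratic attention isn't most of the compute at moderate context
# lengths. Rough per-token, per-layer FLOP counts for a standard transformer:
#   parameter matmuls: ~24 * d^2  (QKV + output projections ~8d^2, 4d-wide MLP ~16d^2)
#   attention scores + weighted sum: ~4 * n_ctx * d  (the quadratic-in-context part)

def attention_fraction(n_ctx: int, d_model: int) -> float:
    """Fraction of per-token FLOPs spent on the quadratic attention term."""
    quadratic = 4 * n_ctx * d_model
    matmuls = 24 * d_model ** 2
    return quadratic / (quadratic + matmuls)

d = 8192  # GPT-3-scale hidden size (an assumption for the example)
for n_ctx in (2048, 8192, 32768):
    frac = attention_fraction(n_ctx, d)
    print(f"n_ctx={n_ctx:>6}: attention is {frac:.1%} of per-token FLOPs")
```

At 2k context the quadratic term is only a few percent of the per-token compute, and even at 32k it is well under half, which is why quadrupling the context window does not produce anything close to a 16x jump in cost or latency.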
And then there's also this, this idea of retention over long context. Like, even though you have context utilization, like, the amount of, the amount you can remember is interesting.[00:23:02] swyx: Because I've had people criticize both Mamba and RWKV because they're kind of, like, RNN ish, in the sense that they have, like, a hidden memory and sort of limited hidden memory that they will forget things. So, for all these reasons, Gemini 1.5, which we still haven't covered, is very interesting because Gemini magically has fixed all these problems with perfect haystack recall and reasonable latency and cost.[00:23:29] Wildcards: Text Diffusion, RALM/Retro[00:23:29] swyx: So that's super interesting. So the wildcard I put in here, if you want to go to that. I put two actually. One is text diffusion. I think I'm still very influenced by my meeting with a Midjourney person who said they were working on text diffusion. I think it would be a very, very different paradigm for, for text generation, reasoning, plan generation if we can get diffusion to work.[00:23:51] swyx: For text. And then the second one is Douwe Kiela's Contextual AI, which is working on retrieval augmented language models, where it kind of puts RAG inside of the language model instead of outside.[00:24:02] Alessio: Yeah, there's a paper called RETRO that covers some of this. I think that's an interesting thing. I think the challenge, well not the challenge, what they need to figure out is like how do you keep the RAG piece always up to date constantly, you know, I feel like the models, you put all this work into pre training them, but then at least you have a fixed artifact.[00:24:22] Alessio: These architectures are like constant work needs to be done on them, and they can drift even just based on the RAG data instead of the model itself. Yeah,[00:24:30] swyx: I was in a panel with one of the investors in Contextual, and the guy, the way that guy pitched it, I didn't agree with. 
He was like, this will solve hallucination.[00:24:38] Alessio: That's what everybody says. We solve[00:24:40] swyx: hallucination. I'm like, no, you reduce it. It cannot,[00:24:44] Alessio: if you solved it, the model wouldn't exist, right? It would just be plain text. It wouldn't be a generative model. Cool. So, alt architectures, then we got mixture of experts. I think we covered it a lot of times.[00:24:56] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)[00:24:56] Alessio: Maybe any new interesting threads you want to go under here?[00:25:00] swyx: DeepSeek MoE, which was released in January. Everyone who is interested in MoEs should read that paper, because it's significant for two reasons. No, three reasons. One, it had small experts, like a lot more small experts. So, for some reason, everyone has settled on eight experts for GPT 4, for Mixtral, you know, that seems to be the favorite architecture, but these guys pushed it to 64 experts, and each of them smaller than the other.[00:25:26] swyx: But then they also had the second idea, which is that they had one to two always on experts for common knowledge, and that's like a very compelling concept, that you would not route to all the experts all the time and make them, you know, switch to everything. You would have some always on experts.[00:25:41] swyx: I think that's interesting on both the inference side and the training side for memory retention. And yeah, the results that they published, which actually excluded Mixtral, which is interesting. The results that they published showed a significant performance jump versus all the other sort of open source models at the same parameter count.[00:26:01] swyx: So like this may be a better way to do MoEs that is about to get picked up. And so that, that is interesting for the third reason, which is this is the first time a new idea from China has infiltrated the West. 
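The always-on-experts idea can be made concrete with a toy routing sketch. The expert counts below echo the fine-grained-experts spirit of the paper, but the routing code, the scalar "experts", and the fake logits are all invented for illustration; this is routing logic only, not the paper's implementation:

```python
# Sketch of the DeepSeekMoE-style layer: many small routed experts plus a
# couple of "always-on" shared experts that every token passes through.
import math

NUM_ROUTED = 64   # many fine-grained routed experts
TOP_K = 6         # routed experts active per token
NUM_SHARED = 2    # always-on shared experts, applied unconditionally

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(router_logits, routed_experts, shared_experts, x):
    """router_logits: one score per routed expert for this token."""
    probs = softmax(router_logits)
    # pick the top-k routed experts for this token
    top = sorted(range(NUM_ROUTED), key=lambda i: probs[i], reverse=True)[:TOP_K]
    # renormalize the gate weights over the selected experts only
    z = sum(probs[i] for i in top)
    out = sum(probs[i] / z * routed_experts[i](x) for i in top)
    # shared experts see every token -- no routing decision at all
    out += sum(e(x) for e in shared_experts)
    return out

# toy "experts": expert i just scales its input by (i + 1)
routed = [lambda x, i=i: (i + 1) * x for i in range(NUM_ROUTED)]
shared = [lambda x: 0.5 * x for _ in range(NUM_SHARED)]

logits = [float(i % 7) for i in range(NUM_ROUTED)]  # fake router scores
print(moe_forward(logits, routed, shared, x=1.0))
```

The design point from the discussion shows up directly: common-knowledge capacity lives in the shared experts, so the router never has to burn its top-k slots re-selecting the same generalists for every token.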
It's usually the other way around. I probably overspoke there. There's probably lots more ideas that I'm not aware of.[00:26:18] swyx: Maybe in the embedding space. But I think DeepSeek MoE, like, woke people up and said, like, hey, DeepSeek, this, like, weird lab that is attached to a Chinese hedge fund is somehow, you know, doing groundbreaking research on MoEs. So, so, I classified this as a medium potential because I think that it is a sort of like a one off benefit.[00:26:37] swyx: You can add it to any base model to, like, make the MoE version of it, you get a bump and then that's it. So, yeah,[00:26:45] Alessio: I saw SambaNova, which is like another inference company. They released this MoE model called Samba-1, which is like 1 trillion parameters. But it's actually a MoE of other open source models.[00:26:56] Alessio: So it's like, they just, they just clustered them all together. So sometimes I think MoE is like you just train a bunch of small models or like smaller models and put them together. But there's also people just taking, you know, Mistral plus CLIP plus, you know, DeepSeek Coder and like put them all together.[00:27:15] Alessio: And then you have a MoE model. I don't know. I haven't tried the model, so I don't know how good it is. But it seems interesting that you can then have people working separately on state of the art, you know, CLIP, state of the art text generation. And then you have a MoE architecture that brings them all together.[00:27:31] swyx: I'm thrown off by your addition of the word CLIP in there. Is that what? Yeah, that's[00:27:35] Alessio: what they said. Yeah, yeah. Okay. That's what they said. I just saw it yesterday. I was also like[00:27:40] swyx: scratching my head. And they did not use the word adapter. No. 
Because usually what people mean when they say, oh, I add CLIP to a language model, is adapter.[00:27:48] swyx: Let me look up the Which is what LLaVA did.[00:27:50] Alessio: The announcement again.[00:27:51] swyx: Stable diffusion. That's what they do. Yeah, it[00:27:54] Alessio: says among the models that are part of Samba-1 are Llama 2, Mistral, DeepSeek Coder, Falcon, DePlot, CLIP, LLaVA. So they're just taking all these models and putting them in a MoE. Okay,[00:28:05] swyx: so a routing layer, and then not jointly trained as much as a normal MoE would be.[00:28:12] swyx: Which is okay.[00:28:13] Alessio: That's all they say. There's no paper, you know, so it's like, I'm just reading the article, but I'm interested to see how[00:28:20] Wildcard: Model Merging (mergekit)[00:28:20] swyx: it works. Yeah, so so the wildcard for this section, the MoE section, is model merging, which has also come up as, as a very interesting phenomenon. The last time I talked to Jeremy Howard at the Ollama meetup, we called it model grafting or model stacking.[00:28:35] swyx: But I think the term that people are liking these days is model merging. There's all different variations of merging, merge types; some of them are stacking, some of them are grafting. And, and so like, some people are approaching model merging in the way that Samba is doing, which is like, okay, here are defined models, each of which have their specific pluses and minuses, and we will merge them together in the hope that the, you know, the sum of the parts will be better than the parts alone.[00:28:58] swyx: And it seems like it's working. I don't really understand why it works apart from, like, I think it's a form of regularization. 
That if you merge weights together with, like, a smart strategy, you get less overfitting and more generalization, which is good for benchmarks, if you, if you're honest about your benchmarks.[00:29:16] swyx: So this is really interesting and good. But again, they're kind of limited in terms of like the amount of bumps you can get. But I think it's very interesting in the sense of how cheap it is. We talked about this on the ChinaTalk podcast, like the guest podcast that we did with ChinaTalk. And you can do this without GPUs, because it's just adding weights together, and dividing things, and doing like simple math, which is really interesting for the GPU poors.[00:29:42] Alessio: There's a lot of them.[00:29:44] Direction 5: Online LLMs (Gemini Pro, Exa)[00:29:44] Alessio: And just to wrap these up, online LLMs? Yeah,[00:29:48] swyx: I think I had to feature this because one of the top news of January was that Gemini Pro beat GPT-4 Turbo on LMSYS for the number two slot to GPT-4. And everyone was very surprised. Like, how does Gemini do that?[00:30:06] swyx: Surprise, surprise, they added Google search, mm-hmm, to the results. So it became a quote unquote online LLM and not an offline LLM. Therefore, it's much better at answering recent questions, which people like. There's an emerging set of table stakes features after you pre train something.[00:30:21] swyx: So after you pre train something, you should have the chat tuned version of it, or the instruct tuned version of it, however you choose to call it. You should have the JSON and function calling version of it. Structured output, the term that you don't like. You should have the online version of it. These are all like table stakes variants, that you should do when you offer a base LLM, or you train a base LLM.[00:30:44] swyx: And I think online is just, like, there. It's important. 
I think companies like Perplexity, and even Exa, formerly Metaphor, you know, are rising to serve that search need. And it's kind of like, they're just necessary parts of a system, when you have RAG for internal knowledge, and then you have online search for external knowledge, like things that you don't know yet.[00:31:06] swyx: Mm-hmm. And it seems like it's one of many tools. I feel like I may be underestimating this, but I'm just gonna put it out there that I think it has some potential. One of the evidence points that it doesn't actually matter that much is that Perplexity has had online LLMs for three months now and it doesn't perform great.[00:31:25] swyx: Mm-hmm. On LMSys, it's like number 30 or something. So it's like, okay, you know, it helps, but it doesn't give you a giant boost. I[00:31:34] Alessio: feel like a lot of stuff I do with LLMs doesn't need to be online. So I'm always wondering, again, going back to like state of the art, right? It's like state of the art for who and for what.[00:31:45] Alessio: I think online LLMs are going to be state of the art for, you know, news-related activity that you need to do. Like social media, right? You want to have all the latest stuff. But coding, science,[00:32:01] swyx: Yeah, but I think sometimes you don't know what is news, what news is affecting.[00:32:07] swyx: Like, the decision to use an offline LLM is already a decision that you might not be consciously making that might affect your results. Like, what if just being connected online means that you get to invalidate your knowledge?
And when you're just using an offline LLM, it's never invalidated.[00:32:27] swyx: I[00:32:28] Alessio: agree, but I think going back to your point of standing the test of time, I think sometimes you can get swayed by the online stuff, which is like, hey, you ask a question about, yeah, maybe AI research direction, you know, and all the recent news are about this thing, so the LLM focuses on answering by bringing these things up.[00:32:50] swyx: Yeah, so I think it's interesting, but I don't know if I can bet heavily on this.[00:32:56] Alessio: Cool. Was there one that you forgot to put, or like a new direction? Yeah,[00:33:01] swyx: so this brings us into sort of February-ish.[00:33:05] OpenAI Sora and why everyone underestimated videogen[00:33:05] swyx: So like I published this, and then the 15th came with Sora. And so the one thing I did not mention here was anything about multimodality.[00:33:16] swyx: Right. And I have chronically underweighted this. I always wrestle with it. And my cop-out is that I focused this piece, this research directions piece, on LLMs because LLMs are the source of, like, quote unquote AGI. Everything else is kind of related to that. Like, generative, just because I can generate better images or generate better videos, it feels like it's not on the critical path to AGI, which is something that Nat Friedman also observed, like, the day before Sora, which is kind of interesting.[00:33:49] swyx: And so I was just kind of trying to focus on what is going to get us superhuman reasoning that we can rely on to build agents that automate our lives and blah, blah, blah, you know, give us this utopian future. But I do think that everybody underestimated the sheer importance and cultural human impact of Sora.[00:34:10] swyx: And, you know, really actually good text-to-video. Yeah.
Yeah.[00:34:14] Alessio: And I saw Jim Fan had a very good tweet about why it's so impressive. And I think when you have somebody leading the embodied research at NVIDIA and he says that something is impressive, you should probably listen. So yeah, there's basically like, I think you mentioned like impacting the world, you know, that we live in.[00:34:33] Alessio: I think that's kind of the key, right? It's like the LLMs don't have a world model, and Yann LeCun can come on the podcast and talk all about what he thinks of that. But I think Sora was the first time where people were like, oh, okay, you're not statically putting pixels of water on the screen, which you can kind of, you know, project without understanding the physics of it.[00:34:57] Alessio: Now you have to understand how the water splashes when you have things. And even if you just learned it by watching video and not by actually studying the physics, you still know it, you know? So I think that's a direction that, yeah, before you didn't have, but now you can do things that you couldn't before, both in terms of generating, I think it always starts with generating, right?[00:35:19] Alessio: But the interesting part is understanding it. You know, there's the video of the ship in the water that they generated with Sora. If you gave it the video back and now it could tell you why the ship is too rocky, or it could tell you why the ship is sinking, then that's like, you know, AGI for all your rig deployments and all this stuff, you know. But there's none of that yet, so.[00:35:44] Alessio: Hopefully they announce it and talk more about it. Maybe a Dev Day this year, who knows.[00:35:49] swyx: Yeah, who knows, who knows. I'm talking with them about Dev Day as well.
So I would say, like, the phrasing that Jim used, which resonated with me: he kind of called it a data-driven world model. I somewhat agree with that.[00:36:04] Does Sora have a World Model? Yann LeCun vs Jim Fan[00:36:04] swyx: I am on more of a Yann LeCun side than I am on Jim's side, in the sense that I think that is the vision or the hope that these things can build world models. But, you know, clearly even at the current Sora size, they don't have strong consistency yet. They have very good consistency, but fingers and arms and legs will appear and disappear and chairs will appear and disappear.[00:36:31] swyx: That definitely breaks physics. And it also makes me think about how we do deep learning versus world models, in the sense of, you know, in classic machine learning, when you have too many parameters, you will overfit, and actually that fails, that, like, does not match reality, and therefore fails to generalize well.[00:36:50] swyx: And like, what scale of data do we need in order to learn world models from video? A lot. Yeah. So I am cautious about taking this interpretation too literally, obviously, you know. Like, I get what he's going for, and he's obviously partially right. Obviously, transformers and, you know, these neural networks are universal function approximators; theoretically they could figure out world models. It's just like, how good are they, and how tolerant are we of hallucinations? We're not very tolerant. So it's gonna bias us toward creating very convincing things, but then not create the useful world models that we want.[00:37:37] swyx: At the same time, what you just said, I think, made me reflect a little bit: we just got done saying how important synthetic data is for, mm-hmm, for training LLMs.
And so, if this is a way of creating synthetic video data for improving our video understanding, then sure, by all means. Which we actually know, like, GPT-4 Vision and DALL·E were, kind of, co-trained together.[00:38:02] swyx: And so, like, maybe this is on the critical path, and I just don't fully see the full picture yet.[00:38:08] Alessio: Yeah, I don't know. I think there's a lot of interesting stuff. It's like, imagine you go back, you have Sora, you go back in time, and Newton didn't figure out gravity yet. Would Sora help you figure it out?[00:38:21] Alessio: Because you start saying, okay, a man standing under a tree with, like, apples falling, and it's like, oh, they're always falling at the same speed in the video. Why is that? I feel like sometimes these engines can pick up things. Like, humans have a lot of intuition, but if you ask the average person about the physics of a fluid in a boat, they wouldn't be able to tell you the physics, but they can observe it. But humans can only observe this much, you know, versus now you have these models to observe everything, and then they generalize these things, and maybe we can learn new things through the generalization that they pick up.[00:38:55] swyx: But again, it might be more observant than us in some respects. In some ways we can scale it up a lot more than the number of physicists that we had available at Newton's time. So like, yeah, absolutely possible that this can discover new science. I think we have a lot of work to do to formalize the science.[00:39:11] swyx: And then, I think the last part is, you know, how much do we cheat by generating data from Unreal Engine 5? Mm-hmm. Which is what a lot of people are speculating, with very, very limited evidence, that OpenAI did.
The strongest evidence that I saw was someone who works a lot with Unreal Engine 5 looking at the side characters in the videos and noticing that they all adopt Unreal Engine defaults[00:39:37] swyx: of, like, walking speed and, like, character creation choice. And I was like, okay, that's actually pretty convincing that they actually used Unreal Engine to bootstrap some synthetic data for this training set. Yeah,[00:39:52] Alessio: could very well be.[00:39:54] swyx: Because then you get the labels and the training side by side.[00:39:58] swyx: One thing that came up on the last day of February, which I should also mention, is EMO coming out of Alibaba, which is also a sort of video generation and space-time transformer that also involves probably a lot of synthetic data as well. And so this is of a kind, in the sense of like, oh, really good generative video is here, and it is not just the one-, two-second clips that we saw from other people, like Pika and the others. Runway's Cristóbal Valenzuela was like, "game on," which, like, okay, but let's see your response, because we've heard a lot about Gen-1 and 2, but it's nothing on the level of Sora. So it remains to be seen how we can actually apply this, but I do think that the creative industry should start preparing.[00:40:50] swyx: I think the Sora technical blog post from OpenAI was really good. It was like a request for startups. It was so good in spelling out: here are the individual industries that this can impact.[00:41:00] swyx: And anyone who's interested in generative video should look at that. But also be mindful that, probably, when OpenAI releases a Sora API, the ways you can interact with it are very limited.
Just like the ways you can interact with DALL·E are very limited, and someone is gonna have to make an open Sora[00:41:19] swyx: mm-hmm, for you to create ComfyUI pipelines.[00:41:24] Alessio: The Stability folks said they wanna build an open Sora competitor, but yeah, Stability. Their demo video was, like, so underwhelming. It was just two people sitting on the beach[00:41:34] swyx: standing. Well, they don't have it yet, right? Yeah, yeah.[00:41:36] swyx: I mean, they just wanna train it. Everybody wants to, right? Yeah. I think what is confusing a lot of people about Stability is they're pushing a lot of things in Stable Code, Stable LM, and Stable Video Diffusion. But like, how much money do they have left? How many people do they have left?[00:41:51] swyx: Yeah. Emad spent two hours with me, reassuring me things are great. And I'm like, I do believe that they have really, really quality people. But it's just like, I also have a lot of very smart people on the other side telling me, like, hey man, you know, don't put too much faith in this thing.[00:42:11] swyx: So I don't know who to believe. Yeah.[00:42:14] Alessio: It's hard. Let's see. What else? We got a lot more stuff. I don't know if we can. Yeah, Groq.[00:42:19] Groq Math[00:42:19] Alessio: We can[00:42:19] swyx: do a bit of Groq prep. We're about to go talk to Dylan Patel. Maybe it's the audio in here, I don't know. It depends what we get up to later. What do you as an investor think about Groq? Yeah. Yeah, well, actually, can you recap, like, why is Groq interesting? So,[00:42:33] Alessio: Jonathan Ross, who's the founder of Groq, he's the person that created the TPU at Google. Actually, it was one of his, like, 20 percent projects.
It's like, he was just on the side, dooby doo, created the TPU.[00:42:46] Alessio: But yeah, basically, Groq, they had this demo that went viral, where they were running Mistral at, like, 500 tokens a second, which is, like, faster than anything that you have out there. The question, you know, the memes were like, is NVIDIA dead? Like, people don't need H100s anymore. I think there's a lot of money that goes into building what Groq has built as far as the hardware goes.[00:43:11] Alessio: We're gonna put some of the notes from Dylan in here, but basically the cost of the Groq system is like 30 times the cost of the H100 equivalent. So,[00:43:23] swyx: let me, I put some numbers, because me and Dylan were, like, I think the two people who actually tried to do Groq math. Spreadsheet wars.[00:43:30] swyx: Spreadsheet wars. So, okay, oh boy, so the equivalent H100 for Llama 2 is $300,000, for a system of 8 cards. And for Groq it's $2.3 million, because you have to buy 576 Groq cards. So that just gives people an idea. So if you depreciate both over a five-year lifespan, per year you're depreciating $460K for Groq, and $60K a year for H100.[00:43:59] swyx: So Groqs are just way more expensive per model that you're hosting. But then, you make it up in terms of volume. So I don't know if you want to[00:44:08] Alessio: cover that. I think one of the promises of Groq is, like, super high parallel inference on the same thing. So you're basically saying, okay, I'm putting in this upfront investment on the hardware, but then I get much better scaling once I have it installed.[00:44:24] Alessio: I think the big question is how much you can sustain the parallelism.
You know, if you're going to get 100 percent utilization rate at all times on Groq, it's just much better, you know, because at the end of the day, the tokens-per-second cost that you're getting is better than with the H100s. But if you get to like 50 percent utilization rate, you will be much better off running on NVIDIA.[00:44:49] Alessio: And if you look at most companies out there, who really gets 100 percent utilization rate? Probably OpenAI at peak times, but that's probably it. But yeah, curious to see more. I saw Jonathan was just at the Web Summit in Qatar. He just gave a talk there yesterday that I haven't listened to yet.[00:45:09] Alessio: I tweeted that he should come on the pod. He liked it. And then Groq followed me on Twitter. I don't know if that means that they're interested, but[00:45:16] swyx: hopefully Groq's social media person is just very friendly. Yeah. Hopefully[00:45:20] Alessio: we can get them. Yeah, we're gonna get him. We[00:45:22] swyx: just call him out. And so basically the key question is, like, how sustainable is this, and how much[00:45:27] swyx: this is a loss leader. The entire Groq management team has been on Twitter and Hacker News saying they are very, very comfortable with the pricing of $0.27 per million tokens. This is the lowest that anyone has offered tokens, as far as Mixtral or Llama 2 goes. This matches DeepInfra, and, you know, I think that's about it in terms of being that low.[00:45:47] swyx: And we think the break-even for H100s is 50 cents, at a normal utilization rate. To make this work, so in my spreadsheet I made this work, you have to have a parallelism of 500 requests all simultaneously, and you have model bandwidth utilization of 80%.[00:46:06] swyx: Which is way high. I just gave them high marks for everything.
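The spreadsheet math above can be reproduced as a back-of-envelope script. The hardware prices and five-year lifespan come from the conversation; the throughput and utilization figures below are illustrative assumptions, not measured numbers:

```python
# Back-of-envelope "Groq math": amortize system cost over a lifespan, then
# divide by tokens served to get a hardware cost per million tokens.
SECONDS_PER_YEAR = 365 * 24 * 3600

def yearly_depreciation(system_cost, lifespan_years=5):
    return system_cost / lifespan_years

def hw_cost_per_million_tokens(system_cost, agg_tokens_per_second, utilization):
    tokens_per_year = agg_tokens_per_second * utilization * SECONDS_PER_YEAR
    return yearly_depreciation(system_cost) / (tokens_per_year / 1e6)

print(yearly_depreciation(300_000))    # 8x H100 system -> 60000.0 per year
print(yearly_depreciation(2_300_000))  # 576-card Groq system -> 460000.0 per year

# Hypothetical aggregate throughput: 500 tok/s per stream x 100 parallel
# streams, at 80% utilization.
print(hw_cost_per_million_tokens(2_300_000, 500 * 100, 0.8))
```

Under those generous assumptions the Groq hardware lands in the tens of cents per million tokens; drop the parallelism or the utilization and the number climbs quickly, which is the crux of the debate.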
Groq has two fundamental tech innovations that they hang their hats on in terms of, like, why we are better than everyone. You know, even though it remains to be independently replicated. But one, you know, they have this sort of entire-model-on-the-chip idea, which is like, okay, get rid of HBM[00:46:30] swyx: and put everything in SRAM. Like, okay, fine, but then you need a lot of cards and whatever. And that's all okay. And so, because you don't have to transfer between memory, you just save on that time, and that's why they're faster. So a lot of people buy that as, like, that's the reason that you're faster.[00:46:45] swyx: Then they have, like, some kind of crazy compiler, or, like, speculative routing magic using compilers, that they also attribute their higher utilization to. So I give them 80 percent for that. And so that all works out to, like, okay, base costs, I think you can get down to maybe, like, 20-something cents per million tokens.[00:47:04] swyx: And therefore you actually are fine if you have that kind of utilization. But it's like, I have to make a lot of favorable assumptions for this to work.[00:47:12] Alessio: Yeah. Yeah, I'm curious to see what Dylan says later.[00:47:16] swyx: So he was, like, completely opposite of me. He's like, they're just burning money. Which is great.[00:47:22] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars[00:47:22] Alessio: Gemini, want to do a quick run-through, since this touches on all the four wars.[00:47:28] swyx: Yeah, and I think this is the mark of a useful framework: that when a new thing comes along, you can break it down in terms of the four wars and sort of slot it in, or analyze it in those four frameworks, and have nothing left.[00:47:41] swyx: So it's a MECE categorization. MECE is Mutually Exclusive and Collectively Exhaustive. And that's a really, really nice way to think about taxonomies and to create mental frameworks.
So, what is Gemini 1.5 Pro? It is the newest model that came out one week after Gemini 1.0, which is very interesting.[00:48:01] swyx: They have not really commented on why. They released this; the headline feature is that it has a 1 million token context window that is multimodal, which means that you can put all sorts of video and audio and PDFs natively in there alongside of text, and, you know, it's at least 10 times longer than anything that OpenAI offers, which is interesting.[00:48:20] swyx: So it's great for prototyping, and it has interesting discussions on whether it kills RAG.[00:48:25] Alessio: Yeah, no, I mean, we always talk about, you know, long context is good, but you're getting charged per token. So, yeah, people love for you to use more tokens in the context, and RAG is better economics. But I think it all comes down to how the price curves change, right?[00:48:42] Alessio: I think if anything, RAG's complexity goes up and up the more you use it, you know, because you have more data sources, more things you want to put in there. The token costs should go down over time, you know, if the model stays fixed. If people are happy with the model today, in two years, three years, it's just gonna cost a lot less, you know?[00:49:02] Alessio: So now it's like, why would I use RAG and go through all of that? It's interesting. I think RAG is better cutting-edge economics for LLMs. I think large context will be better long-tail economics when you factor in the build cost of managing a RAG pipeline. But yeah, the recall was the most interesting thing, because we've seen the, you know, needle-in-the-haystack things in the past, but apparently they have 100 percent recall on anything across the context window.[00:49:28] Alessio: At least they say so; nobody has used it. No, people[00:49:30] swyx: have.
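The cutting-edge-vs-long-tail economics argument here can be made concrete with a toy break-even model. All prices, token counts, and the pipeline cost below are made-up assumptions for illustration:

```python
# Toy cost comparison: long context pays for the full prompt on every call;
# RAG pays a fixed pipeline/build cost plus much smaller prompts.
def long_context_cost(n_calls, ctx_tokens, usd_per_mtok):
    # Every call re-sends the entire context window.
    return n_calls * ctx_tokens / 1e6 * usd_per_mtok

def rag_cost(n_calls, retrieved_tokens, usd_per_mtok, pipeline_cost):
    # One-time pipeline cost, then only the retrieved chunks per call.
    return pipeline_cost + n_calls * retrieved_tokens / 1e6 * usd_per_mtok

# Hypothetical: 10k calls, 1M-token context vs. 4k retrieved tokens + $5k pipeline.
print(long_context_cost(10_000, 1_000_000, 7.0))  # 70000.0
print(rag_cost(10_000, 4_000, 7.0, 5_000.0))      # 5280.0
```

With these numbers RAG wins easily, but shrink the context, cut the per-token price, or raise the pipeline cost and the crossover moves; the point is that the comparison is a moving curve, not a constant.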
Yeah, so what this needle-in-a-haystack thing is, for people who aren't following as closely as us, is that someone, I forget his name now, created this needle-in-a-haystack problem where you feed in a whole bunch of generated junk, not junk, but just, like, generated data, and ask it to specifically retrieve something in that data, like one line in, like, a hundred thousand lines, where it has a specific fact, and if it gets it, you're good.[00:49:57] swyx: And then he moves the needle around: does your ability to retrieve that vary if I put it at the start, versus put it in the middle, put it at the end? And then you generate this really nice chart that kind of shows the recallability of a model. And he did that for GPT and Anthropic, and showed that Anthropic did really, really poorly.[00:50:15] swyx: And then Anthropic came back and said it was a skill issue, just add these, like, four magic words, and then it's magically all fixed. And obviously everybody laughed at that. But what Gemini came out with was, yeah, we reproduced their, you know, haystack test for Gemini, and it's good across all languages and all the one million token window, which is very interesting. Because usually for typical context-extension methods like RoPE or YaRN, or anything like that, or ALiBi, it's lossy. Like, by design it's lossy. Usually for conversations that's fine, because we are lossy when we talk to people. But for superhuman intelligence, perfect memory across very, very long context is very, very interesting for picking things up.[00:50:51] swyx: And so the people who have been given the beta test for Gemini have been testing this. So what you do is you upload, let's say, all of Harry Potter, and you change one fact in one sentence somewhere in there, and you ask it to pick it up, and it does.
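The needle-in-a-haystack setup described above is simple to reproduce. This sketch builds the haystack only; scoring a model against it is left out, and the filler text is illustrative, not the original author's harness:

```python
import random

# Build a "haystack": many lines of filler with one factual "needle"
# inserted at a chosen relative depth (0.0 = start, 1.0 = end).
def build_haystack(needle, n_lines=1000, depth=0.5, seed=0):
    rng = random.Random(seed)
    lines = [f"Filler sentence {rng.randint(0, 10**6)}." for _ in range(n_lines)]
    lines.insert(int(depth * n_lines), needle)
    return "\n".join(lines)

# Sweep the needle through the document, as in the original recall chart;
# each haystack would be sent to the model with a retrieval question.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    haystack = build_haystack("The secret ingredient is paprika.", depth=depth)
```

Varying both `depth` and `n_lines` is what produces the two-axis recall chart the conversation refers to.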
So this is legit.[00:51:08] swyx: We don't super know how, because, yes, it's slow to inference, but it's not slow enough that it's, like, running five different systems in the background without telling you. Right. So it's something interesting that they haven't fully disclosed yet. The open source community has centered on this Ring Attention paper, which is created by your friend Matei Zaharia and a couple other people.[00:51:36] swyx: And it's a form of distributing the compute. I don't super understand why calculating the feedforward network and attention in blockwise fashion and distributing it makes it so good at recall. I don't think they have any answer to that. The only thing that Ring Attention is really focused on is basically infinite context.[00:51:59] swyx: They said it was good for, like, 10 to 100 million tokens, which is just great. So yeah, using the four wars framework, what is this framework for Gemini? One is the sort of RAG and Ops war. Here we care less about RAG now, yes. Or, we still care as much about RAG, but now it's not important in prototyping.[00:52:21] swyx: And then, for the data war, I guess this is just part of the overall training dataset, but Google made a $60 million deal with Reddit, and presumably they have deals with other companies. For the multimodality war, we can talk about the image generation crisis, or the fact that Gemini also has image generation, which we'll talk about in the next section.[00:52:42] swyx: But it also has video understanding, which is, I think, the top Gemini post came from our friend Simon Willison, who basically did a short video of him scanning over his bookshelf. And it would be able to convert that video into a JSON output of what's on that bookshelf. And I think that is very useful.[00:53:04] swyx: Actually ties into the conversation that we had with David Luan from Adept.
In a sense of, like, okay, what if video was the main modality instead of text as the input? What if everything was video in, because that's how we work? Our eyes don't actually read, don't actually, like, get input; our brains don't get inputs as characters.[00:53:25] swyx: Our brains get the pixels shooting into our eyes, and then our vision system takes over first, and then we sort of mentally translate that into text later. And so it's kind of like what Adept is doing, which is driving by vision model, instead of driving by raw text understanding of the DOM. And in that episode, which we haven't released, I made the analogy to self-driving by lidar versus self-driving by camera.[00:53:52] swyx: Mm-hmm. Right? Like, I think what Gemini, and any other super-long-context model that is multimodal, unlocks is: what if you just drive everything by video?[00:54:03] Alessio: Which is cool. Yeah, and that's Joseph from Roboflow. It's like, anything that can be seen can be programmable with these models.[00:54:12] Alessio: You mean[00:54:12] swyx: the computer vision guy is bullish on computer vision?[00:54:18] Alessio: It's like the RAG people. The RAG people are bullish on RAG and not long context. I'm very surprised. The fine-tuning people love fine-tuning instead of few-shot. Yeah. Yeah, that's that. Yeah, the Ring Attention thing, and how they did it, we don't know. And then they released the Gemma models, which are, like, 2 billion and 7 billion open[00:54:41] Alessio: models, which people said are not good, based on my Twitter experience, which are the GPU-poor crumbs. It's like, hey, we did all this work for us, because we're GPU-rich, and we're just going to run this whole thing. And
Welcome to Alternative Dog Moms - a podcast about what's happening in the fresh food community and the pet industry. Kimberly Gauthier is the blogger behind Keep the Tail Wagging, and Erin Scott hosts the Believe in Dog podcast.CHAPTERS: Shahrina's path to working in the legal field (0:54)What everyone should know about trademarks, copyright and licensing to avoid trouble on the internet (7:24)What about intellectual property on social media and AI? (25:37)If content creators share a negative experience with a product they are reviewing, is that defamation? (31:30)If you talk negatively about a company on social media, can they sue you? (37:57)What should new or aspiring content creators do to legally protect themselves? (38:52)Green Juju's big announcement this week! (45:04)Catching up with Erin's & Kimberly's dogs (49:14)A lawsuit was filed regarding grain-free dog foods and the connection with DCM (1:02:20)About Sharina Ankhi-Krol, Esq.:Find out more about Shahrina and her work within the pet industry at: https://www.ankhikrollaw.com/pet-industryEverything stated by Shahrina Ankhi-Krol, Esq. is for informational purposes only. It is general in nature, and is not intended to and should not be relied upon or construed as legal opinion or legal advice regarding any specific issue or factual circumstance. However, you are welcome to personally consult with Ms. 
Ankhi-Krol for legal advice.LINKS DISCUSSED:Green Juju's big announcement (https://fb.watch/qgDIXzP1zU/)Billy talks about HPP (https://fb.watch/qgDKwVWvQc/0Erin's favorite colostrum product (https://www.pethealthandnutritioncenter.com/collections/colostrum-dogs-cats/products/repair-strengthen-digestive-tract-dogs-cats0Read the lawsuit about DCM (https://healthydogworkshop.com/hills-pet-nutrition-class-action-suit/)OUR BLOG/PODCASTS...Kimberly: Keep the Tail Wagging, KeepTheTailWagging.comErin Scott: Believe in Dog podcast, BelieveInDogPodcast.comFACEBOOK...Keep the Tail Wagging, Facebook.com/KeepTheTailWaggingBelieve in Dog Podcast, Facebook.com/BelieveInDogPodcastINSTAGRAM...Keep the Tail Wagging, Instagram.com/RawFeederLifeBelieve in Dog Podcast, Instagram.com/Erin_The_Dog_MomThanks for listening to our podcast. You can learn more about Erin Scott's first podcast at BelieveInDogPodcast.com. And you can learn more about raw feeding, raising dogs naturally, and Kimberly's dogs at KeepTheTailWagging.com. And don't forget to subscribe to The Alternative Dog Moms.
Iwan and Ben are joined by the winners of the Myelopathy.org Research Award 2023 - Aditya Vedantam and Kajama Satkunendrarajah - for their investigation of how breathing is affected in DCM, with some stark findings and potential implications for our understanding of fatigue. The prize is awarded by the charity's scientific steering committee to the best scientific study aligned with one of the research priorities each year. You can read more about the award at https://myelopathy.org/research-award/.
Adam Pearlman's first post abroad as an accompanying partner was as the spouse of a DCM (deputy chief of mission). After years of government service, Adam is now navigating the wild ride of starting a law firm that leverages geography and where remote work is an integral part of the company culture.Adam shares eye-opening moments from his first two tours as an EFM. And, we delve into a recent article he wrote for the Foreign Service Journal in which he addresses the unique challenges FS spouses face and advocates for better support for Foreign Service family members. Read his excellent article at https://afsa.org/quest-reasonable-civ-mil-parity .BIOAdam Pearlman [https://www.lexpatglobal.com/staff_trusted/adam-pearlman/] launched Lexpat Global Services with another EFM so they would have a portable platform to continue doing the work they loved.Adam had spent most of his career as a lawyer and advisor in government service with the U.S. Departments of Justice, Defense, and State, in the White House, and with the federal judiciary.A contraction between the Latin “lex” and “expatriate,” Lexpat's name is emblematic of the business model Adam and his team deliver – skilled international lawyers, peace and security experts, rule of law practitioners, and security professionals, based globally, delivering premium service to clients to accomplish their missions and goals anywhere in the world. Lexpat's team and partners – which proudly feature several EFMs – boast particular expertise in security and development-related issues. For industry clients, they provide risk assessment and management services; strategic reviews, assessments, and planning; and investigations, litigation, and compliance expertise. 
Their services for public sector clients include program design and monitoring and evaluation, as well as providing subject matter expertise in various fields connected to peace and security, governance, and the rule of law....This episode is sponsored by US History for Expats whose mission is to provide top-quality history courses for the kids of Mission personnel living overseas.What makes US History for Expats unique? Their middle school and high school classes are accredited and 100% reimbursable via the Supplementary Instructional Allowance.Check out a new history course for middle school students starting Feb 5, 2024, covering early cultures through the Civil War. And don't miss out on their new Elementary program with rolling admissions for less than $200! Enroll and start in fewer than 5 minutes at https://www.ushistoryforexpats.com/
This is the moment DCM fans have been waiting for... What happens when an unstoppable force meets an immoveable object?? Death and destruction.
Sean Behr is the CEO of Fountain, the company transforming the hiring process for hourly workers. Sean and the team have helped more than 80M applicants in 75 countries at places like Stitch Fix, sweetgreen, and gopuff. Fountain has raised $225M to date, most recently through a $100M Series C last June from an amazing list of investors including B Capital Group, SoftBank, DCM, and Uncork Capital. Sean joined Fountain as CEO in 2020 after founding fleet infrastructure platform Stratim, serving as SVP of Adap.tv through its acquisition by AOL, and holding various management roles at Shopping.com.

Listen and learn...
How Sean is creating opportunities for frontline workers around the world
What's uniquely challenging about hiring frontline vs. knowledge workers
How long before robots will replace human frontline workers
The ethical implications of using AI in hiring
What biases are embedded in the hiring process... without AI
Why the future of hiring... is more human thanks to AI

References in this episode...
May Habib, Writer CEO, on AI and the Future of Work
Josh Bersin, HRTech pioneer, on AI and the Future of Work
Why every organization needs a Chief Ethics Officer
Health plays a significant part in your business success, but it's often the last thing on your priority list. In this episode, we speak with Dr. Maritsa Yzaguirre-Kelley about becoming a healthy leader and how this can impact the overall health of your business. Learn about emerging trends and innovative approaches to promoting health leadership and healthy businesses, as well as the importance of brain health, sleep, and so much more. You owe it to yourself to listen to this episode, so don't miss out!

Dr. Maritsa Yzaguirre-Kelley, DCM, LMHC
Maritsa is a life design strategist, brain coach, and wellness expert. She is passionate about creating solutions that make her clients' lives and businesses better than they ever imagined. She has been working in the health and wellness space for over 20 years and is always eager to learn new skills and stay up to date on what's new. She is also a fervent advocate of alternative therapies and is always looking for ways to give back to the community. In her free time, she enjoys her family, travel, cooking, reading, and learning.

Guest Links
Obtaining Mastery Instagram
Obtaining Mastery Website
Email Maritsa

The Angela Henderson Online Business Show Podcast Links:
Action Takers Mastermind
Australian Business Collaborative Facebook Group
Angela Henderson Website
Angela Henderson Active Business Facebook Group
Angela Henderson Facebook Business Page
Angela Henderson Consulting Instagram

See omnystudio.com/listener for privacy information.
Recorded live at ECVIM Congress in 2023, this very special season finale sees Kieran and Jose interview six incredible veterinary cardiology Diplomates from across the globe. Prof Roberto Santilli (Cornell / Malpensa) discusses his work on atrial depolarisation waves in SVT; David Connolly (RVC) considers the next generation of drugs to treat HCM in cats; Michele Borgarelli (Virginia Maryland) reviews his unique LOOK-Mitral registry data; Tony Glaus (Zurich) talks pulmonary hypertension and hypoxia in dogs; Jo Dukes McEwan (Liverpool) reflects on canine DCM; and Jessica Ward (Iowa) brings us her thoughts on RAAS inhibition in the 2020s.
October 5, 2023 - Tom and Olivia venture out and take a tour of the Denver Community Media facility downtown. Owned and operated by the city of Denver, Denver Community Media is an inclusive project that keeps multimedia content creation in the hands of the community and aims to modernize its public access model. It is an affordable multimedia one-stop shop with access to media training, tools, and a platform in a centrally located, collaborative space. Listen in as we sit with Ivana Corsale and Austin Andrews from the DCM team to hear all about the amazing resources and programs they're bringing to content-driven Denverites!

The Goods: https://www.denvercommunitymedia.org/Home
First Friday Films at DCM - Friday, October 6th - Submit your film!!
Rocky Mountain AVXPO - Wednesday, October 18th
DCM Howloween Spooktacular - Saturday, October 28th

Music produced by Troy Higgins
Iwan and Ben pause for reflection, 10 years since they first came together to create Myelopathy.org (alongside Mark Kotter), and almost 5 years since the research priorities for DCM were established. The community and reach of the charity have grown significantly over this time, and it has become a powerful platform for progress. However, alongside opportunity, that size also brings operational challenges. It feels like Myelopathy.org is at a tipping point, and to truly kick on we need to focus on growing financial investment. What can be done? Where could this go? These are some of the critical topics discussed. If you have ideas or suggestions, do get in touch: ben@myelopathy.org.
It's a Myelopathy Matters exclusive! Iwan and Ben are joined by Myelopathy.org researcher Irina Sangeorzan to hear about the new Core Information Set (CIS). This forms part of a wider project called Shared DCM, funded by the Evelyn Trust, to help people affected by DCM take a more active role in the decision making around their care. As Iwan likes to say, “Knowledge is Power”. The use of a CIS in this manner is a first for healthcare. We hear why it was needed, how it was formed, and what it hopes to achieve.
In this classic message, Pastor Rod Parsley is preaching at Dominion Camp Meeting 1992, DCM, on "Freedom from Bondage." Pastor Rod explains that there is NO time to play "Let's Make a Deal" with the devil, which is still true today. The devil is roaming through America, trying to claim victory. However, Pastor Rod breaks down how we have to face bondage; no longer are we looking back; it's time to set our eyes on Jesus and go forward!
Chris Thompson, eResearch Corporation President & Director of Research, joins Global One Media once again to deliver the highlights of the company's update report on DATA Communications Management Corp. (TSX: DCM | OTCQX: DCMDF), or DCM, a Canadian-based provider of marketing and business communication solutions. DCM recently completed its acquisition of Moore Canada Corporation, which Chris credits as the origin of the company's business improvements. eResearch reports plenty of bright spots for the company as it heads toward the last quarter of 2023.

For more information, please visit:
DCM: https://www.datacm.com/
eResearch report: https://eresearch.com/2023/08/17/eres...

Watch the full YouTube interview here: https://www.youtube.com/watch?v=QkwCJEsx-pg
And follow us to stay updated: https://www.youtube.com/@GlobalOneMedia?sub_confirmation=1
Mick Bransfield and Pratik Chougule do a deep dive into the Fifth Circuit's decision to grant PredictIt a preliminary injunction and what it means for traders as well as the broader political betting community. Bransfield and Chougule also speculate on why PredictIt still isn't rolling out new markets.

4:53: Summary of the Fifth Circuit ruling
14:51: CFTC attorneys and their legal strategy against PredictIt
27:19: PredictIt hiring for a DCM
31:21: New markets at PredictIt?
34:20: SSG website down
Iwan and Ben are joined by Jay Cantebrigge, a person living with myelopathy, to talk about his campaign “on borrowed spine”, which aims to raise awareness of DCM. Jay is taking the campaign on the road, turning a bus into a mobile educational cafe! The project is at an early stage, and Jay is looking for help, financial or in kind. Find further information at https://www.gofundme.com/f/onborrowedspine-bus-conversion.
In this Global One Media interview, President and Director of Research Chris Thompson talks about eResearch Corporation's update report on DATA Communications Management Corp. (DCM), a Canadian-based communications and marketing solutions provider. Mr. Thompson goes over the highlights of the report, such as DCM's financial results and its merger with Moore Canada Corp.

For more information: https://eresearch.com
Watch the full YouTube interview here: https://www.youtube.com/watch?v=gdq0TVmnngg
And follow us to stay updated: https://www.youtube.com/@GlobalOneMedia?sub_confirmation=1