America's banks are sitting on over $400B in losses. The FDIC says your deposits are safe—but the math says otherwise. Is your savings account the next casualty? Taylor Kenney breaks it down.

Questions on Protecting Your Wealth with Gold & Silver? Schedule a Strategy Call Here ➡️ https://calendly.com/itmtrading/podcast or call 866-349-3310
Few brands impress me the way Kizik has when it comes to everyday innovation that people can use in their daily lives. In this episode, I sit down with Elizabeth Drori, the Chief Marketing Officer leading the charge at Kizik, the brand redefining how we think about footwear. These aren't just shoes; they're hands-free, high-tech, and built for real life. Whether you're a parent juggling kids, an athlete on the go, or someone seeking comfort and ease, Kizik is changing the game. And yes, I've worn the same pair for four years — that's how good they are.

Elizabeth shares the evolution of Kizik — from early tech innovation to a growing global brand — and how they're making shoes that are stylish, comfortable, and built to move with you. We dive into marketing strategy, inclusive design, and why hands-free isn't just a feature, it's the future.

Here are just a few highlights:
* How Kizik is disrupting the footwear industry with hands-free technology
* Why their “no-sacrifice” approach balances performance, comfort, and design
* Scaling from DTC to retail with stores, Amazon, and international launches
* The role of inclusivity and lifestyle in their expanding product line
* Marketing to parents, athletes, older adults, and everyone in between

Join me, Ramon Vela, in listening to the episode and discovering why hands-free might be the only way forward when it comes to shoes.

For more on Kizik, visit: https://kizik.com/

If you enjoyed this episode, please leave The Story of a Brand Show a rating and review. Plus, don't forget to follow us on Apple and Spotify. Your support helps us bring you more content like this!

Today's Sponsors:

Compass Rose Ventures - Advisor for CPG Brands: https://compassroseventures.com/contact/
Compass Rose Ventures can help your CPG brand increase customer lifetime value, expand into the US market, create an omnipresent omnichannel footprint, optimize customer journeys, build brand communities, and more. Visit the link above to learn more.
Color More Lines: https://www.colormorelines.com/get-started
Color More Lines is a team of ex-Amazonians and e-commerce operators who help brands grow faster on Amazon and Walmart. With a performance-based pricing model and flexible contracts, they've generated triple-digit year-over-year growth for established sellers doing over $5 million per year. Use code “STORY OF A BRAND” and receive a complimentary market opportunity assessment of your e-commerce brand and marketplace positioning.
Have you been applying the 80/20 rule all wrong? Most entrepreneurs have a surface-level understanding that 80% of results come from 20% of efforts, but as Perry Marshall reveals, this barely scratches the surface of this powerful principle's true potential.

TODAY'S WIN-WIN: Use the 80/20 principle to identify new opportunities for your business.

LINKS FROM THE EPISODE:
You can visit our guest's website at: https://www.perrymarshall.com/
Get a copy of our guest's book: CLICK HERE.
Attend our Franchise Sales Training Workshop: https://bigskyfranchiseteam.com/franchisesalestraining/
If you are ready to franchise your business or take it to the next level: CLICK HERE.
Connect with our guest on social:
https://x.com/perrymarshall
https://www.facebook.com/perrymarshallcom
You can hear Tom's interview on his podcast: https://podcasts.apple.com/us/podcast/path-2-freedom/id1505372686

ABOUT OUR GUEST:
Perry Marshall is a renowned business strategist, best-selling author, and expert in digital advertising. Known for Ultimate Guide to Google Ads and 80/20 Sales and Marketing, Perry has consulted across 300+ industries, shaping the $400B digital ad space. He's the founder of the $10M Evolution 2.0 Prize and co-founder of the Cancer & Evolution Working Group. Perry's insights bridge the worlds of marketing, science, and entrepreneurship, making him a must-listen for anyone seeking growth and innovation.

ABOUT BIG SKY FRANCHISE TEAM:
This episode is powered by Big Sky Franchise Team. If you are ready to talk about franchising your business, you can schedule your free, no-obligation franchise consultation online at: https://bigskyfranchiseteam.com/.

The information provided in this podcast is for informational and educational purposes only and should not be considered financial, legal, or professional advice. Always consult with a qualified professional before making any business decisions.
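Marshall's point that most people apply 80/20 too shallowly is easy to see numerically: the principle is fractal, so the top 20% of the top 20% also holds an outsized share. A minimal sketch using hypothetical client-revenue numbers (the figures below are illustrative, not from the episode):

```python
def top_share(values, frac):
    """Fraction of total revenue held by the top `frac` of clients."""
    vals = sorted(values, reverse=True)
    k = max(1, round(len(vals) * frac))  # at least one client
    return sum(vals[:k]) / sum(vals)

# Hypothetical revenue per client (sums to 100 for easy reading)
revenues = [40, 25, 10, 8, 6, 4, 3, 2, 1, 1]

print(top_share(revenues, 0.20))  # 0.65 — top 20% of clients bring 65% of revenue
print(top_share(revenues, 0.04))  # 0.40 — the "20% of the 20%" brings 40%
```

The second call is the fractal step: applying 80/20 to the top slice again, which is the layer Marshall argues most entrepreneurs never reach.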
The views and opinions expressed by guests are their own and do not necessarily reflect those of the host, Big Sky Franchise Team, or our affiliates. Additionally, this podcast may feature sponsors or advertisers, but any mention of products or services does not constitute an endorsement. Please do your own research before making any purchasing or business decisions.
What does Coinbase's pivot back to Bitcoin mean for this cycle's price run-up and the adoption of Bitcoin?

► Bitcoin Well: bitcoinwell.com/simplybtc
► Ledn: https://learn.ledn.io/simply
Simply Bitcoin clients get 0.25% off their first loan
► Coldcard: https://store.coinkite.com/promo/simply
PROMO CODE: SIMPLY for 5% OFF
► Stamp Seed: www.stampseed.com
PROMO CODE: SIMPLY for a 15% discount
► HIVE Digital Technologies: hivedigitaltech.com
► Casa: casa.io/simply
PROMO CODE: SIMPLY for 5% OFF your first year of Casa Standard or Premium
► Bitcoin Conference 2025: b.tc/conference/2025
PROMO CODE: “SIMPLY” for 10% off

FOLLOW US
► https://twitter.com/SimplyBitcoinTV
► https://twitter.com/bitvolt
► https://twitter.com/Optimistfields
► Nostr: npub1vzjukpr2vrxqg2m9q3a996gpzx8qktg82vnl9jlxp7a9yawnwxfsqnx9gc

JOIN OUR TELEGRAM, GIVE US A MEME TO REVIEW!
► https://t.me/SimplyBitcoinTV

SUBSCRIBE TO OUR YOUTUBE
► https://bit.ly/3QbgqTQ

SUPPORT US
► On-Chain: bc1qpm5j7wsnk46l2ukgpm7w3deesx2mdrzcgun6ms
► Lightning: simplybitcoin@walletofsatoshi.com

#bitcoin #bitcoinnews #simplybitcoin

DISCLAIMER: All views in this episode are our own and DO NOT reflect the views of any of our guests or sponsors.

Copyright Disclaimer: under section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, education and research. If you are or represent the copyright owner of materials used in this video and have a problem with the use of said material, please contact Simply Bitcoin.
Jigar Shah served as Director of the Loan Programs Office (LPO) at the U.S. Department of Energy (DOE) from March 2021 to January 2025, where he oversaw a $400B budget. Prior, Shah was co-founder and President at Generate Capital, where he focused on helping entrepreneurs accelerate decarbonization solutions through the use of low-cost infrastructure-as-a-service financing. Generate has raised over $10 billion, investing in 50+ technology and development partnerships with more than 2,000 assets globally.

Prior to Generate Capital, Shah founded SunEdison, a company that pioneered “pay as you save” solar financing (i.e., PPAs). After SunEdison, Shah served as the founding CEO of the Carbon War Room, a global non-profit founded by Sir Richard Branson to help entrepreneurs address climate change.

Here are six topics we covered in the podcast:

1. Post-LPO Reset. After managing $107B in deals at DOE's Loan Programs Office, Jigar Shah hit pause and rebranded as a “podcaster.” He's taking time to reflect before diving into the next chapter.
2. Climate VC Is Broken. Shah says the 100x-return VC model doesn't fit climate tech's reality. He pushes for an “East Coast” model: aim for 18% IRR, win 7 of 10 bets, and skip the moonshots.
3. Evergreen Capital > 2-and-20. At Generate Capital, Shah turned down big checks to build an evergreen structure that aligns with long-term climate infrastructure. It's less lucrative for managers, but way better for founders.
4. FOAK Risk, Explained. He breaks project finance into five risks: tech, feedstock, offtake, construction, and ops. LPO, unlike most investors, can stomach execution risk, like 12 methane pyrolysis reactors, not just one.
5. Think Like a Developer. Clean tech needs dev capital like real estate: risky early bets, then stable returns once built. It's not “risk-free”—just “risk-you-can-understand.”
6. Deep Tech's Fatal Flaw. Too many founders chase giant, low-margin markets. Shah says to start with high-margin niches (like InventWood selling to data centers) and then scale.
David Rubenstein helped pioneer modern private equity—building The Carlyle Group into a $400B global investment firm from a modest D.C. office and a relentless fundraising streak. But beyond PE, his legacy spans presidential libraries, historic American artifacts, and a lifelong obsession with civic contribution. In this episode, David shares how he raised billions without a background in finance, why owning a baseball team was more than just a trophy purchase—and what building true generational success really means beyond wealth alone.

Chapters:
00:00 Trailer
00:53 Introduction
01:40 Family, wealth, class
14:40 Happiness disparity and longevity
19:25 I need more to give away more
25:04 The relentless fundraiser
33:53 Kids and travel
36:06 No track record, the great white buffalo
38:59 Business and politics
43:53 Fired from Washington
45:52 Fundraising, presidents, podcast guests
48:04 Private equity and sports
53:44 Expenses — no charges
55:49 Waking up with energy
57:26 Preserving copies
1:02:05 Organizational architecture
1:03:41 Bury me in my plane
1:08:11 Not a big luxury spender
1:10:32 What “grit” means to David
1:10:50 Outro

Mentioned in this episode: Andrew Rubenstein, Stanford University, Bill Gates, Melinda Gates, Warren Buffett, Morgan Guaranty Trust Company, International Business Machines Corporation (IBM), Procter & Gamble Company, Forbes 400, Duke University, University of Chicago, Harvard Corporation, Johns Hopkins University, California Public Employees' Retirement System (CalPERS), President of the United States of America, Donald J. Trump, Jimmy Carter, John F. Kennedy Center for the Performing Arts, Smithsonian Institution, National Gallery of Art, George W. Bush, Barack Obama, Joe Biden, Arianna Huffington, Xi Jinping, Hank Greenberg, Stephen A. Schwarzman, Tim Cook, Jeff Bezos, Baltimore Orioles, Fred Trammell Crow, Harlan Crow, National Basketball Association (NBA), National Football League (NFL), Arctos Partners LP, Anthropic, Magna Carta Libertatum, Declaration of Independence, Emancipation Proclamation, Abraham Lincoln, US Constitution, National Archives, Lincoln Memorial, Thomas Jefferson Memorial, Mount Vernon, Monticello, Montpelier, Mark Cuban, Paul McCartney

Connect with David:
X: @DM_Rubenstein

Connect with Joubin:
X: @Joubinmir
LinkedIn: Joubin Mirzadegan
Email: grit@kleinerperkins.com
kleinerperkins.com
Paul Lane and Marc Fandetti discuss Tesla's earnings possibly offering some hope for the stock market. As the dollar falters, the world's central banks tread a tightrope. Vance calls for a new era of US-India ties as trade talks advance. Gold hits $3,500, setting another record. Warren Buffett timed his Apple stock sale to perfection. Wide federal workforce may shrink by 1.2 million. Trump's millionaire tax would generate $400B in revenue.
Do you have a defined referral plan? Do you get as many referrals from CPAs and attorneys as you send to them? Have you segmented your book by how likely clients are to refer people to you, not just their net worth? Molly Bennard is the president of international operations at Focus Financial, a firm with over $400B in AUM (as of December 2024). She joins Capital Group practice management consultants Jon Wainman and Max McQuiston to help advisors create, improve and execute their client and center of influence (COI) referral plans.
Hey Folks, Alex here, celebrating an absolutely crazy (to me) milestone of #100 episodes of ThursdAI!
Kassandra and Martin were looking for a new home in the core of the city, and they hoped to include some green features in their home. But then they discovered the Blatchford Carbon Neutral Community, and as Martin said, "Boom. Done." They found Landmark Homes in Blatchford, and their home is solar powered, super energy efficient, and heated by a highly efficient air-source heat pump. The big BONUS! This net-zero home has only one utility bill (electricity), and there's a good chance the cost will net out to zero over the course of the year. Thanks to their heat pump, they're also protected from heat domes, future energy costs and more.

GreenEnergyFutures.ca CKUA.com Podcast
BLOG: GreenEnergyFutures.ca
VIDEO: https://youtu.be/Oxm6nTABUtM

This is part 5 in our Blatchford, Carbon Neutral Community Series. SUBSCRIBE AND HEAR MORE THAN 400 inspiring stories about our green energy future.
Perry Marshall is one of the most expensive business strategists in the world. He is endorsed in Forbes and Inc. Magazine and has authored ten books. At London's Royal Society he announced the world's largest science research challenge, the $10 million Evolution 2.0 Prize. His reinvention of the Pareto Principle is published in Harvard Business Review, and his Google book laid the foundations for the $400B digital advertising industry. He has a degree in engineering and lives with his family in Chicago.

Top 3 Value Bombs
1. The network effect is the most powerful moat you could possibly have around your castle (business).
2. The “If If or else guarantee” recognizes that in every transaction, somebody is taking the risk, and it should not be the customer.
3. The only way to succeed is at the high end. A superior product for which you can command a higher price with a good margin is literally the only way to make a lot of money as a small business.

Get a copy of Perry's book on Amazon - Detox, Declutter, Dominate

Sponsors
Shopify - Be ready to sell wherever your customers are scrolling or strolling: on the web, in your store, in their feed, and everywhere in between! Sign up for your $1-per-month trial period at Shopify.com/onfire
NetSuite - Over 41,000 businesses have future-proofed their business with NetSuite, by Oracle, THE number one cloud ERP. Download the CFO's Guide to AI and Machine Learning for free at NetSuite.com/fire
Today, you will hear my conversation with one of the OGs of the Internet, Perry Marshall. Let me tell you a little bit about Perry. He is undoubtedly one of the most experienced Internet Marketers of all time. If there's ever an internet marketing hall of fame, he would be inducted in the first round. I remember crossing Perry's path way back in my early days as an online business builder, long before this podcast was even an idea, let alone a full show. I remember thinking how smart he was then, and let me tell you -- he's even smarter now. You will hear Perry talk about his early days online, how he got involved, how he positioned himself with Google AdWords until he recognized hitching his brand to a platform he didn't own was not a great idea. He's written a ton about how to market and sell online, including “80/20 Sales and Marketing -- the Definitive Guide to Working Less and Making More.” You all know how much I love a good book title -- this is exceptional. And what's inside will blow your mind. Perry has consulted across 300+ industries, shaping the $400B digital ad space. He's the founder of the $10M Evolution 2.0 Prize and co-founder of the Cancer & Evolution Working Group. Perry's insights bridge the worlds of marketing, science, and entrepreneurship, making him a must-listen for anyone seeking growth and innovation. You can find Perry on his website: https://perrymarshall.com Get Perry's offer here: https://perrymarshall.com/podcast = = = = = Join the AI Conversation You've Been Waiting to Have without the Hype or the Noise. Get my books here: The River Only Runs One Way The Far Unlit Unknown = = = = = Thank you for supporting the show! Your 5-star rating and review makes a difference -- it's easy to leave one and it helps spread the word about the podcast! Best social places to connect with me: @maryloukayser (Instagram) https://www.linkedin.com/in/mlkayser/ (LinkedIn)
Today's show moves on to the Trump years, with many more shows to come, no doubt. It's been only one week, but some outlines of where Trump may be going are emerging. Today the discussion focuses on the emerging spending cuts and how they're designed to pay for Trump's new tax cuts. Some DOGE austerity targets are discussed: how $200B/yr will cover just half of Trump's $400B/yr tax cuts, and how tariffs and assumed GDP growth are supposed to cover the rest. That still leaves a $1-$1.5 trillion budget deficit for 2025, however. What's behind Trump's talk about providing another $500B stimulus for artificial intelligence? Or his 'drill baby drill'? What are the multiple roles for tariffs? In global geopolitics, some scenarios behind Trump's emerging attitude to the Ukraine war, and why the war won't end until late 2025. The likely Trump strategy behind Greenland and the Panama Canal.
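The budget arithmetic discussed above can be restated in a few lines (all figures are the episode's rough numbers in billions of dollars per year, not official projections):

```python
# Back-of-envelope version of the episode's budget math ($B/yr).
tax_cuts = 400        # Trump's new tax cuts
doge_savings = 200    # DOGE austerity target: covers half the cuts

# The remainder is assumed to be covered by tariffs + projected GDP growth.
uncovered = tax_cuts - doge_savings
print(uncovered)  # 200

# Even with that gap closed, the episode notes a projected 2025 deficit
# of roughly $1,000-1,500B still remains.
```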
Sponsorships and applications for the AI Engineer Summit in NYC are live! (Speaker CFPs have closed.) If you are building AI agents or leading teams of AI Engineers, this will be the single highest-signal conference of the year for you.

Right after Christmas, the Chinese Whale Bros ended 2024 by dropping the last big model launch of the year: DeepSeek v3. Right now on LM Arena, DeepSeek v3 has a score of 1319, right under the full o1 model, Gemini 2, and 4o latest. This makes it the best open-weights model in the world as of January 2025.

There has been a big recent trend of Chinese labs releasing very large open-weights models, with Tencent releasing Hunyuan-Large in November and Hailuo releasing MiniMax-Text this week, both over 400B parameters in size. However, these extra-large language models are very difficult to serve.

Baseten was the first of the inference neocloud startups to get DeepSeek V3 online, because of their H200 clusters, their close collaboration with the DeepSeek team, and early support of SGLang, a relatively new vLLM alternative that is also used at frontier labs like X.ai. Each H200 has 141 GB of VRAM with 4.8 TB per second of bandwidth, meaning that you can use 8 H200s in a node to serve DeepSeek v3 in FP8, taking into account KV cache needs.

We have been close to Baseten since Sarah Guo introduced Amir Haghighat to swyx, and they supported the very first Latent Space Demo Day in San Francisco, which was effectively the trial run for swyx and Alessio to work together! Since then, Philip Kiely also led a well-attended workshop on TensorRT-LLM at the 2024 World's Fair.
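The napkin math for why one 8xH200 node fits DeepSeek v3 in FP8 can be sketched as follows (the ~671B total parameter count is a rough public figure, not stated in the episode):

```python
# Rough memory-fit check for serving DeepSeek v3 in FP8 on one 8xH200 node.
# Assumption (not from the episode): ~671B total parameters, 1 byte/param in FP8.
PARAMS_B = 671          # DeepSeek v3 total parameters, in billions
FP8_BYTES = 1           # FP8 stores one byte per parameter
H200_VRAM_GB = 141      # per-GPU VRAM, as stated above
GPUS_PER_NODE = 8

weights_gb = PARAMS_B * FP8_BYTES            # ~671 GB of weights
node_vram_gb = H200_VRAM_GB * GPUS_PER_NODE  # 1128 GB per node

# What's left over is the budget for KV cache and activations.
headroom_gb = node_vram_gb - weights_gb
print(node_vram_gb, weights_gb, headroom_gb)  # 1128 671 457
```

This is why the "taking into account KV cache needs" caveat matters: the weights alone fit comfortably, but long-context serving eats into the remaining ~457 GB quickly.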
We worked with him to get two of their best representatives, Amir and Lead Model Performance Engineer Yineng Zhang, to discuss DeepSeek, SGLang, and everything they have learned running mission-critical inference workloads at scale for some of the largest AI products in the world.

The Three Pillars of Mission Critical Inference

We initially planned to focus the conversation on SGLang, but Amir and Yineng were quick to correct us that the choice of inference framework is only the simplest, first choice of three things you need for production inference at scale:

“I think it takes three things, and each of them individually is necessary but not sufficient:

* Performance at the model level: how fast are you running this one model on a single GPU, let's say. The framework that you use there can matter. The techniques that you use there can matter: the MLA technique, for example, that Yineng mentioned, or the CUDA kernels that are being used. But there are also techniques being used at a higher level, things like speculative decoding with draft models or with Medusa heads. These are implemented in the different frameworks, or you can even implement them yourself, but they're not necessarily tied to a single framework. Using speculative decoding gets you massive upside when it comes to being able to handle high throughput. But that's not enough. Invariably, that one model running on a single GPU is going to get too much traffic that it cannot handle.

* Horizontal scaling at the cluster/region level: At that point, you need to horizontally scale it. That's not an ML problem. That's not a PyTorch problem. That's an infrastructure problem: how quickly do you go from a single replica of that model to 5, to 10, to 100? That's the second pillar that is necessary for running these mission-critical inference workloads. And what does it take to do that? Some people are like, 'Oh, you just need Kubernetes, and Kubernetes has an autoscaler and that just works.' That doesn't work for these kinds of mission-critical inference workloads, and you end up catching yourself wanting, bit by bit, to rebuild those infrastructure pieces from scratch. This has been our experience. And then, going even a layer beyond that: Kubernetes runs in a single cluster, tied to a single region. When it comes to inference workloads and needing GPUs more and more, we're seeing that you cannot meet the demand inside of a single region. A single cloud, a single region. In other words, if a single model wants to horizontally scale up to 200 replicas, each of which is, let's say, 2 H100s or 4 H100s or even a full node, you run into limits of the capacity inside of that one region. What we had to build to get around that was the ability to have a single model have replicas across different regions. So there are models on Baseten today that have 50 replicas in GCP East, 80 replicas in AWS West, and Oracle in London, etc.

* Developer experience for Compound AI Systems: The final pillar is wrapping the power of the first two in a very good developer experience, to be able to afford certain workflows like the ones I mentioned around multi-step, multi-model inference workloads, because more and more we're seeing that the market is moving towards those more complex workflows.”
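The multi-region placement Amir describes (50 replicas in GCP East, 80 in AWS West, more in Oracle London) can be illustrated with a toy placement routine. This is a hypothetical sketch of the idea, not Baseten's actual scheduler, and the region names and capacity numbers are made up for illustration:

```python
# Illustrative greedy multi-region placement: fill each region up to its
# GPU capacity cap until the target replica count is met.
def place_replicas(target, region_capacity):
    """Spread `target` replicas across regions, respecting per-region caps."""
    placement = {}
    remaining = target
    for region, cap in region_capacity.items():
        take = min(cap, remaining)
        if take > 0:
            placement[region] = take
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        # No single cloud/region combination can absorb the demand.
        raise RuntimeError(f"short {remaining} replicas of total capacity")
    return placement

# Hypothetical capacities; a 130-replica model spills across two regions.
print(place_replicas(130, {"gcp-east": 50, "aws-west": 80, "oci-london": 40}))
# {'gcp-east': 50, 'aws-west': 80}
```

The point of the sketch is the failure mode it encodes: once the target exceeds any one region's capacity, cross-region placement stops being optional, which is exactly why a single-cluster Kubernetes autoscaler is not enough.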
We think they said it very well.

Show Notes
* Amir Haghighat, Co-Founder, Baseten
* Yineng Zhang, Lead Software Engineer, Model Performance, Baseten

Full YouTube Episode
Please like and subscribe!

Timestamps
* 00:00 Introduction and Latest AI Model Launch
* 00:11 DeepSeek v3: Specifications and Achievements
* 03:10 Latent Space Podcast: Special Guests Introduction
* 04:12 DeepSeek v3: Technical Insights
* 11:14 Quantization and Model Performance
* 16:19 MOE Models: Trends and Challenges
* 18:53 Baseten's Inference Service and Pricing
* 31:13 Optimization for DeepSeek
* 31:45 Three Pillars of Mission Critical Inference Workloads
* 32:39 Scaling Beyond Single GPU
* 33:09 Challenges with Kubernetes and Infrastructure
* 33:40 Multi-Region Scaling Solutions
* 35:34 SGLang: A New Framework
* 38:52 Key Techniques Behind SGLang
* 48:27 Speculative Decoding and Performance
* 49:54 Future of Fine-Tuning and RLHF
* 01:00:28 Baseten's V3 and Industry Trends

Baseten's previous TensorRT-LLM workshop: Get full access to Latent Space at www.latent.space/subscribe
Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June.

Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100!

Full YouTube Episode with Slides/Charts
Like and subscribe and hit that bell to get notifs!

Timestamps
* 00:00 Welcome to the 100th Episode!
* 00:19 Reflecting on the Journey
* 00:47 AI Engineering: The Rise and Impact
* 03:15 Latent Space Live and AI Conferences
* 09:44 The Competitive AI Landscape
* 21:45 Synthetic Data and Future Trends
* 35:53 Creative Writing with AI
* 36:12 Legal and Ethical Issues in AI
* 38:18 The Data War: GPU Poor vs. GPU Rich
* 39:12 The Rise of GPU Ultra Rich
* 40:47 Emerging Trends in AI Models
* 45:31 The Multi-Modality War
* 01:05:31 The Future of AI Benchmarks
* 01:13:17 Pionote and Frontier Models
* 01:13:47 Niche Models and Base Models
* 01:14:30 State Space Models and RWKV
* 01:15:48 Inference Race and Price Wars
* 01:22:16 Major AI Themes of the Year
* 01:22:48 AI Rewind: January to March
* 01:26:42 AI Rewind: April to June
* 01:33:12 AI Rewind: July to September
* 01:34:59 AI Rewind: October to December
* 01:39:53 Year-End Reflections and Predictions

Transcript

[00:00:00] Welcome to the 100th Episode!

[00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx for the 100th time today.

[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.

[00:00:19] Alessio: Yeah, I know.

[00:00:19] Reflecting on the Journey

[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round when we first started that we didn't like, and we tried to change the question.
The answer
[00:00:32] swyx: was Cursor and Perplexity.

[00:00:34] Alessio: Yeah, I love Midjourney. It's like, do you really not like anything else?

[00:00:38] Alessio: Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research-driven content. You know, we had, like, Tri Dao, we had, you know, Jeremy Howard, we had more folks like that.

[00:00:47] AI Engineering: The Rise and Impact

[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side.

[00:00:54] Alessio: Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, Oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your Rise of the AI Engineer post just kind of gave people somewhere to congregate, and then the AI Engineer Summit.

[00:01:11] Alessio: And that's why when I look at our growth chart, it's kind of like a proxy for the AI engineering industry as a whole, which is almost like, even if we don't do that much, we keep growing just because there's so many more AI engineers. So did you expect that growth, or did you expect it would take longer for the AI engineer thing to become, you know, something everybody talks about today?

[00:01:32] swyx: So, the sign that we have won is that Gartner puts it at the top of the hype curve right now. So Gartner has called the peak in AI engineering. I did not expect, um, to what level. I knew that I was correct when I called it because I did like two months of work going into that. But I didn't know, you know, how quickly it could happen, and obviously there's a chance that I could be wrong.

[00:01:52] swyx: But I think, like, most people have come around to that concept. Hacker News hates it, which is a good sign.
But there's enough people that have defined it. You know, GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big letters. So I think it's kind of arrived as a meaningful and useful definition.

[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote-unquote drama that happened behind the scenes at the World's Fair in June, because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.

[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not put a firm definition there, because most of the successful definitions are necessarily underspecified, and it's actually useful to have different perspectives, and you don't have to specify everything from the outset.

[00:02:45] Alessio: Yeah, I was at, um, AWS re:Invent, and the line to get into, like, the AI engineering talk, so to speak, which is, you know, applied AI and whatnot, was like, there were like hundreds of people just in line to go in.

[00:02:56] Alessio: I think that's kind of what enabled people, right? Which is what [00:03:00] you kind of talked about. It's like, Hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on the Substack.

[00:03:11] Alessio: But yeah, it's been a heck of a, heck of a two years.

[00:03:14] swyx: Yeah.

[00:03:15] Latent Space Live and AI Conferences

[00:03:15] swyx: You know, I was trying to view the conference as like, so NeurIPS is, I think, like 16, 17,000 people. And the Latent Space Live event that we held there was 950 signups, I think. The AI world, the ML world, is still very much research heavy.
And that's as it should be, because ML is very much in a research phase.

[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious. Like, it'll always be low status, because at the end of the day, you're manipulating APIs or whatever.

[00:03:51] swyx: But yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference, and basically everything I do seems to make sense. And I think we'll talk about the trends here that apply.

[00:04:09] swyx: It's just very strange. So, like, there's a mix of, like, keeping on top of research while not being a researcher, and then putting that research into production. So, like, people always ask me, like, why are you covering NeurIPS? Like, this is a ML research conference, and I'm like, well, yeah, I mean, we're not going to, like, understand everything or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.

[00:04:32] swyx: And then actually, like, when I talk to the researchers, they actually get very excited, because they're like, oh, you guys are actually caring about how this goes into production, and that's what they really, really want. The measure of success previously was just peer review, right? Getting 7s and 8s at their, um, academic review conferences and stuff. Like, citations is one metric, but money is a better metric.

[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2,200 people on the live stream or something like that. Yeah, yeah, 2,200 on the live stream. So [00:05:00] I try my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan.
Way more than it was in the chat on YouTube.[00:05:06] swyx: I would say that I actually also created[00:05:09] swyx: Latent Space Live in order to address flaws that are perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very oriented towards the PhD student job market, right? Like, basically everyone's there to advertise their research and skills and get jobs.[00:05:28] swyx: And then obviously all the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info, it's not great, because you have to read between the lines and bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is, domain by domain, go through and recap the best of the year.[00:05:48] swyx: Survey the field. And there are, like, I think ICML had a position paper track, and NeurIPS added a datasets and benchmarks track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every conference has, you know, a last day of workshops and stuff that provide more of an overview.[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing, and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models, post-transformers, synthetic data, small models, and agents. We also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.[00:06:39] swyx: It was very awkward.
And I'm really, really thankful for Jonathan Frankle, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it, but he was pro-scaling. And I think everyone who is in AI is pro-scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.[00:06:57] swyx: So that means you're saying Sam Altman's wrong, [00:07:00] you're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went up on stage and said pre-training has hit a wall, and data has hit a wall. So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.[00:07:17] swyx: So it's kind of interesting that the consensus going in was that we're not done scaling, like, you should believe in the Bitter Lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM along with everyone's favorite OG in AI, Juergen Schmidhuber.[00:07:34] swyx: He said that, um, pre-training has hit a wall, or like, we've run into a different kind of wall. And then we have, you know, Jonathan Frankle, Ilya, and then Noam Brown all saying variations of the same thing: that we have hit some kind of wall in the status quo of what scaling large pre-trained models has looked like, and we need a new thing.[00:07:54] swyx: And obviously the new thing, people are either calling it inference-time compute or test-time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense, because calling it "test time" has a very pre-training bias, implying that the only reason for running inference at all is to test your model.[00:08:11] swyx: That is not true, right? Yeah. So I quite agree that
OpenAI seems to have adopted, or the community seems to have adopted, this terminology of ITC instead of TTC. And that makes a lot of sense, because now we care about inference, even right down to compute optimality. Like, I actually interviewed this author who revisited the Chinchilla paper.[00:08:31] swyx: The Chinchilla paper is compute-optimal training, but what is not stated in there is that it's pre-training compute-optimal training. And once you start caring about inference, compute-optimal training, you have a different scaling law, in a way that we did not know last year.[00:08:45] Alessio: I wonder, because Jonathan is also on the side of attention is all you need.[00:08:49] Alessio: Like, he had the bet with Sasha. So I'm curious, like, he doesn't believe in scaling, but he thinks the transformer, I wonder if he's still...[00:08:56] swyx: So, obviously everything is nuanced, and you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah, he still believes that we can scale more.[00:09:04] swyx: He just assumed the character to be very game for playing this debate. So even more kudos to him, that he assumed a position that he didn't believe in and still won the debate.[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as, like, stuff that we were going to talk about. And we'll[00:09:30] Alessio: publish[00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains, and hopefully people can benefit from the work that our speakers have done.[00:09:39] swyx: But I think it's, uh, these are just good slides.
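For reference, the pre-training versus inference-aware compute optimality being discussed here can be sketched with the Chinchilla-style loss. This is an illustrative sketch only: the loss form and the rough 6ND / 2ND cost approximations are the standard ones from the compute-optimal-training literature, not figures from the talk.

```latex
% Chinchilla-style parametric loss, with N = parameters, D = training tokens:
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Pre-training-only budget: C \approx 6ND, which gives the familiar
% "scale N and D together" compute-optimal point.
% Once lifetime inference tokens D_{\text{inf}} matter, the budget is roughly
C_{\text{total}} \approx 6ND + 2N D_{\text{inf}}
% so for large D_{\text{inf}} the optimum shifts toward a smaller N
% trained on more tokens D, i.e. a different scaling law than pre-training alone.
```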
And I've been looking for sort of end-of-year recaps from people.[00:09:44] The Competitive AI Landscape[00:09:44] swyx: The field has progressed a lot. You know, I think the max Elo in 2023 on LMSys used to be 1200 for LMSys Elos. And now everyone is at least at 1275 in their Elos, and this is across Gemini, ChatGPT, [00:10:00] Grok,[00:10:01] swyx: 01.ai with their Yi-Large model, and Anthropic, of course. It's a very, very competitive race. There are multiple frontier labs all racing, but there is a clear tier-zero frontier, and then there's like a tier one, which is everyone else. Tier zero is extremely competitive. It's effectively now a three-horse race between Gemini, Anthropic, and OpenAI.[00:10:21] swyx: I would say that people are still holding out a candle for xAI. xAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics, so it's actually quite hard to put them on there. As someone who also does charts, xAI is continually snubbed because they don't work well with the benchmarking people.[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why xAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily OpenAI. So we have some numbers and estimates. These are from Ramp. Estimates of OpenAI market share in [00:11:00] December 2023.[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked Harrison Chase on the LangChain episode, it was true. And then Claude 3 launched in the middle of this year. I think Claude 3 launched in March, and Claude 3.5 Sonnet was in June-ish.[00:11:23] swyx: And you can start seeing the market share shift towards Anthropic very, very aggressively. The more recent one is Gemini.
So if I scroll down a little bit, this is an even more recent dataset. So Ramp's dataset ends in September 2024. Gemini has basically launched a price war at the low end, with Gemini Flash being basically free for personal use.[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in, like, third place, um, fourth place, depending how you count.[00:11:58] swyx: And so they're going after [00:12:00] the lower tier first, and then, you know, maybe the upper tier later. But yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests, the small, cheap requests that are naturally going to be more numerous.[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like, basically from 2023 going into 2024, OpenAI has gone from 95 percent market share to somewhere between 50 and 75 percent market share.[00:12:29] Alessio: Yeah, I'm really curious how Ramp does the attribution to the model,[00:12:32] Alessio: if it's API, because I think it's all credit card spend. Well, the credit card doesn't say. Maybe when they do expenses, they upload the PDF. But yeah, the Gemini thing I think makes sense. I think that was one of my main 2024 takeaways: that, like, the best small model companies are the large labs, which is not something I would have thought. I would have thought the open source kind of long tail would be the small model players.[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? Like, so small model here for Gemini is 8B, [00:13:00] right? Uh, mini.
We don't know what the small model size is, but yeah, it's probably in the double digits, or maybe single digits, but probably double digits. The open source community has kind of focused on the one-to-three-B size.[00:13:11] swyx: Mm-hmm. Yeah. Maybe[00:13:12] swyx: zero, maybe 0.5B, that's Moondream, and if that's small for you, then that's great. It makes sense that we have a range for small now, which is, like, maybe one to five B. Yeah, I'll even put that at the high end. And so this includes Gemma from Google as well, but also includes the Apple foundation models, which I think are 3B.[00:13:32] Alessio: Yeah. No, that's great. I mean, I think at the start, small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.[00:13:43] swyx: Yeah, we can keep going. This is a slide where I slightly disagree with Sarah. She's pointing to the Scale SEAL leaderboard. I think the researchers that I talked with at NeurIPS were kind of positive on this, because basically you need private test [00:14:00] sets to prevent contamination.[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private-test-set leaderboard. Llama 405B does well compared to Gemini and GPT-4o. And I think that's good. I would say that, you know, it's good to have an open model that is that big, that does well on those metrics.[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the Artificial Analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on, like, one node of H100s. Cerebras will be happy to tell you they can serve 405B on their super large chips.[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So is 405B really that relevant?
Like, I think most people are basically saying that they only use 405B as a teacher model to distill down to something. Even Meta is doing it. So when Llama 3.3 [00:15:00] launched, they only launched the 70B, because they used 405B to distill the 70B.[00:15:03] swyx: So I don't know if, like, open source is keeping up. I think the open source industrial complex is very invested in telling you that the gap is narrowing. I kind of disagree. I think that the gap is widening with O1. I think there are very, very smart people trying to narrow that gap, and they should.[00:15:22] swyx: I really wish them success, but you cannot use a benchmark that is nearing 100 as your saturation chart and say, look, the distance between open source and closed source is narrowing. Of course it's going to narrow, because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?[00:15:38] swyx: Probably not for O1 for a while. And it's really up to the open source guys to figure out if they can match O1 or not.[00:15:46] Alessio: I think inference-time compute is bad for open source, just because, you know, Zuck can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to, like, actually keep up on that axis.[00:15:59] Alessio: Big, big business [00:16:00] model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers. But obviously the big labs have a lot of advantage, because, like, it's not a static artifact that you're putting the compute in. You're kind of doing that still, but then you're putting a lot of compute at inference too.[00:16:17] swyx: Yeah, yeah, yeah. Um, I mean, Llama 4 will be reasoning oriented. We talked with Thomas Scialom. Um, kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the Meta AI guy at NeurIPS, and, um, yeah, we're going to coordinate something for Llama 4.
Yeah, yeah,[00:16:32] Alessio: and our friend, yeah.[00:16:33] Alessio: Clara Shih just joined to lead the business agent side. So I'm sure we'll have her on in the new year.[00:16:39] swyx: Yeah. So, um, my comment on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than 6.6 billion dollars for their fundraise. They wanted to raise, you know, higher, and they did not.[00:16:51] swyx: And what that means is basically, like, it's very convenient that we're not getting GPT-5, which would have been a larger pre-train that needed a lot of upfront money. And [00:17:00] instead we're converting fixed costs into variable costs, right, and passing it on effectively to the customer. And it's so much easier to take margin there, because you can directly attribute it, like, oh, you're using this more, therefore you pay more of the cost, and I'll just slap a margin in there.[00:17:12] swyx: So, like, that lets you control your gross margin and tie your spend, or your sort of inference spend, accordingly. And it's just really interesting that this change in the inference paradigm has arrived exactly at the same time that the funding environment for pre-training is effectively drying up, kind of.[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so, like, they would have noticed this, but, um, it's just interesting.[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year. Yeah. And the big thing was, like, the Mistral price fights, you know. And I think now it's almost like there's nowhere to go. Like, you know, Gemini Flash is basically giving it away for free.[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down [00:18:00] some of the compute to the customer. I think they're going to[00:18:02] swyx: keep going. I think the $2,000 tier will come.[00:18:05] Alessio: Yeah, I know.
Totally. I mean, next year, the first thing I'm doing is signing up for Devin, signing up for ChatGPT Pro,[00:18:12] Alessio: just to try. I just want to see, what does it look like to spend a thousand dollars a month on AI?[00:18:17] swyx: Yes. Yes. I think if your job is, at least, AI content creator, or VC, or, you know, someone whose job it is to stay on top of things, you should already be spending, like, a thousand dollars a month on stuff.[00:18:28] swyx: And then, obviously, easy to spend, hard to use. You have to actually use it. The good thing is that actually Google lets you do a lot of stuff for free now. So, like, Deep Research, that they just launched, uses a ton of inference, and it's free while it's in preview.[00:18:45] Alessio: Yeah. They need to put that in Lindy.[00:18:47] Alessio: I've been using Lindy lately. I've built a bunch of things since they added the flow editor, because I like the new thing. It's pretty good. I even did a phone call assistant. Um, yeah, they just launched Lindy Voice. Yeah, I think once [00:19:00] they get advanced-voice-mode-like capability; today it's still, like, speech-to-text, you can kind of tell.[00:19:06] Alessio: Um, but it's good for, like, reservations and things like that. So I have a meeting prepper thing. And so[00:19:13] swyx: it's good. Okay. I feel like we've covered a lot of stuff. Uh, you know, I think we will go over the individual talks in a separate episode. I don't want to take too much time with this stuff, but suffice to say that there is a lot of progress in each field.[00:19:28] swyx: Uh, we covered vision. Basically, this was all the audience voting for what they wanted, and then I just invited the best people I could find in each area, especially agents. Um, Graham, who I talked to at ICML in Vienna, he is currently still number one.
It's very hard to stay on top of SWE-Bench.[00:19:45] swyx: OpenHands is currently still number one on SWE-Bench Full, which is the hardest one. He had very good thoughts on agents, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. But he had [00:20:00] thoughts on, like, eight of what the frontier problems to solve in agents are.[00:20:03] swyx: And so I'll highlight that talk as well.[00:20:05] Alessio: Yeah. The number six, which is having agents learn more about the environment, has been super interesting to us as well, just to think through. Because, yeah, how do you put an agent in an enterprise, where most things in an enterprise have never been public, you know, a lot of the tooling, like the code bases and things like that?[00:20:23] Alessio: So, yeah, there's no indexing and RAG. Well, yeah, but it's more like, you can't really RAG things that are not documented, but people know them based on how they've been doing it, you know. So I think there's almost this, like, institutional knowledge. Yeah, the boring word for it is kind of, like, business process extraction.[00:20:38] Alessio: Yeah, yeah, I see. It's like, how do you actually understand how these things are done? Um, and I think today the problem is that the agents most people are building are good at following instructions, but are not as good at, like, extracting them from you. Um, so I think that will be a big unlock. Just to touch quickly on the Jeff Dean thing,[00:20:55] Alessio: I thought it was pretty, I mean, we'll link it in the [00:21:00] show notes, but I think the main focus was, like, how do you use ML to optimize the systems, instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKV on the podcast before, like, he's doing a lot of that with Featherless AI.[00:21:12] swyx: Everyone is.
I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, it makes sense.[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.[00:21:30] swyx: Jeff is, Jeff's talk was more, it wasn't focused on Gemini.[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems, and then systems feed back into ML. And I think this ties in with Loubna's talk[00:21:45] Synthetic Data and Future Trends[00:21:45] swyx: on synthetic data, where it's basically the story of bootstrapping of humans and AI in AI research, or AI in production.[00:21:53] swyx: So her talk was on synthetic data, where, like, how much synthetic data has grown in 2024 on the pre-training side, the post-training side, [00:22:00] and the eval side. And I think Jeff then also extended it basically to chips, uh, to chip design. So he spent a lot of time talking about AlphaChip. And most of us in the audience are like, we're not working on hardware, man.[00:22:11] swyx: Like, you guys are great. TPU is great. Okay, we'll buy TPUs.[00:22:14] Alessio: And then there was the earlier talk. Yeah. But, and then we have, uh, I don't know if we're calling them essays. What are we calling these? But[00:22:23] swyx: for me, it's just, like, bonus for Latent Space supporters, because I feel like they haven't been getting anything.[00:22:29] swyx: And then I wanted a higher-frequency way to write stuff. Like, that one I wrote in an afternoon. I think basically we now have an answer to what Ilya saw. It's one year since the blip. And we know what he saw in 2014. We know what he said in 2024. We think we know what he sees in 2024.
He gave some hints, and then we have vague indications of what he saw in 2023.[00:22:54] swyx: So that was the, oh, and then 2016 as well, because of this lawsuit with Elon. OpenAI [00:23:00] is publishing emails from Sam, like, his personal text messages to Shivon Zilis, or whatever. So, like, we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient in 2016 to write that.[00:23:16] swyx: And so, like, it is basically his insights. It's him and Greg basically just kind of driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, like, we see the path here.[00:23:30] Alessio: Yeah, and it's funny, yeah, they even mention, you know, we can only train on 1v1 Dota. We need to train on 5v5, and that takes too many GPUs.[00:23:37] Alessio: Yeah,[00:23:37] swyx: and at least for me, I can speak for myself, like, I didn't see the path from Dota to where we are today. I think even, maybe if you ask them, like, they wouldn't necessarily draw a straight line. Yeah,[00:23:47] Alessio: no, definitely. But I think, like, that was the whole idea of, almost, like, the RL. And we talked about this with Nathan on his podcast.[00:23:55] Alessio: It's like, with RL, you can get very good at specific things, but then you can't really, like, generalize as much. And I [00:24:00] think the language models are like the opposite, which is, like, you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.[00:24:08] Alessio: And we'll talk about the OpenAI reinforcement fine-tuning announcement too, and all of that. But yeah, I think, like, scale is all you need. That's kind of what Ilya will be remembered for. And I think, just maybe to clarify on, like, the "pre-training is over" thing that people love to tweet.
I think the point of the talk was, like, everybody, we're scaling these chips, we're scaling the compute, but, like, the second ingredient, which is data, is not scaling at the same rate.[00:24:35] Alessio: So it's not necessarily that pre-training is over. It's kind of like, what got us here won't get us there. In his email, he predicted, like, 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.[00:24:52] Alessio: Exactly. And Moore's law is, like, 2x. So it's, like, you know, much faster than that. And yeah, I like the "fossil fuel of AI" [00:25:00] analogy. It's kind of like, you know, the low-background tokens thing. So the OpenAI reinforcement fine-tuning is basically, like, instead of fine-tuning on data, you fine-tune on a reward model.[00:25:09] Alessio: So it's basically, like, instead of being data driven, it's task driven. And I think people have tasks to do; they don't really have a lot of data. So I'm curious to see how that changes how many people fine-tune, because I think this is what people run into. It's like, oh, you can fine-tune Llama, and it's like, okay, where do I get the data[00:25:27] Alessio: to fine-tune it on, you know? So it's great that we're moving the thing. And then I really liked that he had this chart where, like, you know, the brain mass and the body mass thing: basically, mammals scale linearly by brain and body size, and then humans kind of broke off the slope. So it's almost like maybe the mammal slope is, like, the pre-training slope,[00:25:46] Alessio: and then the post-training slope is, like, the human one.[00:25:49] swyx: Yeah. I wonder what the, I mean, we'll know in 10 years, but I wonder what the y-axis is for Ilya's SSI. We'll try to get them on.[00:25:57] Alessio: Ilya, if you're listening, you're [00:26:00] welcome here.
Yeah, and then he had, you know, what comes next: agents, synthetic data, inference compute. I thought all of that was, like, fine.[00:26:05] Alessio: I don't[00:26:05] swyx: think he was dropping any alpha there. Yeah, yeah, yeah.[00:26:07] Alessio: Yeah. Any other NeurIPS highlights?[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that, uh, my friend Yi made this, like, nice little paper.[00:26:20] Alessio: Yeah, that was really nice.[00:26:20] swyx: Uh, of, like, all the, she called it must-read papers of 2024.[00:26:26] swyx: So I laid out some of these at NeurIPS, and it was just gone. Like, everyone just picked it up, because people are dying for, like, a little guidance and visualization. And so, uh, I thought it was really super nice that we got that.[00:26:38] Alessio: Should we do a Latent Space book for each year? Uh, I thought about it. For each year we should.[00:26:42] Alessio: Coffee table book. Yeah. Yeah. Okay. Put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, generalist, our Jamie. You need to[00:26:52] swyx: pull up more things. One thing I saw that, uh, okay, one fun one, and then one [00:27:00] more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.[00:27:06] swyx: This is "Secret Collusion among AI Agents: Multi-Agent Deception via Steganography." I tried to go to NeurIPS in order to find these kinds of papers, because, like, NeurIPS this year has a lottery system. A lot of people actually even go and don't buy tickets, because they just go and attend the side events.[00:27:22] swyx: And then also the people who go end up crowding around the most popular papers, which you already know and already read before you showed up to NeurIPS. So the only reason you go there is to talk to the paper authors. But there's something like 10,000 other papers.
All these papers out there that, you know, are just people's work that they did all year, and they failed to get attention for one reason or another.[00:27:42] swyx: And this was one of them. Uh, it was, like, all the way at the back. And this is a DeepMind paper that actually focuses on collusion between AI agents by hiding messages in the text that they generate. Uh, so that's what steganography is. So a very simple example would be the first letter of every word.[00:27:57] swyx: If you pick that out, you know, it sends a [00:28:00] different message than the surface text. But something I've always emphasized is, to LLMs, we read left to right. LLMs can read up, down, sideways, you know, in random character order, and it's the same to them as it is to us. So if we were ever to get, you know, self-motivated, unaligned LLMs that were trying to collaborate to take over the planet,[00:28:19] swyx: this would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for example, for cipher encoding: GPT-2, Llama 2, Mixtral, GPT-3.5 show zero capability, and then suddenly GPT-4 can do it.[00:28:40] swyx: And this is the kind of Jason Wei-type emergence property that people look for. I think what made this paper stand out as well is that he developed the benchmark for steganographic collusion, and he also focused on Schelling-point collusion, which is very low coordination. For agreeing on an encoding-decoding format, you kind of need to have some [00:29:00] agreement on that.[00:29:00] swyx: But Schelling point means very, very low or almost no coordination. So, for example, if the only message I give you is "meet me in New York," and you're not aware of where or when, you would probably meet me at Grand Central Station.
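The first-letter scheme described here can be sketched in a few lines. This is a toy acrostic encoder/decoder for illustration only: the cover vocabulary is made up, and the channels studied in the paper are far subtler than word initials.

```python
# Toy acrostic steganography: hide a message in the first letter of each word.
# Illustrative sketch, not the paper's method.

COVER = {  # hypothetical cover vocabulary, one innocuous word per letter
    "h": "handle", "i": "inputs", "t": "then", "e": "emit",
    "a": "answers", "w": "with", "l": "logging",
}

def encode(secret: str) -> str:
    """Build innocuous-looking text whose word initials spell the secret."""
    return " ".join(COVER[c] for c in secret.lower())

def decode(cover_text: str) -> str:
    """Recover the secret from the first letter of each word."""
    return "".join(w[0] for w in cover_text.split())

msg = encode("hi")
print(msg)          # handle inputs
print(decode(msg))  # hi
```

The decoder works on any text, which is the point swyx makes: a reader scanning left to right sees only the cover text, while an agent that has agreed on the channel reads the hidden message.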
Grand Central Station is a Schelling point.[00:29:16] swyx: And it's probably sometime during the day; the Schelling point of New York is Grand Central. To that extent, Schelling points for steganography are things like the common encoding methods that we talked about. It will be interesting at some point in the future, when we are worried about alignment.[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.[00:29:36] Alessio: I think that's, like, one of the hardest things about NeurIPS. It's, like, the long tail.[00:29:41] swyx: I found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs, and the guy with the GPUs is[00:29:57] Alessio: talking[00:29:57] swyx: about economics instead. [00:30:00] That was the sort of fun one. The trend I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just datasets.[00:30:12] swyx: This is what all the grad students are working on. So, like, there was a datasets track, and then I was looking around, like, you don't need a datasets track, because every paper is a datasets paper. And datasets and benchmarks are kind of flip sides of the same thing. So yeah, cool. Yeah, if you're a grad student, you're GPU poor, you kind of work on that.[00:30:30] swyx: And then the sort of big models, people walk around and pick the ones that they like, and then they use them in their models. And that's kind of how it develops. I feel like, um, like, last year you had people like Haotian Liu, who worked on LLaVA, which is: take Llama and add vision.[00:30:47] swyx: And then obviously xAI hired him, and he added vision to Grok. Now he's the vision Grok guy.
This year, I don't think there was any of those.[00:30:55] Alessio: What were the most popular, like, orals? Last year it was, like, the [00:31:00] Monarch Mixer, I think, was the most attended. Yeah, uh, I need to look it up. Yeah, I mean, if nothing comes to mind, that's also kind of an answer in a way.[00:31:10] Alessio: But I think last year there was a lot of interest in, like, furthering models and, like, different architectures and all of that.[00:31:16] swyx: I will say that I felt the oral picks this year were not very good. Either that, or maybe it's just a reflection of how I have changed in terms of how I view papers.[00:31:29] swyx: So, like, in my estimation, two of the best papers this year for datasets are DataComp and RefinedWeb, or FineWeb. These are two actually industrially used papers, not highlighted for awards. I think DCLM got a spotlight; FineWeb didn't even get the spotlight. So, like, it's just that the picks were different.[00:31:48] swyx: But one thing that does get a lot of play, that a lot of people are debating, is the role of schedules. This is the schedule-free optimizer paper from Meta, from Aaron Defazio. And this [00:32:00] year in the ML community, there's been a lot of chat about Shampoo, SOAP, all the bathroom amenities for optimizing your learning rates.[00:32:08] swyx: And, uh, most people at the big labs who I asked about this, um, say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular.[00:32:19] Alessio: 4 Wars of AI recap maybe, just quickly. Um, where do you want to start? Data?[00:32:26] swyx: So, to remind people, this is the Four Wars piece that we did as one of our earlier recaps of this year.[00:32:31] swyx: And the belligerents are on the left: journalists, writers, artists, anyone who owns IP, basically. New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George R. R. Martin.
Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing OpenAI, basically. I actually wanted to get a snapshot of all the lawsuits.[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Loubna's talk, [00:33:00] really showing how much synthetic data has come along this year. I think there was a bit of a fight between Scale AI and the synthetic data community, because Scale AI published a paper saying that synthetic data doesn't work. Surprise, surprise: Scale AI is the leading vendor of non-synthetic data.[00:33:17] Alessio: Only cage-free annotated data is useful.[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much of a debate anymore that synthetic data, at least for the reasons laid out in Loubna's talk, makes sense.[00:33:32] swyx: I don't know if you have any perspectives there.[00:33:34] Alessio: I think, again, going back to reinforcement fine-tuning, I think that will change a little bit how people think about it. Today people mostly use synthetic data for distillation and fine-tuning a smaller model from a larger model.[00:33:46] Alessio: I'm not super aware of how the frontier labs use it outside of the Rephrasing the Web thing that Apple also did. But yeah, I think it'll be useful.
I think whether or not that gets us the next big [00:34:00] step is maybe TBD. I think people love talking about data because it's a GPU-poor thing, you know? Synthetic data is something people can actually do, so they feel more opinionated about it compared to the optimizer stuff, which[00:34:17] swyx: they don't[00:34:17] Alessio: really work[00:34:18] swyx: on.[00:34:18] swyx: I think there is an angle to the reasoning synthetic data. This year, we covered in the paper club the STaR series of papers: STaR, Quiet-STaR, V-STaR. They basically help you synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT API that they announced, basically they're asking you to submit graders, or you choose from a preset list of graders.[00:34:49] swyx: It feels like a way to create valid synthetic data for them to fine-tune their reasoning paths on. So I think that is another angle where it starts to make sense. And [00:35:00] it's very funny that basically all the data quality wars between, let's say, the music industry, or the newspaper publishing industry, or the textbook industry, and the big labs are all about the pre-training era.[00:35:11] swyx: In the new era, the reasoning era, nobody has any problem with all the reasoning data, especially because it's all math and science oriented, with very reasonable graders. I think the more interesting next step is how it generalizes beyond STEM.[00:35:27] swyx: We've been using o1 for summarization and creative writing and instruction following, and I would say it's underrated. I started using o1 in our intro songs before we killed the intro songs, but it's very good at writing lyrics.
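The grader idea described here, sampling candidate reasoning traces and keeping only the ones a programmatic verifier accepts, is easy to sketch. This is an illustrative toy with arithmetic problems and a random "model" standing in for an LLM, not OpenAI's actual RFT pipeline:

```python
import random

# Toy sketch of verifier-filtered synthetic reasoning data (the STaR /
# rejection-sampling idea): sample candidate answers with a reasoning trace,
# keep only the ones a grader verifies, and use those as training data.
# The "model" here is a random guesser, purely for illustration.

def noisy_model(a, b):
    """Stand-in for a model proposing an answer plus a reasoning trace."""
    guess = a + b + random.choice([-1, 0, 0, 1])  # sometimes wrong
    return {"reasoning": f"{a} plus {b} equals {guess}", "answer": guess}

def grader(a, b, answer):
    """Programmatic verifier, in the spirit of the graders in OpenAI's RFT API."""
    return answer == a + b

def collect_sft_data(problems, samples_per_problem=8):
    data = []
    for a, b in problems:
        for _ in range(samples_per_problem):
            cand = noisy_model(a, b)
            if grader(a, b, cand["answer"]):  # keep only verified traces
                data.append((f"What is {a}+{b}?", cand["reasoning"]))
                break
    return data

if __name__ == "__main__":
    random.seed(0)
    print(collect_sft_data([(2, 3), (10, 5)]))
```

Every kept trace is correct by construction, which is why math and science, with their cheap graders, are the easy cases; generalizing beyond STEM means finding graders for fuzzier domains.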
You know, I think one of the o1 pro demos[00:35:46] swyx: that Noam was showing was that you can write an entire paragraph, or three paragraphs, without using the letter A, right?[00:35:53] Creative Writing with AI[00:35:53] swyx: So, literally, not even token-level but character-level manipulation and [00:36:00] counting and instruction following. It's very, very strong.[00:36:02] swyx: And so, no surprise that when I ask it to rhyme and to create song lyrics, it's going to do that much better than previous models. So I think it's underrated for creative writing.[00:36:11] Alessio: Yeah.[00:36:12] Legal and Ethical Issues in AI[00:36:12] Alessio: What do you think is the rationale that they're going to have in court when they don't show you the thinking traces of o1? They're getting sued for using other publishers' data, but then on their end they're like, well, you shouldn't be using my data to train your model.[00:36:29] Alessio: So I'm curious to see how that plays out.[00:36:32] swyx: Yeah, I mean, OpenAI has many ways to punish people without taking them to court. They already banned ByteDance for distilling their info. And so anyone caught distilling the chain of thought will just be disallowed to continue on the API.[00:36:44] swyx: And it's fine. It's no big deal. I don't even think that's an issue at all, just because the chains of thought are pretty well hidden. You have to work very, very hard to get them to leak. And even when the chain of thought leaks, you don't know if it's [00:37:00] real. The bigger concern is actually that there's not that much IP hiding behind it: that Cosine, which we talked to on Dev Day, can just fine-tune GPT-4o to beat o1. Claude Sonnet so far is beating o1, at least o1-preview, on coding tasks without being a reasoning model, same for Gemini Pro or Gemini 2.0. So how much is reasoning important? How much of a moat is there in all this proprietary training data that they've presumably accumulated?[00:37:34] swyx: Because even DeepSeek was able to do it, and they had, you know, two months' notice to do this, to do R1. So it's actually unclear how much moat there is. Obviously, if you talk to the Strawberry team, they'll be like, yeah, we spent the last two years doing this. So we don't know. And it's going to be interesting, because there'll be a lot of noise from people who say they have inference-time compute and actually don't, because they just have fancy chain of thought.[00:38:00][00:38:00] swyx: And then there are other people who actually do have very good chain of thought, and you will not see them on the same level as OpenAI, because OpenAI has invested a lot in building up the mythology of their team. Which makes sense. The real answer is somewhere in between.[00:38:13] Alessio: Yeah, I think that's kind of the main data war story developing.[00:38:18] The Data War: GPU Poor vs. GPU Rich[00:38:18] Alessio: GPU poor versus GPU rich. Where do you think we are? Again, going back to the small model thing, there was a time in which the GPU poor were the rebel faction, working on models that were open and small and cheap. And I think today people don't really care as much about GPUs anymore.[00:38:37] Alessio: You also see it in the price of GPUs. That market has kind of plummeted, because people don't just want to be GPU poor, they want to be GPU free. They just want to be completely without them. How do you think about this war?
[00:38:52] swyx: You can tell me about this, but I feel like the appetite for GPU-rich startups, where the funding plan is "we will raise 60 million and [00:39:00] we'll give 50 of that to NVIDIA", is gone, right? No one's pitching that. This was literally the exact plan of four or five startups I can name, this time last year. So yeah, GPU-rich startups are gone.[00:39:12] The Rise of GPU Ultra Rich[00:39:12] swyx: But I think the GPU ultra rich, the GPU ultra high net worth, is still going. We had Leopold's essay on the trillion-dollar cluster.[00:39:23] swyx: We're not quite there yet. We have multiple labs: xAI, very famously, with Jensen Huang praising them for being best boy number one in spinning up a 100,000 GPU cluster in like 12 days or something. Likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So the GPU ultra rich are going to keep doing that, because I think partially it's an article of faith now that you just need it.[00:39:46] swyx: You don't even know what you're going to use it for. You just need it. And it makes sense, especially if we're going into more researchy territory. Let's say 2020 to 2023 was [00:40:00] "scale big models" territory, because we had GPT-3 in 2020, and we were like, okay, we'll go from 175B to 1.8T. And that was GPT-3 to GPT-4. Okay, that's done. As far as everyone is concerned, Opus 3.5 is not coming out, GPT-4.5 is not coming out, and for Gemini 2, we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2-trillion-parameter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from training costs, from the amount of data, or at least the inference.[00:40:36] swyx: Would you pay 10x the price of GPT-4? Probably not.
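For intuition on that jump, the standard back-of-envelope approximation C ≈ 6·N·D (roughly 6 FLOPs per parameter per training token) puts rough numbers on it. The GPT-4 figures below are rumored estimates, not confirmed:

```python
def train_flops(params, tokens):
    """Standard back-of-envelope estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

if __name__ == "__main__":
    # GPT-3: 175B params on ~300B tokens (published).
    # "GPT-4 class": ~1.8T params on ~13T tokens (rumored, illustrative only).
    print(f"GPT-3 scale:  {train_flops(175e9, 300e9):.2e} FLOPs")
    print(f"GPT-4 scale:  {train_flops(1.8e12, 13e12):.2e} FLOPs")
```

That is roughly a 400x jump in training compute between the two eras, which is why another 10x in parameters on top looks unattractive on cost alone.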
You want something else that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.[00:40:47] Emerging Trends in AI Models[00:40:47] swyx: And when it's more researchy, you actually need more general-purpose compute to mess around with, at the exact same time that production deployments of the previous paradigm are still ramping up[00:40:59] swyx: pretty aggressively. So [00:41:00] it makes sense that the GPU rich are growing. We have now interviewed Together and Fireworks and Replicate. We haven't done Anyscale yet. But I think Amazon is maybe a sleeper one. At re:Invent, I wasn't expecting them to do so well, but they are now a foundation model lab.[00:41:18] swyx: It's kind of interesting. I think, you know, David Luan went over there and started just creating models.[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. A lot of AWS customers do these big reserved instance contracts, and now they've got to use their money. That's why so many startups[00:41:37] Alessio: get bought through the AWS marketplace: so they can bundle them together at preferred pricing.[00:41:42] swyx: Okay, so maybe the GPU super rich are doing very well, the GPU middle class is dead, and then the GPU[00:41:48] Alessio: poor. I mean, my thing is, everybody should just be GPU rich. There shouldn't really be... even for the GPU poorest, does it really make sense to be GPU poor?[00:41:57] Alessio: If you're GPU poor, you should just use the [00:42:00] cloud. And I think there might be a future, once we figure out what the size and shape of these models is, where the tinybox and these things come to fruition, where you can be GPU poor at home.
But today, why are you working so hard to get these models to run on very small clusters when it's so cheap to run them in the cloud?[00:42:21] Alessio: Yeah, yeah.[00:42:22] swyx: I think mostly people think it's cool, and people think it's a stepping stone to scaling up. So they aspire to be GPU rich one day, and they're working on new methods. Like Nous Research: probably the most deep-tech thing they've done this year is DisTrO, or whatever the new name is.[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I generally tend to de-emphasize that, historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF Compute launched their compute marketplace this year, and who's really using that?[00:42:53] swyx: It's a bunch of small clusters, disparate types of compute, and if you can make that [00:43:00] useful, then that will be very beneficial to the broader community. But maybe it's still not the source of frontier models; it's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I think this year there was a lot more on-device. I now have Apple Intelligence on my phone.[00:43:19] swyx: It doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multimodal.[00:43:25] Alessio: Yeah, the notification summaries are so-so in my experience.[00:43:29] swyx: Yeah, but they add juice to life. And then Gemini Nano is coming out in Chrome. It's still feature-flagged, but you can try it now if you use the alpha.[00:43:40] swyx: So I think we're getting the GPU-poor version of a lot of these things coming out, and I think it's quite useful. Windows as well: rolling out RWKV in sort of every Windows deployment is super cool.
And I think the last thing that I never put in this GPU-poor war, that I think I should now, [00:44:00] is the number of startups that are GPU poor but still scaling very well as wrappers on top of either a foundation model lab or a GPU cloud.[00:44:10] swyx: For GPU cloud, it would be Suno. Ramp rated Suno as one of the top-ranked, fastest-growing startups of the year. I think the last public number is zero to 20 million ARR this year, and Suno runs on Modal. So Suno itself is not GPU rich; they're just doing the training on Modal, who we've also talked to on the podcast.[00:44:31] swyx: The other one would be Bolt, a straight cloud wrapper. And again, they've now announced 20 million ARR, which is another step up from the 8 million that we put in the title. So yeah, it's crazy that all these GPU poors are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, are in the middle. The edges do well, because you're either close to the machines, and you're number one on the machines, or you're close to the customers, and you're number one on the customer side.[00:45:03] swyx: And the people who are in the middle, Inflection, Character, didn't do that great. I think Character did the best of all of them. You have a note in here that we apparently said that Character's price tag was[00:45:15] Alessio: 1B.[00:45:15] swyx: Did I say that?[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.[00:45:20] Alessio: Then they paid 2.7 billion.[00:45:22] swyx: Yeah.[00:45:22] Alessio: What do you pay for Noam? I don't know what the going rate was. Maybe the starting price was 1B. Whatever it was, it worked out for everybody involved.[00:45:31] The Multi-Modality War[00:45:31] Alessio: Multimodality war.
And this one, we never had text-to-video in the first version, which is now the hottest.[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now Veo 2 just came out yesterday. Sora was released last week. I've not tried Sora, because the day that I tried, it wasn't available.[00:45:54] swyx: I think it's generally available now; you can go to sora.com and try it.[00:45:58] Alessio: Yeah, they had the outage, which I [00:46:00] think also played a part into it. Small things. What's the other model that you posted today that was on Replicate? video-01-live?[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from MiniMax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.[00:46:20] swyx: I'm not sure it's actually Chinese, I don't know, don't hold me to that. Yep, China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.[00:46:37] Alessio: Uh, well, sure, in some ways. Yeah, I don't think there's much else there. I think on the image side, it's still open. Yeah, I mean,[00:46:46] swyx: ElevenLabs is now a unicorn. So basically, what is the multimodality war? The multimodality war is: do you specialize in a single modality, or do you have a God model that does all the modalities? So this is [00:47:00] definitely still going, in the sense that ElevenLabs is now a unicorn, Pika Labs is doing well (they launched Pika 2.0 recently), HeyGen I think has reached 100 million ARR, and AssemblyAI, I don't know, but they have billboards all over the place, so I assume they're doing very, very well. So these are all specialist models, specialist startups.
And then there are the big labs, who are doing the all-in-one play.[00:47:24] swyx: And here I would highlight Gemini 2 for having native image output. Have you seen the demos? It's hard to keep up. They literally launched this last week, and a shout-out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.[00:47:43] swyx: So they have voice, they obviously have image input, and they obviously can code-gen and all that. But the new one, which OpenAI and Meta both have but haven't launched yet, is image output. I think their demo video was that you put in an image of a [00:48:00] car, and you ask for minor modifications to that car, and it generates that modification exactly as you asked. So there's no need for the Stable Diffusion or ComfyUI workflow of mask here, then infill there, inpaint there, and all that stuff. That is small-model nonsense. Big-model people are like, huh, we got you: everything is in the transformer.[00:48:21] swyx: This is the multimodality war: do you bet on the God model, or do you string together a whole bunch of small models like a chump?[00:48:29] Alessio: Yeah, I don't know, man. That would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. They've been doing a ton on the product, I would say.[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, people say the Black Forest models are better than Midjourney on a pixel-by-pixel basis. But I think when you put it all together,[00:48:53] swyx: have you tried the same prompts on Black Forest?[00:48:55] Alessio: Yes.
But the problem is that on Black Forest it generates one image, and then you've got to [00:49:00] regenerate. You don't have all these UI things. It's a time issue, you know.[00:49:06] swyx: It's a Midjourney thing. Call the API four times.[00:49:08] Alessio: No, but then there are no variations. The good thing about Midjourney is you just go in there and you're cooking. There's a lot of stuff that just makes it really easy, and I think people underestimate that. It's not really a skill issue, because I'm paying Midjourney; it's a Black Forest skill issue, because I'm not paying them, you know?[00:49:24] Alessio: Yeah.[00:49:25] swyx: So, okay, this is a UX thing, right? You understand that, at least we think, Black Forest should be able to do all that stuff. I will also shout out that Recraft has come out on top of the image arena that Artificial Analysis has done, apparently taking Flux's place. Is this still true?[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them, I think, in one of the early AI News issues of the year. And they have launched a whole bunch of arenas. They're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft V3 is now beating Flux 1.1, which is very surprising, [00:50:00] because Flux and Black Forest Labs are the old Stable Diffusion crew who left Stability after the management issues.[00:50:06] swyx: So Recraft has come from nowhere to be the top image model. Very, very strange. I would also highlight that Grok has now launched Aurora. There are interesting dynamics between Grok and Black Forest Labs, because Grok's images were originally launched in partnership with Black Forest Labs, as a thin wrapper.[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own.
I don't know; there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multimodality war. I would say that so far, the small, dedicated model people are winning, because they are just focused on their tasks.[00:50:42] swyx: But the big-model people are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request an edit and it does it, that's how AI should work. Not a whole bunch of complicated steps. So it really is something. And one frontier that we haven't [00:51:00] seen this year: obviously video has done very well, and it will continue to grow.[00:51:03] swyx: We only have Sora Turbo today, but at some point we'll get full Sora. Or at least the Hollywood labs will get full Sora. We haven't seen video-to-audio, or video synced with audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there are still maybe five more years of video left to actually be solved.[00:51:23] swyx: I would say that Gemini's approach, or DeepMind's approach, to video seems a lot more fully fledged than OpenAI's. Because if you look at the ICML recap that I published, that so far nobody has listened to... um, people have listened to it; it's just a definitely different audience.[00:51:43] swyx: It's only seven hours long. Why are people not listening? It's got everything in it. So, DeepMind is working on Genie. They also launched Genie 2 and VideoPoet. So they have maybe four years' advantage on world modeling that OpenAI does not have, because OpenAI basically only started [00:52:00] diffusion transformers last year, when they hired Bill Peebles.[00:52:03] swyx: So DeepMind has a bit of an advantage here, I would say. On Veo 2: one, they cherry-pick their videos.
So obviously it looks better than Sora. But the reason I would believe that Veo 2, when it's fully launched, will do very well is that they have all this background work in video that they've done for years.[00:52:22] swyx: At last year's NeurIPS I was already interviewing some of their video people. I forget their model name, but dedicated fans can go to NeurIPS 2023 and see that paper.[00:52:32] Alessio: And then last but not least, the LLM OS. We renamed it to RAG/Ops,[00:52:39] swyx: formerly known as the RAG/Ops War. I put the latest chart on the Braintrust episode.[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. The reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News, so I always put an essay inside of there, because Hacker News people like to read and not listen.[00:52:58] Alessio: So episode essays,[00:52:59] swyx: I remember publishing them separately. You say LangChain and LlamaIndex are still growing.[00:53:03] Alessio: Yeah, so I looked at the PyPI stats, you know. I don't care about stars; on PyPI you see actual downloads. Do you want to share your screen? Yes. I prefer to look at actual downloads, not stars on GitHub. So if you look at the last six months, LangChain is still growing. LlamaIndex is still growing. What I've basically seen is that things that have a commercial product, people are buying and sticking with, versus hopping between things. For example, CrewAI is not really growing as much.[00:53:38] Alessio: The stars are growing, if you look on GitHub, but the usage is kind of flat in the last six months. Have they done some[00:53:4
If you see this in time, join our emergency LLM paper club on the Llama 3 paper! For everyone else, join our special AI in Action club on the Latent Space Discord for a special feature with the Cursor cofounders on Composer, their newest coding agent! Today, Meta is officially releasing the largest and most capable open model to date, Llama 3.1 405B, a dense transformer trained on 15T tokens that beats GPT-4 on all major benchmarks. The 8B and 70B models from the April Llama 3 release have also received serious spec bumps, warranting the new label of Llama 3.1. If you are curious about the infra / hardware side, go check out our episode with Soumith Chintala, one of the AI infra leads at Meta. Today we have Thomas Scialom, who led post-training for Llama 2 and now Llama 3, so we spent most of our time on pre-training (synthetic data, data pipelines, scaling laws, etc.) and post-training (RLHF vs instruction tuning, evals, tool calling).
Synthetic data is all you need
Llama 3 was trained on 15T tokens, 7x more than Llama 2, with 4 times as much code and 30 different languages represented. But as Thomas beautifully put it: "My intuition is that the web is full of s**t in terms of text, and training on those tokens is a waste of compute." "Llama 3 post-training doesn't have any human-written answers there, basically... It's just leveraging pure synthetic data from Llama 2." While it is well speculated that the 8B and 70B were "offline distillations" of the 405B, there are a good deal more synthetic data elements to Llama 3.1 than expected.
The paper explicitly calls out:

* SFT for Code: 3 approaches to synthetic data for the 405B bootstrapping itself with code execution feedback, programming language translation, and docs backtranslation.
* SFT for Math: The Llama 3 paper credits the Let's Verify Step By Step authors, who we interviewed at ICLR.
* SFT for Multilinguality: "To collect higher quality human annotations in non-English languages, we train a multilingual expert by branching off the pre-training run and continuing to pre-train on a data mix that consists of 90% multilingual tokens."
* SFT for Long Context: "It is largely impractical to get humans to annotate such examples due to the tedious and time-consuming nature of reading lengthy contexts, so we predominantly rely on synthetic data to fill this gap. We use earlier versions of Llama 3 to generate synthetic data based on the key long-context use-cases: (possibly multi-turn) question-answering, summarization for long documents, and reasoning over code repositories, and describe them in greater detail below."
* SFT for Tool Use: trained for Brave Search, Wolfram Alpha, and a Python interpreter (a special new ipython role) for single, nested, parallel, and multi-turn function calling.
* RLHF: DPO preference data was used extensively on Llama 2 generations. This is something we partially covered in RLHF 201: humans are often better at judging between two options (e.g. which of two poems they prefer) than at creating one from scratch. Similarly, models might not be great at creating text, but they can be good at classifying its quality.

Last but not least, Llama 3.1 received a license update explicitly allowing its use for synthetic data generation. Llama 2 was also used as a classifier for all pre-training data that went into the model: it labelled data both by quality, so that bad tokens were removed, and by type (e.g. science, law, politics) to achieve a balanced data mix.
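The DPO preference step above can be made concrete with a small numeric sketch. This is our own toy illustration, not Meta's training code: the function name `dpo_loss`, the example log-probabilities, and the β = 0.1 value are all assumptions, but the formula is the standard per-pair DPO objective.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))."""
    # Implicit reward margin: how much more the policy up-weights the
    # chosen answer (relative to the frozen reference) than the rejected one.
    margin = ((policy_logp_chosen - ref_logp_chosen)
              - (policy_logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Hypothetical sequence log-probs: a policy that prefers the chosen answer
# gets a low loss...
good = dpo_loss(-10.0, -14.0, -12.0, -12.0)
# ...while a policy that prefers the rejected answer gets a higher one.
bad = dpo_loss(-14.0, -10.0, -12.0, -12.0)
print(round(good, 3), round(bad, 3))  # prints: 0.513 0.913
```

Note that no human-written target text appears anywhere, only a judgment of which of two model generations is better, which is exactly why preference data over Llama 2 generations is comparatively cheap to collect.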
Tokenizer size matters

The token vocabulary of a model is the collection of all tokens the model uses. Llama 2 had a 32,000-token vocab, GPT-4 has 100,000, and 4o went up to 200,000. Llama 3 went up 4x to 128,000 tokens. You can find the GPT-4 vocab list on Github. This is something that people gloss over, but there are many reasons why a large vocab matters:

* More tokens allow the model to represent more concepts, making it better at understanding nuance.
* The larger the tokenizer, the fewer tokens you need for the same amount of text, extending the perceived context size. In Llama 3's case, that's ~30% more text due to the tokenizer upgrade.
* With the same amount of compute you can train more knowledge into the model, as you need fewer steps.

The smaller the model, the larger the impact the tokenizer size will have on it. You can listen at 55:24 for a deeper explanation.

Dense models = 1 Expert MoEs

Many people on X asked "why not MoE?", and Thomas' answer was pretty clever: dense models are just MoEs with 1 expert :)

[00:28:06]: I heard that question a lot, different aspects there. Why not MoE in the future? The other thing is, I think a dense model is just one specific variation of the model for a hyperparameter for an MoE with basically one expert. So it's just a hyperparameter we haven't optimized a lot yet, but we have some stuff ongoing, and that's a hyperparameter we'll explore in the future.

Basically… wait and see!

Llama 4

Meta already started training Llama 4 in June, and it sounds like one of the big focuses will be around agents. Thomas was one of the authors behind GAIA (listen to our interview with Thomas in our ICLR recap) and has been working on agent tooling for a while with things like Toolformer. Current models have "a gap of intelligence" when it comes to agentic workflows, as they are unable to plan without the user relying on prompting techniques and loops like ReAct and Chain of Thought, or frameworks like Autogen and CrewAI. That may be fixed soon?
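The tokenizer-size point above can be sketched in a few lines. This is a toy illustration of ours, not Llama's actual tokenizer (real tokenizers use byte-pair encoding, and both vocabularies below are made up): a greedy longest-match encoder shows how adding multi-character tokens shrinks the token count for the same text.

```python
def encode(text, vocab):
    """Greedy longest-match tokenization against a fixed vocabulary."""
    max_len = max(map(len, vocab))
    tokens, i = [], 0
    while i < len(text):
        # Try the longest possible match first, shrinking down to 1 char.
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: emit as-is
            i += 1
    return tokens

small_vocab = set("abcdefghijklmnopqrstuvwxyz ")            # character-level
large_vocab = small_vocab | {"the ", "large", "r ", "token", "izer"}

text = "the larger tokenizer"
print(len(encode(text, small_vocab)))  # prints: 20
print(len(encode(text, large_vocab)))  # prints: 5
```

Same text, 4x fewer tokens: every forward pass covers more words, so the effective context window stretches further and each training step packs in more text, which matters most for small models, where the embedding table is a larger share of total parameters.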
Our 173rd episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris). See full episode notes here. Read our text newsletter and comment on the podcast at https://lastweekin.ai/ If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

In this episode of Last Week in AI, we explore the latest advancements and debates in the AI field, including Google's release of Gemini 1.5, Meta's upcoming LLaMA 3, and Runway's Gen-3 Alpha video model. We discuss emerging AI features, legal disputes over data usage, and China's competition in AI. The conversation spans innovative research developments, cost considerations of AI architectures, and policy changes like the U.S. Supreme Court striking down Chevron deference. We also cover U.S. export controls on AI chips to China, workforce development in the semiconductor industry, and Bridgewater's new AI-driven financial fund, evaluating the broader financial and regulatory impacts of AI technologies.
Timestamps + links:
(00:00:00) Intro / Banter
Tools & Apps
(00:03:24) Google opens up Gemini 1.5 Flash, Pro with 2M tokens to the public
(00:08:47) Meta is about to launch its biggest Llama model yet — here's why it's a big deal
(00:12:38) Runway's Gen-3 Alpha AI video model now available – but there's a catch
(00:16:28) This is Google AI, and it's coming to the Pixel 9
(00:17:30) AI Firm ElevenLabs Sets Audio Reader Pact With Judy Garland, James Dean, Burt Reynolds and Laurence Olivier Estates
(00:20:06) Perplexity's ‘Pro Search' AI upgrade makes it better at math and research
(00:23:12) Gemini's data-analyzing abilities aren't as good as Google claims
Applications & Business
(00:26:38) Quora's Chatbot Platform Poe Allows Users to Download Paywalled Articles on Demand
(00:32:04) Huawei and Wuhan Xinxin to develop high-bandwidth memory chips amid US restrictions
(00:34:57) Alibaba's large language model tops global ranking of AI developer platform Hugging Face
(00:39:01) Here comes a Meta Ray-Bans challenger with ChatGPT-4o and a camera
(00:43:35) Apple's Phil Schiller is reportedly joining OpenAI's board
(00:47:26) AI Video Startup Runway Looking to Raise $450 Million
Projects & Open Source
(00:48:10) Kyutai Open Sources Moshi: A Real-Time Native Multimodal Foundation AI Model that can Listen and Speak
(00:50:44) MMEvalPro: Calibrating Multimodal Benchmarks Towards Trustworthy and Efficient Evaluation
(00:53:47) Anthropic Pushes for Third-Party AI Model Evaluations
(00:57:29) Mozilla Llamafile, Builders Projects Shine at AI Engineers World's Fair
Research & Advancements
(00:59:26) Researchers upend AI status quo by eliminating matrix multiplication in LLMs
(01:05:55) AI Agents That Matter
(01:12:09) WARP: On the Benefits of Weight Averaged Rewarded Policies
(01:17:20) Scaling Synthetic Data Creation with 1,000,000,000 Personas
(01:24:16) Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization
Policy & Safety
(01:26:32) With Chevron's demise, AI regulation seems dead in the water
(01:33:40) Nvidia to make $12bn from AI chips in China this year despite US controls
(01:37:52) Uncle Sam relies on manual processes to oversee restrictions on Huawei, other Chinese tech players
(01:40:57) U.S. government addresses critical workforce shortages for the semiconductor industry with new program
(01:42:42) Bridgewater starts $2 billion fund that uses machine learning for decision-making and will include models from OpenAI, Anthropic and Perplexity
(01:47:57) Outro
Theresa Szczurek is the Co-founder, COO, and Corporate Board Member of Radish Systems LLC. Radish Systems LLC is the worldwide leader in visual interactive voice response and visual live assistance solutions, improving the way businesses communicate with mobile device and browser users by sharing ‘Voice with Visuals.' In this episode, KJ and Theresa discuss how visual and voice technologies can significantly enhance customer service, reduce frustration, and improve efficiency across various sectors such as healthcare, emergency services, and public sector communications. Theresa also reflects on her passion for purposeful work, the importance of enthusiasm in driving change, and her vision of leveraging artificial intelligence to further transform customer interactions. Key Takeaways: 06:02 Challenges in Traditional Call Centers 18:31 Revolutionizing Customer Communication 26:30 Success Stories and Real-World Applications 37:56 Pursuit of Passionate Purpose Quote of the Show (27:00): “The way to communicate, the tools to communicate have changed a bit over time, but yet people, I believe, have an inherent need to connect and to communicate.” – Theresa Szczurek Join our Anti-PR newsletter where we're keeping a watchful and clever eye on PR trends, PR fails, and interesting news in tech so you don't have to. You're welcome. Want PR that actually matters? Get 30 minutes of expert advice in a fast-paced, zero-nonsense session from Karla Jo Helms, a veteran Crisis PR and Anti-PR Strategist who knows how to tell your story in the best possible light and get the exposure you need to disrupt your industry.
Click here to book your call: https://info.jotopr.com/free-anti-pr-eval

Ways to connect with Theresa Szczurek:
LinkedIn: https://www.linkedin.com/in/theresaszczurek/
Twitter: https://twitter.com/theresaszczurek?lang=en
Company Website: https://radishsystems.com
Company LinkedIn: https://www.linkedin.com/company/radish-systems-llc/

How to get more Disruption/Interruption:
Amazon Music - https://music.amazon.com/podcasts/eccda84d-4d5b-4c52-ba54-7fd8af3cbe87/disruption-interruption
Apple Podcast - https://podcasts.apple.com/us/podcast/disruption-interruption/id1581985755
Spotify - https://open.spotify.com/show/6yGSwcSp8J354awJkCmJlD

See omnystudio.com/listener for privacy information.
Will Meta's Llama 3 take the LLM crown? Is Microsoft's VASA-1 so good that it's dangerous? What does the new Atlas model from Boston Dynamics mean for the intersection of AI and robotics? We'll catch you up on all of that and more, with this week's AI News that matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Timestamps:
02:20 New developments in Meta's Llama.
05:03 Try Llama 3, important updates, December 2023.
08:30 Llama models range from 8B to 400B parameters.
12:43 Artificial intelligence creates hyperrealistic talking head videos.
14:55 VASA-1 technology creates videos from one image and voice.
19:19 Boston Dynamics introduces electric Atlas robot model.
23:09 Next iPhone may partner with Google or OpenAI.
26:40 Instagram post hints at Caitlin Clark's success.

Topics Covered in This Episode:
1. Introduction of Meta Llama 3
2. Microsoft's Introduction of VASA-1
3. Miscellaneous AI and Tech News

Keywords: Jordan Wilson, Meta, Llama 3, Microsoft, VASA-1, ChatGPT-5, Meta AI, open model, closed source models, OpenAI, Google, public use, Facebook, knowledge cutoff, download, fork, benchmarking, parameters, model, Mark Zuckerberg, CEO, Google Gemini, Anthropic's Claude 3 Opus, AI framework, hyperrealistic talking faces, lip syncing, AI-based movie making, Atlas robot, TED Talks.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
⚖️ xAI's first multimodal model with a unique dataset
☁️ Infini-Attention: Google's breakthrough gives LLMs limitless context
Show Notes: https://thisdayinai.com/bookmarks/51-ep59
SimTheory: https://simtheory.ai
This Day in AI Community: https://thisdayinai.com
CHAPTERS:
======
00:00 - Meta Llama 3: Chris's Cheese Song & Zuck's Silver Chain
04:07 - Everything Meta Announced with Llama 3: 8B & 70B Models with 400B Coming Soon
21:31 - Is Groq The Ideal API Host for Llama 3?
28:44 - Llama 3 Being Made Available via Meta Apps to 3B Users with Meta AI in Instagram, WhatsApp and via Web
38:01 - Llama 3 Licensing Must Include "Llama 3"
40:52 - Llama 3 400B Model Benchmarks While Still in Training & Potential Unlimited Context? & You Can Eat Llama
1:01:51 - OpenAI Assistants API v2 & Is Tooling Important to Win Devs? Google Gemini's Mistakes
1:15:24 - Conor Update: Using VASA-1 To Deep Fake a Record Label
1:23:07 - SimTheory Update: What's Next from SimTheory
Perry Marshall has one of the coolest brains + one of the biggest hearts of anyone that I've had the pleasure of spending time with. I've been a fan of Perry's work for several years… and if you haven't heard of him, here are some highlights: - Founder of the Evolution 2.0 Prize, world's largest award ($10M) for Origin of Life research with judges from Harvard, Oxford and MIT. - One of the most expensive business strategists in the world, with his work being featured in Harvard Business Review, and his Google book laid the foundations for the $400B digital advertising industry. - Originally an electrical engineer, Perry is incredibly passionate about curing cancer. For example, he's currently partnering with prominent people in the cancer space to develop a “Tissue Search Engine” that will allow us to detect the early evolution of cancer (at stage negative 1). (More info can be found on his site, reversingcancer.org) This is one of those “break your brain” kind of episodes, where you get to see inside the mind of a man that is solving some of the biggest problems around town. In this episode, you'll learn: - How you can approach solving the biggest problems in both your life (and the world) leveraging insights from a poem written 1,300 years ago - How to apply 80/20 thinking at a whole new level to find greater alignment in your life and business with your natural strengths - How you can use “Renaissance Time” to gain clarity and discernment (and why Perry has been using this practice without missing a day for the last 10.5 years) If you're the kind of person who wants to be a clearer thinker and learn how to solve problems more effectively, this is an episode that you WILL NOT want to miss. To learn more about Perry, visit https://gobeyondcurious.com/podcast/perry-marshall/ --- P.S. Huge shoutout to both Brian Kurtz and Bob Regnerus for helping to bring Perry on the show
Green energy investments now exceed oil and gas, with $1.7T directed towards renewables and $400B to solar. Today's podcast examines the surge in sustainable energy funding and deep-sea mining's impact on battery production. Subscribe to podcast updates: https://form.jotform.com/223614751580152 Ask Ric: https://www.thetayf.com/pages/ask-ric ----- Links from today's show: Advisor Toolkit: https://dacfp.com/toolkit/ Investor Toolkit: https://dacfp.com/spot_bitcoin_etf_toolkit/ Self-Care with Jean Edelman: http://www.selfcarewithjean.com/ ----- Follow Ric on social media: Facebook: https://www.facebook.com/RicEdelman Instagram: https://www.instagram.com/ric_edelman/ LinkedIn: https://www.linkedin.com/in/ricedelman/ X: https://twitter.com/ricedelman YouTube: https://www.youtube.com/@RicEdelman ----- Brought to you by: Invesco QQQ: https://www.invesco.com/qqq-etf/en/home.html Schwab: https://www.schwab.com/ Disclosure page: https://www.thetayf.com/pages/sponsorship-disclosure-fee -----
Tokenization will boost revenue in the alternatives industry by $400B. On today's podcast, I show you how blockchain technology is making such assets as real estate and crypto more accessible. Tune in now! Subscribe to podcast updates: https://form.jotform.com/223614751580152 Ask Ric: https://www.thetayf.com/pages/ask-ric ----- Links from today's show: How Tokenization Can Fuel a $400 Billion Opportunity in Distributing Alternative Investments to Individuals: https://www.jpmorgan.com/onyx/documents/how_tokenization_can_fuel_a_400_billion_opportunity_in_distributing_alternative_investments_to_individuals.pdf ----- Follow Ric on social media: Facebook: https://www.facebook.com/RicEdelman Instagram: https://www.instagram.com/ric_edelman/ LinkedIn: https://www.linkedin.com/in/ricedelman/ X: https://twitter.com/ricedelman YouTube: https://www.youtube.com/@RicEdelman ----- Brought to you by: Invesco QQQ: https://www.invesco.com/qqq-etf/en/home.html Schwab: https://www.schwab.com/ Disclosure page: https://www.thetayf.com/pages/sponsorship-disclosure-fee -----
Prepare for $400B in QT by March if the Regional Bank Bailout Expires! A "top man" at the Federal Reserve, specifically the Vice Chair for Supervision (a super duper important guy), is extremely serious about the Fed not renewing the regional bank bailout program that exploded onto the Fed's balance sheet last March to the tune of about $400 billion. If that's the case, then come March, all those dollars are going to have to be repaid. This will squeeze 3 months' worth of quantitative tightening into about 2 weeks come March. It is quite doubtful that the monetary system can handle such extreme pressure. Essentially, the repayment of the Bank Term Funding Program loans could very well be impossible without triggering yet another banking crisis. To find out more, click to watch the video now!

- Sign up for The End Game Investor! https://endgameinvestor.substack.com/
- To find out more about Fortuna Silver go to: https://fortunasilver.com/
- To join our free email list and never miss a video click here: https://arcadiaeconomics.com/email-signup/
- To get your paperback or audio copy of The Big Silver Short go to: https://arcadiaeconomics.com/thebigsilvershort/

Find Arcadia Economics content on these sites:
YouTube - https://www.youtube.com/user/ArcadiaEconomics
Rumble - https://rumble.com/c/ArcadiaEconomics
Bitchute - https://www.bitchute.com/channel/kgpeiwO1dhxX/
LBRY/Odysee - https://odysee.com/@ArcadiaEconomics:5

Listen to Arcadia Economics on your favorite Podcast platforms:
Spotify - https://open.spotify.com/show/75OH2PpgUpriBA5mYf5kyY
Apple - https://podcasts.apple.com/us/podcast/arcadia-economics/id1505398976
Google - https://podcasts.google.com/feed/aHR0cHM6Ly9teXNvdW5kd2lzZS5jb20vcnNzLzE2MTg5NTk1MjMzNDVz
Anchor - https://anchor.fm/arcadiaeconomics
Amazon - https://podcasters.amazon.com/podcasts

Follow Arcadia Economics on these social platforms:
Twitter - https://twitter.com/ArcadiaEconomic
Instagram - https://www.instagram.com/arcadiaeconomics/

To see the evidence of manipulative behavior in the silver market (as well as how you can send it to your local regulators and Congressional representatives) click here: https://arcadiaeconomics.com/cftc-complaint/
- To sign the petition to ban JP Morgan from having any involvement in the silver industry click here: https://www.ipetitions.com/petition/ban-jp-morgan-from-trading-gold-and-silver

#silver #silverprice

And remember to get outside and have some fun every once in a while! :)

This video was sponsored by Fortuna Silver, and Arcadia Economics does receive compensation. For our full disclaimer go to: https://arcadiaeconomics.com/disclaimer-fortuna-silver-mines/

Subscribe to Arcadia Economics on Soundwise
Michael Caffrey of Tales of Adventure joins James to talk through the challenges and opportunities of vending Magic the Gathering beyond the 30th year of the game.
Tina Maria White, a prodigious talent from the outset, entered the renowned HBCU, Florida A&M University, at just 16, eventually securing a B.S. degree in print journalism before diving into a commendable career that began with a standout role at Xerox Corporation. In this episode, we navigate through Tina's remarkable journey from winning “Rookie Salesperson of the Year” at Xerox to carving out spaces where few women, especially Black women, have ventured in the tire and energy sectors. White, the visionary founder of TINA'S TIRES and TINA'S Green Energy Solutions, didn't stop at business: she extended her passion into social impact with Black Girls Rolling Tires, her non-profit aimed at guiding young females from underserved communities into the $400B tire and electrification industries. Her ventures uniquely intertwine commerce and community, especially within her home city of Riviera Beach. Join us as Tina helps us explore opportunities in the energy sector. Contact: linkedin.com/in/tina-white-145256186 | Website: tinasgreenenergysolutions.com/ __________________________________________ NABWIC's Vision: The Vision of the National Association of Black Women in Construction (NABWIC) is to build lasting strategic partnerships with first-rate organizations and individuals that will provide ground-breaking and innovative solutions for black women in construction and their respective communities. | NABWIC.ORG
On today's episode of the Federal Drive with Tom Temin: What good is federal grant money if you can't get your hands on it? The Health and Human Services inspector general takes on a $400B program. Contractors prepare for shutdown or at least an austere October. Learn more about your ad choices. Visit megaphone.fm/adchoices
Managed care. It's a substantial part of the gigantic Medicare program. The Centers for Medicare & Medicaid Services figures half of Medicare enrollees get their health care from the Medicare Advantage program. In the words of the Health and Human Services Office of Inspector General, the growth of managed care has transformed how the government pays for and covers health care. This is care for 100 million people. That's why the IG has made managed care a top priority. For more on its new strategic plan, Federal Drive Host Tom Temin spoke with the Senior Adviser for Managed Care in the OIG's Office of Audit Services, Carolyn Kapustij. Learn more about your ad choices. Visit megaphone.fm/adchoices
How did Gary Tenkman go from a first-generation college graduate who felt like a fish out of water in finance to CEO of Ultimus Fund Solutions, an award-winning fund administrator with $475B+ in assets under administration? It sure wasn't easy. It took grit. It took being willing to get uncomfortable, fail, and learn (rinse, repeat). Grab your headphones and get ready for a backstory chock-full of lessons from places you'd least expect.

Listen in as Gary discusses:

* His backstory, from first-generation college student paying his own way to rocking the CEO seat
* The importance of being willing to get uncomfortable, fail, and learn (rinse, repeat)
* What it looks like to be an introvert in a very forward-facing CEO role
* How being an introvert is one of the things that makes him a great leader
* The power of strategic scaling: why he considers culture first and profit second when considering acquisitions

More about Gary Tenkman: Gary Tenkman is Chief Executive Officer of Ultimus Fund Solutions, the largest independent registered fund administrator and one of the largest private fund administrators in the U.S. As CEO, Gary works closely with the Ultimus executive team and board members, with responsibility for providing strategic growth, financial, and operational leadership for Ultimus and its over 900 employees. Gary has over 30 years of experience in the financial services industry. He joined Ultimus in 2014 as COO and was named CEO in 2019. Gary currently also serves on the Board of Governors for the Investment Company Institute.

Want More Help With Storytelling?
+ Subscribe to my newsletter to get a weekly email that helps you use your words to power your growth: https://www.stacyhavener.com/subscribe

Resources mentioned in this episode: Books: Get Out of My Life, but First Could You Drive Me & Cheryl to the Mall: A Parent's Guide to the New Teenager; Factfulness: Ten Reasons We're Wrong about the World--And Why Things Are Better Than You Think

- - -

Thank you to our Billion Dollar Backstory podcast sponsor: Ultimus Fund Solutions. You want to launch an interval fund (but don't know where to start). Ultimus has your back. Their in-depth guide answers your real questions.
SCOTUS strikes again, this time on Biden's student loan forgiveness plan. KCSB's Rosie Bultman has more.
Six conservatives on the Supreme Court struck down Biden's student loan program, which would have wiped out more than $400B in debt for nearly 43 million borrowers.
We're here with a HUGE student loan forgiveness update. Since President Biden was elected, those with student loans have been hoping and praying to have a sizable chunk of their debt wiped away. Tens of millions of borrowers would have been impacted, helping free up cash for those that need it most. But, on the other hand, taxpayers were staring at a $400B bill to forgive just a fraction of the student loan debt in America. The economic implications of student debt relief passing would have been huge, but a more significant economic impact could continue for borrowers. We've brought back Sarah Ewall-Wice, Political and Economics Reporter at CBS News, to give us a full student loan forgiveness update, break down what exactly happened in the Supreme Court, and what we must prepare for now that student debt relief is off the table. But, if you were banking on your loans being forgiven, fret not; a new plan may already be underway to give those with student debt another chance at redemption. Sarah walks through the legal battle the Biden Administration brought forth to get debt relief passed, what will happen to graduates now that the bill has come due, and whether or not defaults could increase across the board as a result. Dave and Sarah will also debate why a solution to rising college costs hasn't been conceived and what you should do NOW if you have student loan debt. In This Episode We Cover A federal student loan forgiveness update and what will happen next Why the Supreme Court decided to axe the debt relief plan (it's not what you think) Resuming student debt payments and what graduates need to do NOW Whether or not defaults across credit cards and mortgage payments could increase as a result The true cost of college and why unaffordable education MUST be tackled And So Much More! 
Links from the Show Find an Agent Find a Lender BiggerPockets Forums BiggerPockets Agent BiggerPockets Bootcamps Join BiggerPockets for FREE On The Market Join the Future of Real Estate Investing with Fundrise Connect with Other Investors in the “On The Market” Forums Subscribe to The “On The Market” YouTube Channel Dave's BiggerPockets Profile Dave's Instagram Hear Our Past Interview with Sarah on US Debt Connect with Sarah: CBS News Sarah's Instagram Sarah's Twitter Check the full show notes here: https://www.biggerpockets.com/blog/on-the-market-120 Interested in learning more about today's sponsors or becoming a BiggerPockets partner yourself? Email advertise@biggerpockets.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
Bob Dorsey is the co-founder and Vice Chairman of Ultimus Fund Solutions. In this episode, Bob and I discuss:

* His backstory: from butcher markets to financial markets
* Why FinTech is a people business
* How the root of solving problems doesn't differ based on your industry
* How to keep a “people first” culture as you scale
* An important lesson he learned about complacency from an old TV commercial
* Advice for people who are contemplating an angel investment venture or private equity

More About Bob Dorsey: Bob has over 35 years of mutual fund industry experience. Prior to co-founding Ultimus in 1999, he was President of Countrywide Fund Services, Inc. (formerly MGF Service Corp.), a registered transfer agent and mutual fund service provider. Prior to Countrywide, Bob was the Assistant Controller and Tax Manager at Carlisle Enterprises, Inc. He began his career in the tax department of Arthur Andersen's Cincinnati office. He is a graduate of Christian Brothers University and a Certified Public Accountant (Inactive).

Resources mentioned in this episode: Book: On Fire: The 7 Choices to Ignite a Radically Inspired Life by John O'Leary

- - -

Thank you to our Billion Dollar Backstory podcast sponsor: Ultimus Fund Solutions. You want to launch an interval fund (but don't know where to start). Ultimus has your back. Their in-depth guide answers your real questions.
Julie, in light of the Eurasian eagle owl Flaco escaping from the Central Park Zoo, talks about the magnificence of these animals. Topics include: The US Supreme Court is hearing the Biden v. Nebraska case today, which challenges the constitutionality of Biden's $400B federal student loan forgiveness program; the Department of Energy says a lab leak was the cause of the COVID-19 pandemic; Wagner Group, a Russian paramilitary group that has committed atrocities in Ukraine, is exercising more dominion in Africa, which is quickly becoming a new battleground for Russia and the West. See omnystudio.com/listener for privacy information.
Chevron, BP stop output at some Gulf of Mexico facilities ahead of hurricane. Dollar creates an 'untenable situation' for risk assets - Morgan Stanley. Biden student loan forgiveness program to cost over $400B, CBO says. Catch today's WSB article seekingalpha.com/wsb. Start Your Free Trial of Seeking Alpha Premium - https://bit.ly/3uX5TDY.Wall Street Breakfast September 27: Chevron, BP Stop Output At Some Gulf Of Mexico Facilities Ahead Of Hurricane Ian
THE THESIS: Saul Alinsky was an evil man who is, thank God, dead. But his plan for America marches on, now being pushed by the smarter, slicker, better funded, more connected and more evil WEF and their adherents. We can grade their progress at destroying America and, in so doing, make information into power by properly fighting against them.

THE SCRIPTURE & SCRIPTURAL RESOURCES: Saul Alinsky, the “community ‘organizing'” guru of Barack Hussein Obama and Hillary Rodham Clinton, dedicated his first book to Lucifer--yes, seriously. The Bible describes Satan time and again . . . Jesus taught often about Satan. Here's an academic's view of the devil. Ten Truths about a Liar; A Biblical Theology of Satan.

THE NEWS & COMMENT: While Alinsky didn't mention it explicitly, by seizing education the Alinskyites would take control of high offices and positions of influence, like the media. Controlling the media is an outcome of all of this, and we are living in a controlled media environment; here is but one example of that fact:

Senator Ron Johnson: “This is alarming. Twitter is censoring @EpochTimes under the guise of "unsafe" speech. Remember what happened the last time corporate media and big tech tried to censor my investigation on Hunter Biden corruption?”

ObamaCare was the moonshot of The Party--Alinsky's adherents--to take control of the human body by taking control of healthcare. The Great Reset, which was launched in full with the hoax response to the Covid Flu, has shown just how far The Party has come in taking healthcare. The good news is, they are not as far along as they would hope.

[AUDIO] - Wow! This question by @FoxPhil. In the previous question, @MarlaTellez asked how much of a role public pressure played in the decision not to mandate masks.
Ferrer basically answered “No, it was based on the data points I just came up with today.” [AUDIO] - A day before this press conference in which she wore a mask while alone in her office, Barbara Ferrer, the LA County “public” “health” boss, was apparently attending a massive sports event . . . unmasked, of course. Thank God Tucker Carlson is still able to speak truth on TV. [AUDIO] - Tucker Carlson asks about the forced injections. The same people who locked down churches, schools and small businesses--which led to ruined lives, increases in suicide, and bankruptcies (while companies of The Party got many hundreds of billions of our money)--refuse to speak a word they usually love and refuse to tell men to take a break from anal sex to stop spreading monkeypox. [AUDIO] - New Media Rule: Don't Say Gay; the media's coverage of monkeypox has studiously avoided using the word "gay" when discussing the individuals who are most at risk of contracting the viral disease. The Party started the process of installing a Universal Basic Income under cover of Covid. They seek to replace work and wealth building with mere subsistence and dependence upon The Party. [AUDIO] - Joe Biden insists 64% of Americans think the economy is in a recession because they didn't get another stimulus check. Meanwhile, real wages are down and prices are up. Wikipedia has changed the “definition” of inflation. Ben White, a “reporter” who defined recession as two straight quarters of negative economic growth, has come out of his struggle session speaking The Party's words like a good supplicant. Our financial system and national budget are a series of lies, concocted by liars. God was quite clear about financial stewardship, but The Party refuses to live within any semblance of discipline. [AUDIO] - @jonstewart you're wrong here. The bill gives a $400B blank check--separate from vets care--for unrelated pork that will supercharge inflation. I support the PACT Act & the $679.4B it would dedicate to vets.
It's ppl trying to use PACT to shovel more pork who are exploiting vets. What Is the FBI Trying To Hide About Its Raid on Innocent Americans' Safe Deposit Boxes? Federal prosecutors want to keep key details about the planning and execution of the March 2021 raid at U.S. Private Vaults out of the public's sight. Gun control: the good news? American gun owners simply aren't going to obey. The bad news is, The Party will keep building legal trip wires to the point where people can be arrested at any time. [AUDIO] - Nancy Pelosi says that the bill "prohibits the sale, manufacture, or transfer of semi-automatic assault weapons" and then says "this isn't about taking guns away from people who should have them. It's about safe storage. You got a problem with that?" Education: it's over. Government schools are lost. GET-YOUR-KIDS-OUT! Do whatever you have to do to see that your child never enters a government school. [AUDIO] - Randi Weingarten wants us to believe Omicron is a danger to kids (and, hence, to adults), so she continues to destroy education. Satan was the first liar and the first divider. He divided people from God. He cannot create or love; he can only twist, distort and divide. The Party is getting very near a hot civil war in America. Scott Gruber, who lists himself as the digital team lead for USAID--which is Samantha Power's group--said “some people deserve to feel unsafe.” Portlanders desperately try to sell off homes taken over by squatters: "Unfortunately there are squatters on the property and seller does not have resources to remove them and is willing to negotiate the price for a buyer to take the risk of closing." “Religion” - by which Alinsky meant Christianity in America. God remains on the throne, but people have had their Biblical worldviews stolen from them. Gladly, there is an easy solution: go to a Biblical church that teaches straight, Biblical teachings. Attend, serve, learn, become discipled and then become a disciple-maker.
See omnystudio.com/listener for privacy information.
Today's headlines: At least 28 people have died due to extreme flooding in Kentucky, and at least 7 people have died due to a severe heat wave in California. Then, Democrats tried to include $400 billion in funding in the veterans' health care bill. Finally, the New York State Health Commissioner has officially declared monkeypox an emergency in New York, and President Biden tested positive for Covid-19 again on Saturday morning. Resources/Articles mentioned this episode: CNN: Death toll in Kentucky floods rises to 28 as area braces for more rain; Axios: California wildfire explodes in size amid "critical" risk; Axios: Toomey: Dems tried to "sneak" $400B funding into veterans' health care bill; Patch: NY Declares Monkeypox Emergency As Cases Continue To Rise; CNN: President Joe Biden tests positive for Covid-19 again