Podcasts about 50b

  • 190 podcasts
  • 226 episodes
  • 43m avg duration
  • 1 episode every other week
  • Latest: May 30, 2025

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about 50b

Latest podcast episodes about 50b

Govcon Giants Podcast
275: Use This Consulting Hack to Tap Into Multi-Million Dollar Government Contracts

May 30, 2025 · 25:19


If you're out there grinding with no past performance, no big contracts, and no traction—this episode is your wake-up call. I break down the exact strategy I used in 2008 to go from zero to six figures by leveraging the consultant model—a proven shortcut to skip the five-year contract cycles and piggyback off other companies' capabilities. We're not here to chase $25k crumbs—we're here to build real wealth, and it starts with thinking differently. I even brought in insight from Dan Peña, who used this same model to build a $50B empire from scratch. In this episode, I walk you through the math, the mindset, and the mechanics of becoming a government consultant. Whether you want to get a piece of a $5M contract or land one high-ticket client and represent them in the federal space, I'll show you why this is the fastest path to revenue. You bring the hustle, they bring the past performance—and both of you win. I'm not teaching theory here. This is the exact model I followed, and it's the one I want you to adopt. Watch this episode twice if you need to.

Watch the episode on YouTube: https://youtu.be/g1801TXW3pQ
Govcon Coaching: https://www.govconcoaching.com/home63196883
Pricing: https://www.govconcoaching.com/pricing-page

Go To Market Grit
From Scaling Cisco to Seeding AI: John T. Chambers on Speed, Strategy, and Reinvention

May 19, 2025 · 87:08


John Chambers led Cisco through the rise of the internet—transforming it into the world's most valuable company at its peak. On this week's Grit, the former Cisco CEO unpacks how he scaled the business from $70M to $50B+, pioneered M&A as a growth strategy with 180 acquisitions, and built what many called the best sales force in tech. Now leading his own venture firm, Chambers shares how he's backing the next generation of AI-native startups.

Guest: John T. Chambers, Former Cisco Executive Chairman & CEO; Founder & CEO, JC2 Ventures

Chapters:
00:00 Trailer
00:45 Introduction
01:45 Track record, relationships, trust
13:21 Acquisitions every year
17:32 Product-focused
24:40 Family, dyslexia, and without shame
30:46 Wang Laboratories
35:59 Ready being CEO
40:17 Reinventing your business
50:08 Numbers don't lie
54:09 Sales calls and making mistakes
56:20 Adapting leadership style
1:06:32 Best leadership year ever
1:13:35 A busy, exhausting schedule
1:22:07 Candid with me
1:25:21 What “grit” means to John
1:26:43 Outro

Mentioned in this episode: John Doerr, OpenAI, Wang Laboratories, IBM, Microsoft, Google, Amazon, Apple Inc., Meta Platforms, FMC Corporation, DuPont de Nemours, Inc., John Morgridge, Don Valentine, Sequoia Capital, Alcatel Mobile, Lucent Technologies, Inc., Verizon Communications Inc., AT&T Inc., Rick Justice, Pankaj Patel, Larry Carter, CNBC, Jim Cramer, George Kurtz, CrowdStrike, Randy Pond, Rebecca Jacoby, Mel Selcher

Links:
Connect with John: X | LinkedIn
Connect with Joubin: X | LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

Registered Investment Advisor Podcast
Episode 203: Scaling Hightower from $50B to $200B AUM – Driving Growth Without Losing Independence

May 7, 2025 · 20:15


What does it take to scale a $200B wealth management powerhouse while preserving the heart of independent advice? In this episode of the Registered Investment Advisor Podcast, Seth Greene speaks with Bob Oros, Chairman and CEO of Hightower. Bob shares how he transformed a $50B firm into a $200B national wealth management platform by empowering advisors with scale, resources, and entrepreneurial freedom. Drawing on insights from over 50 M&A deals and decades in the industry, Bob discusses leadership lessons, organic growth strategies, and how Hightower helps top advisors deliver extraordinary client experiences.

Key Takeaways:
→ Why successful mergers and acquisitions require saying no to deals that don't align with your company's culture and goals.
→ How communication and culture are central to managing rapid growth while remaining nimble.
→ How Hightower provides advisors with services that include HR, compliance, finance, centralized investments, and a national trust company.
→ How advisors can choose which Hightower services to adopt.
→ Why Hightower focuses on helping advisors free up time and serve clients to attract new clients.

Bob Oros is Chairman and CEO of Hightower, a national wealth management firm that invests in and empowers financial advisory businesses to drive growth and help clients achieve ‘well-th. rebalanced.' Under Bob's leadership, Hightower has transformed its business and culture, accelerated acquisitions, expanded services for advisors, and achieved consistently strong organic growth. He has over 25 years of strategic and operational experience, with a track record of successfully recruiting, retaining, and supporting advisors.

Connect With Bob: Hightower Advisors | TikTok | X | LinkedIn

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Security Token Show
RWA DeFi Vaults Sector in Full Swing Plus Venture Funding is Back! - Security Token Show: Episode 282

May 2, 2025 · 32:52


Tune in to this episode of the Security Token Show where this week Herwig Konings and Kyle Sonlin cover the industry leading headlines and market movements, including RWA DeFi vaults, venture funding coming back, and more RWA news!

Company of the Week - Herwig: Particula
Company of the Week - Kyle: KfW

The Market Movements
1. Circle Rejects Ripple's $5B Acquisition Offer, New $20B Offer Reported: https://cointelegraph.com/news/ripple-circle-bid-rejected-bloomberg / https://x.com/Cointelegraph/status/1918261724224098651
2. BlackRock Files to Tokenize $150B Treasury Trust Fund with BNY Mellon: https://www.coindesk.com/markets/2025/04/30/sec-filing-shows-blackrock-preparing-150-billion-tokenized-treasury-trust-offering
3. Particula Closes $5.5M Raise and Moves to USA: https://particula.io/particula-raises-5m-funding-round/
4. Dinari Raises $12.7M Series A led by Hack VC and Blockchange Ventures: https://dinari.com/blog/12m-series-a-equities-onchain
5. Tether Attestation Report: More than 7.7 Tons of Gold Backing XAUT: https://crypto.news/tether-holds-more-than-7-7-tons-of-gold-backing-its-xaut-token/
6. MetaWealth Now Registered in Lithuania as VASP: https://thepaypers.com/online-mobile-banking/metawealth-gains-a-virtual-asset-service-provider-licence-in-lithuania--1273351
7. Sony's Soneium and Plume Partner for Onchain Staking and Yield Opportunities: https://www.techinasia.com/news/sonys-blockchain-plume-partner-tokenized-yields

The Token Debrief
1. Calastone Announces Fireblocks as Infrastructure Partner for Fund Tokenization: https://financefeeds.com/calastone-partners-with-fireblocks-to-launch-fund-tokenization-platform/
2. Centrifuge Introduces RWA Launchpad: https://centrifuge.mirror.xyz/Ujcfp4flrFUGxLUEXiDlwZH1ZCfLmh4HMdXI1CUP-XQ
3. ERC3643 Association Announces Interoperable DvP Proof of Concept: https://www.linkedin.com/posts/erc3643_erc3643-activity-7323571899043659776-aPCX
4. Goldman Sachs to Expand Crypto Trading and Explore Crypto Lending & Asset Tokenization: https://www.coinspeaker.com/goldman-sachs-eyes-expansion-in-crypto-trading/
5. Nairobi Securities Exchange (NSE) and DeFi Technologies Create Kenya Digital Exchange: https://coingeek.com/kenya-prepares-tokenizing-rwas-on-homegrown-exchange/
6. Securitize and Gauntlet Use Morpho to Launch Vault for Apollo's ACRED: https://securitize.io/learn/press/securitize-and-gauntlet-launch-levered-rwa-strategy-on-apollo-diversified-credit-securitize-fund
7. Libre to Bring Institutions to TON with $500M Telegram Bond Fund ($TBF): https://www.librecapital.com/insights/libre-and-ton-foundation-launch-500m-telegram-bond-fund-tbf-on-ton-blockchain
8. Hilbert Group Announces Tokenized Bitcoin Yield Offering on Rebranded Syntetika Platform: https://finance.yahoo.com/news/hilbert-group-announces-launch-tokenized-082000090.html
9. KfW Moves from Issuer to Investor, Invests €10M in Berlin Hyp's €100M Covered Bond: https://www.kfw.de/About-KfW/Newsroom/Latest-News/Pressemitteilungen-Details_848192.html
10. Wormhole to Provide Interoperability for Mercado Bitcoin's $200M Pipeline and Invests in Offering: https://www.tronweekly.com/mercado-bitcoin-partners-with-wormhole/
11. MultiBank to Tokenize $3B of MAG's UAE Real Estate on Mavryk: https://cointelegraph.com/news/multibank-mag-mavryk-3b-rwa-tokenization-launch
12. Liquid Noble Adds More Utility to $LGAU Tokenized Gold: https://coingeek.com/liquid-noble-revamps-for-improved-tokenized-bullion-trading/
13. Solana Policy Institute, Superstate, and Orca Submit Proposal for Project Open: US Equities on Public Blockchains: https://www.linkedin.com/posts/solana-policy-institute_project-open-wireframe-blueprint-4282025-activity-7323417793951895553-e_W0
14. Pakistan Approves First Tokenized Gold Solution under Fasset's Sandbox License: https://www.urdupoint.com/en/technology/fasset-secures-sandbox-license-to-launch-paki-1971443.html
15. Argentinian Regulator Presents Tokenization Framework: https://invezz.com/news/2025/04/27/latam-crypto-news-itau-to-invest-210m-in-bitcoin-and-argentinas-cnv-to-present-new-tokenization-regime/
16. World Federation of Exchanges Releases Report on CBDC Impact on Tokenization: https://www.ledgerinsights.com/world-federation-of-exchanges-explore-cbdc-for-tokenization/
17. Deloitte Predicts 25% of Cross-Border Payments Delivered Onchain by 2030, $50B in Savings: https://fintechmagazine.com/articles/deloitte-tokenised-networks-to-reshape-global-payments

= Stay in touch via our Social Media =
Kyle: https://www.linkedin.com/in/kylesonlin/
Herwig: https://www.linkedin.com/in/herwigkonings/
Opinion articles, interviews, and more: https://medium.com/security-token-group
Find the video edition of this episode on our Youtube Channel: https://www.youtube.com/@stmtvofficial

STM Predicts $30-50T in RWAs by 2030: https://docsend.com/view/7jx2nsjq6dsun2b9
More STM.co Reports: https://reports.stm.co/
Join the RWA Foundation and Read the Whitepaper: RWAF.xyz

⏰ TABLE OF CONTENTS ⏰
0:00 Introduction
0:16 Welcome
1:05 Market Movements
14:14 RWA Foundation Update
15:04 Token Debrief
26:04 Companies of The Week

The Wright Report
25 APR 2025: Friday Roundup: Domestic News // Listener Questions About Spy Talk, War With Iran, and the Death of America - And What To Do About It

Apr 25, 2025 · 34:50


Donate (no account necessary) | Subscribe (account required)

Join Bryan Dean Wright, former CIA Operations Officer, for the Friday Roundup on The Wright Report—covering the week's biggest stories and your top listener questions.

Another Deportation Reversal Ordered by a Judge – A Venezuelan teen deported under Trump's Alien Enemies Act must now be returned to the U.S. after a judge ruled the Biden-era protections for minors take legal precedence.

Trump's Five-Bucket Economic Strategy: Momentum and Warnings – Big reshoring wins with Roche investing $50B in U.S. drug manufacturing and Hyundai moving SUV production from Mexico to Alabama. Meanwhile, Walmart says it can hold prices, but warns of potential shortages.

Tariff Talks and Global Trade Realignment – 15 nations are in advanced talks with Trump's team to reduce tariffs, including Vietnam and South Africa. U.S. tariff revenue is surging, but American importers remain unsure how much of the cost will be passed on to consumers.

Young Americans Choose Trades Over College – Gen Z is abandoning overpriced universities for skilled trades, boosting Trump's efforts to revive the working class and address student debt.

Listener Questions:

How U.S. Intelligence Really Assesses Threats – Bryan walks listeners through how intel assessments are made—and how partisan leaks and foreign influence can distort the truth, especially on hot-button topics like Iran and Venezuela.

Should We Go to War with Iran? – Bryan lays out the moral and strategic stakes of conflict, emphasizing the burden of leadership and the need for unshakable justification before risking American lives.

The Republic Is Dead? What's Next? – Thoughtful responses to Bryan's commentary on America's decline include whether the nation can be saved by law and order, and how local resilience might help restore national strength.

Get the facts, the analysis, and the truth—only on The Wright Report. "And you shall know the truth, and the truth shall make you free." - John 8:32

Tech Path Podcast
Early 104% Tariff Shocks China!

Apr 8, 2025 · 16:56


The White House has announced that an additional 104% tariff on China went into effect at noon on Tuesday, with collections of the tariff beginning April 9. US President Donald Trump threatened an additional tariff on China if Beijing didn't remove its retaliatory duties on US exports. Beijing had responded to Trump's tariff announcement last week with a 34% tariff increase of its own on the United States. Since China has yet to lift its retaliatory tariffs, the White House has added an additional 104% tariff to Chinese imports.

~This Episode is Sponsored By Coinbase~
Buy $50 & Get $50 for getting started on Coinbase➜ https://bit.ly/CBARRON

00:00 Intro
00:17 Sponsor: Coinbase
00:45 Tariffs are live + Yuan crashes
01:35 China tariff could mean capital flight to crypto
02:32 Bitcoin holdings
02:57 China vs Trump
04:22 China dumps $50B in US treasuries
05:00 Chinese debt
05:39 China being petty
06:15 Scott Bessent: China escalation was a mistake
07:23 Bessent x Soros connection
08:10 Ray Dalio - "I agree with problem, concerned with solution"
09:45 China global trading
11:10 Tom Lee - This could take some time
12:40 Italy ready to negotiate
13:19 Countries willing to negotiate
15:18 Trump meeting w/Republicans
15:50 Charts
16:00 Outro

#Bitcoin #ethereum #tariffs

The Pursuit of Scrappiness
198. Europe Strikes Back, Biggest Fundraising Announcements in March, Elon Controls 60% of World's Satellites, Alibaba Joins AI Race, xAI Buys X

Apr 1, 2025 · 62:52


Welcome to a new type of episode of the Pursuit of Scrappiness podcast: a monthly analysis of topics we find relevant to highlight, discuss and share with you to help you become a scrappier and better version of yourself. We will be looking at events and developments in business, politics and technology from a European and particularly Baltic perspective.

On this episode we talk about:
Baltic funding news
Enefit Green going private
How Europe strikes back in space and rocket tech
Harry Stebbings' new VC initiative
Trade wars & AI wars

1/3 Baltics' Biggest Fundraising Announcements
Walk15 secures €5M Series A at a €13M valuation, nearing 1 million users with its activity app.
Change Ventures invests €250K in Latvian energy startup EngyCell, leveraging old Tesla batteries for storage solutions.
Frankenberg Technologies raises €4M for defense tech, including mini-missiles to be tested in Ukraine.
Estonian fintech Cino lands €3.5M seed funding for its card-linking payment-splitting app.
Lithuanian startup Commody raises €0.5M pre-seed for NFT-enabled collectible car ownership.
Eesti Energia's €1B buyout of Enefit Green shakes up the Baltic stock exchange.

2/3 Europe Strikes Back: Tech Scene Highlights
Harry Stebbings' 20VC launches a €10M fund targeting founders under 25, backed by top European tech names.
Revolut's valuation soars to $48B after a Schroders stake revaluation, a 1000x return from its 2016 crowdfunding.
Bolt acquires Danish taxi startup Vigo to enter Denmark's regulated ride-hailing market.
Secondaries dominate 2024 exits (71%), offering liquidity to startup stakeholders.
EU set to fine Apple and Meta under the Digital Markets Act for competition and privacy violations.
German startup ISAR Aerospace tests a rocket in Norway, aiming to rival SpaceX with NATO backing.

3/3 Global & U.S. Highlights
Alibaba invests $50B in an AI model for devices like iPhones and BMWs, intensifying the global AI race.
U.S. job-switching yields only a 4.8% wage increase vs. 4.6% for stayers, signaling a shift in career strategies.
OpenAI's $40B SoftBank investment hinges on its for-profit transition, challenged by Elon Musk's lawsuit.
xAI acquires X (Twitter) for $33B, integrating AI with social media amid an $80B valuation for xAI.
DOGE claims $130B in savings, dwarfed by a $500B rise in U.S. government spending, raising questions about impact.

==
If you liked this episode or simply want to support the work we do, buy us a coffee or two, or a hundred, with just a few clicks at: https://buymeacoffee.com/pursuitofscrappiness
Find all episodes on > https://www.pursuitofscrappiness.co/
Watch select full-length episodes on our YouTube channel > https://www.youtube.com/channel/UCP6ueaLnjS-CQfrMCm2EoTA
Connect with us on Linkedin > https://www.linkedin.com/company/pursuit-of-scrappiness/
===============
Support the show

The Hydrogen Podcast
Bosch's Hydrogen Breakthroughs, $50B Texas H2-Powered Data Center, & France's $92B Hydrogen Goldmine!

Mar 31, 2025 · 11:43 · Transcription Available


The Hydrogen Podcast
India's First Hydrogen Truck Trials + Spain's $4B Hydrogen Plan | Game-Changing Fuel Cell Breakthrough!

Mar 6, 2025 · 10:03 · Transcription Available


Hydrogen trucking and infrastructure are evolving FAST! In this episode of The Hydrogen Podcast, I break down:
✅ Tata Motors' Hydrogen Truck Trials – India's first Class 8 fuel cell trucks hit the road, a major step in decarbonizing freight.
✅ Spain's Enagas Invests €4 Billion – Hydrogen infrastructure expansion with H2Med pipeline & fueling stations.
✅ Fuel Cell Cost Breakthrough – Nanotech at the University of Chicago could slash fuel cell costs by 30-40%.
✅ Hydrogen Market Economics – How India's $12B truck market, Spain's $200M fueling revenue, and fuel cell cost cuts create a $50B global opportunity.

The Generations Radio Program
How Important is Church?

Feb 20, 2025


Is it enough to attend church via Zoom, or is there more to attending church than logging in and listening to a sermon once a week? Kevin and Josh Schwisow discuss several different topics involving the importance of church, tithing, and what it means to serve others as a Christian.

This program includes:
1. The World View in 5 Minutes with Adam McManus (Biden's EPA advisor admits $50B "insurance policy" against Trump, Judge gives Dept. of Government Efficiency massive win, PCA repents of helping illegals stay)
2. Generations with Kevin Swanson

Generations Radio
How Important is Church? - What Does the Bible Say About That

Feb 20, 2025 · 42:48


Is it enough to attend church via Zoom, or is there more to attending church than logging in and listening to a sermon once a week? Kevin and Josh Schwisow discuss several different topics involving the importance of church, tithing, and what it means to serve others as a Christian.

This program includes:
1. The World View in 5 Minutes with Adam McManus (Biden's EPA advisor admits $50B "insurance policy" against Trump, Judge gives Dept. of Government Efficiency massive win, PCA repents of helping illegals stay)
2. Generations with Kevin Swanson

The Daily Detail
The Daily Detail for 2.19.25

Feb 19, 2025 · 17:27


Alabama
A bill offered by AL delegates prohibits sale of land to China and others
Governor Ivey doubles down on bill she supports to restructure VA in AL
SoS Wes Allen to run in 2026 for the Lt. Governor's office
Clean Up Alabama calls on state lawmakers to take up bill re: obscenity exemptions for public schools and libraries
AL senate passes bill providing exemptions from jury duty for nursing moms
Severe Weather Preparedness Sales Tax Holiday happens this weekend

National
Trump calls for resignation of all US attorneys appointed during Biden Admin.
District judge rules against Dem lawsuit to stop DOGE and Elon Musk efforts
SCOTUS prepares to consider appeal from Trump over Special Counsel firing
A director within the FDA has resigned in response to mass firings within HHS
US Senate will vote on confirming Kash Patel to FBI this coming Thursday
EPA's Lee Zeldin has located $20B taxpayer money in offshore account
DHS released ad campaign to illegal aliens advising them to "self-deport"
DOGE team says $50B has been saved through recent audits of agencies
Part 2 of VP Vance's speech in Munich over freedom of speech

Bankless
ROLLUP: Trump's Massive Memecoin | Ethereum Ecosystem Drama | Ross Ulbricht Freed | Phantom Wallet Worth $3B?

Jan 24, 2025 · 70:45


In this week's Bankless Weekly Rollup, David is joined by Eric Conner to unpack a whirlwind week in crypto. Highlights include Trump's explosive entry back into the Oval Office, launching a $50B memecoin on Solana, a full pardon for Ross Ulbricht, and pro-crypto cabinet appointments. Meanwhile, the Ethereum Foundation faces internal drama as Vitalik declares “wartime mode” amidst leadership restructuring. We also dive into Solana's soaring ecosystem, BlackRock's massive Bitcoin buys, and the SEC's shakeup with new pro-crypto initiatives. It's a week of big moves, bigger drama, and major market shifts!

The Security Token Show
RWA Industry Promises Billions in Pipeline for 2025 - Security Token Show: Episode 267

Jan 13, 2025 · 52:27


Tune in to this episode of the Security Token Show where this week Herwig Konings, Kyle Sonlin, and Nico Pantelis cover the industry leading headlines and market movements, including how we've crossed $50B in market cap and even some predictions!

This week Jason Barraza had the opportunity to host Gabriel Sadoun from DigiShares on their new “DigiShares Launch” platform and how they're making tokenization easier, faster, and cheaper for issuers worldwide.

Join the RWA Foundation and Read the Whitepaper: RWAF.xyz
Read STM's Global Tokenized Real Estate Market Guide 2024: https://docsend.com/view/rrfjz7zxzqb9na2q
Read the RWA Securities Market Update: https://docsend.com/view/7k8mr83xsgyt57yh

The Market Movements
MANTRA and DAMAC Partner to Tokenize $1B Worth of Real Estate: https://www.coindesk.com/business/2025/01/09/mantra-blockchain-to-tokenize-1-b-of-real-world-assets-for-uae-based-property-firm-damac
Coinbase Explores Tokenizing Their Public Stock: https://beincrypto.com/coinbase-considers-coin-tokenization-on-base/
OCBC Launches Customizable Corporate Bonds: https://www.finews.asia/finance/42567-ocbc-bespoke-tokenized-bonds-global-markets-singapore
Dusk Mainnet Launches After 6 Years with Transaction Confidentiality in Mind: https://www.coinspeaker.com/dusk-mainnet-goes-live-after-6-years-bringing-privacy-first-rwa-tokenization/
Bitfinex Derivatives Acquires DASP License in El Salvador, Moves Headquarters: https://www.tradingview.com/news/cointelegraph:a3e404078094b:0-bitfinex-derivatives-to-move-to-el-salvador-after-securing-local-crypto-license/
Ditobanx to Tokenize $300M Worth of Assets in El Salvador with Tokeny: https://tokeny.com/tokeny-and-ditobanx-partner-to-transform-el-salvador-into-a-tokenization-leader/

The Token Debrief
STM on CoinDesk: Minimum 10X Growth From Current $50B Market Cap & What to Look For in 2025: https://www.coindesk.com/coindesk-indices/2025/01/08/what-2025-holds-for-tokenized-real-world-assets
Elixir Enables DeFi Access for Hamilton Lane's Tokenized SCOPE Fund through deUSD: https://crypto.news/elixir-unlocks-defi-for-hamilton-lanes-scope-fund-via-deusd/
Morpho Integrates Superstate's $USCC Crypto Carry Fund as Collateral Option for Steakhouse USDC RWA Vault: https://www.linkedin.com/posts/superstate_our-crypto-carry-fund-uscc-is-now-live-on-activity-7282072010090770432-hsVY?utm_source=share&utm_medium=member_desktop
Michael McCluskey Appointed as New CEO at Sologenic: https://www.globenewswire.com/news-release/2025/01/07/3005653/0/en/Sologenic-Appoints-Michael-McCluskey-as-CEO-to-Lead-Innovation-in-Tokenization-DeFi.html
Hong Kong Launches Bank Incubator, Focuses on Tokenized Deposits: https://www.ledgerinsights.com/hong-kong-launches-dlt-incubator-for-banks/
Raredex Tokenizes Rare Earth Metals on Arbitrum: https://www.panewslab.com/en/articledetails/um83mcmf.html
Standard Chartered Launches Custody for Digital Assets in Luxembourg under MiCA: https://www.ledgerinsights.com/standard-chartered-sets-up-digital-asset-custody-in-luxembourg/
Plume and PinLink to Tokenize DePIN for RWAs: https://www.cryptoglobe.com/latest/2025/01/plume-and-pinlink-join-forces-to-target-30t-rwa-tokenization-opportunity/
FDIC Issued “Pause” Letters to USDF Consortium and Other Banks: https://www.ledgerinsights.com/fdic-publishes-crypto-pause-letters-including-usdf-consortium/

= Check out our Companies =
Security Token Group: http://securitytokengroup.com/
Security Token Advisors: http://www.securitytokenadvisors.com/
Security Token Market: https://stm.co
InvestReady: https://www.investready.com

⏰ TABLE OF CONTENTS ⏰
0:16 Introduction
1:13 Market Movements
15:30 STS Interviews: DigiShares
23:25 Token Debrief
43:46 RWA Foundation Weekly Update
45:54 Companies of The Year 2024

The Security Token Show
RWA Market Cap Ends 2024 Over $50 Billion - Security Token Show: Episode 266

Jan 3, 2025 · 39:08


Tune in to this episode of the Security Token Show where this week Herwig Konings and Kyle Sonlin cover the industry leading headlines and market movements, including how we've crossed $50B in market cap and even some predictions!

Join the RWA Foundation and Read the Whitepaper: RWAF.xyz
Read STM's Global Tokenized Real Estate Market Guide 2024: https://docsend.com/view/rrfjz7zxzqb9na2q
Read the RWA Securities Market Update: https://docsend.com/view/7k8mr83xsgyt57yh

Company of the Week - Herwig: Frax
Company of the Week - Kyle: Nest

= Stay in touch via our Social Media =
Kyle: https://www.linkedin.com/in/kylesonlin/
Herwig: https://www.linkedin.com/in/herwigkonings/
Nico: https://www.linkedin.com/in/nicopantelis/
Jason: https://www.linkedin.com/in/jasonbarraza/
Opinion articles, interviews, and more: https://medium.com/security-token-group
Find the video edition of this episode on our Youtube Channel: https://www.youtube.com/@stmtvofficial

The Market Movements
1. Frax Allocations Announced: https://beincrypto.com/frax-stablecoin-blackrock-buidl-fund/ / https://x.com/superstatefunds/status/1874935000820310326
2. Nest and Dinari Tokenize Blackstone Senior Loan ETF Vault: https://www.news-journal.com/nest-partners-with-dinari-to-deliver-real-world-yield-through-first-tokenized-blackstone-etf-on/article_2e936434-cf32-5857-a329-d54f77b37a1e.html
3. Plume Network Launches $25M RWA Tokenization Fund: https://www.prnewswire.com/news-releases/plume-network-launches-25-million-rwafi-ecosystem-fund-to-accelerate-real-world-asset-tokenization-and-innovation-302341436.html
4. Ondo to Natively Issue USDY on Plume Network: https://www.prnewswire.com/news-releases/plume-network-taps-ondo-finance-to-broaden-rwafi-ecosystem-with-tokenized-us-treasuries-302340097.html
5. Plume Network and Maseer To Tokenize $200M in Carbon Allowances: https://enterprisetalk.com/quick-bytes/plume-network-partners-with-maseer-to-tokenize-usd-200m-in-carbon-allowances-on-chain

The Token Debrief
1. Mizuho Securities and Blue Sky Tokenize Renewable Energy Business: https://www.ledgerinsights.com/mizuho-securities-involved-in-security-token-for-renewable-energy/
2. Franklin Templeton to Expand BENJI and Looks at More ETFs: https://blockworks.co/news/franklin-templeton-etfs-tokenization-2025
3. T-Bank to Enter Tokenization; Bank of Russia's CBDC Receives Negative Feedback: https://coingeek.com/russia-cbdc-faces-opposition-t-bank-dabbles-in-tokenization/
4. Binance Wallet Now Supports Matrixdock's XAUm Gold Token: https://thetokenizer.io/2024/12/30/matrixdock-integrates-xaum-gold-token-with-binance-wallet-to-advance-financial-equality/
5. Singularity Finance and Crymbo Partner on FATF Travel Rule Compliance: https://zycrypto.com/singularity-finance-partners-with-crymbo-to-streamline-digital-asset-compliance-with-fatf-travel-rule/

⏰ TABLE OF CONTENTS ⏰
0:16 Introduction
1:53 Market Movements
18:53 RWA Foundation Weekly Update
21:25 Token Debrief
32:35 Companies of The Week: Nest and Frax

Mornings on the Mall
Who's 2024's Biggest Loser

Dec 31, 2024 · 34:38


12/31/24 Hour 2

Vince speaks with Eddie Scarry, Columnist at The Federalist and Author of “Liberal Misery: How the Hateful Left Sucks Joy Out of Everything and Everyone,” about why he believes Barack Obama is the biggest loser of 2024. Vince speaks with Ed Morrissey, Managing Editor at Hot Air and host of the Ed Morrissey Show Podcast, about Politico suddenly wondering what happened to the $50B in Biden’s Green New Deal.

For more coverage on the issues that matter to you visit www.WMAL.com, download the WMAL app or tune in live on WMAL-FM 105.9 from 3-6pm. To join the conversation, check us out on social media: @WMAL @VinceCoglianese.

Executive Producer: Corey Inganamort @TheBirdWords

See omnystudio.com/listener for privacy information.

Inside the Network
Hamza Fodderwala: The future of cybersecurity — 2024 retrospective, 2025 predictions and what founders need to know

Dec 29, 2024 · 57:28 · Transcription Available


In this holiday episode special, we're joined by Hamza Fodderwala, Executive Director at Morgan Stanley, where he leads cybersecurity equity coverage. He joined Morgan Stanley's software research team in early 2016 and leads coverage for public cybersecurity companies like Palo Alto Networks, CrowdStrike, Fortinet, SentinelOne, Okta, Zscaler, Cloudflare, Rapid7, Check Point, Qualys, Varonis and Tenable. Before Morgan Stanley, Hamza was an equity research associate at Susquehanna International Group covering the financial technology sector. Hamza graduated from New York University with a Bachelor of Arts in Economics.

We dive into Hamza's insights on the major customer buying patterns in cybersecurity throughout 2024 and what might shift in 2025. Hamza shares his observations on how the Generative AI boom is influencing product adoption in the industry, and whether enterprises are currently adopting AI security solutions. Additionally, we explore key trends from cybersecurity resellers, discuss what might unlock public equity markets for new IPOs, and which private cyber companies could go public next.

Our discussion covers the cybersecurity M&A landscape, highlighting over $50B in deal volume this year with companies like Juniper, Darktrace, Recorded Future, Synopsys, Venafi, and more all getting acquired. Finally, Hamza shares lessons for founders, offering advice on identifying areas ripe for disruption, navigating the venture funding landscape, and building resilience in a competitive industry.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
2024 in Post-Transformers Architectures (State Space Models, RWKV) [LS Live @ NeurIPS]

Dec 24, 2024 · 43:02


Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage, interviewing selected papers (as we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: “efficient models”, “retentive networks”, “subquadratic attention” or “linear attention”, but some of them don't even have any lineage with attention. One of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture. So, for lack of a better term, we decided to call this segment “the State of Post-Transformers” and fortunately everyone rolled with it.

We are fortunate to have two powerful friends of the pod to give us an update here:

* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year

* Recursal AI: with CEO Eugene Cheah, who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers.

We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternatives…

Full Talk on YouTube. Please like and subscribe!

Links

All the models and papers they picked:

* Earlier Cited Work
  * Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
  * Hungry hungry hippos: Towards language modeling with state space models
  * Hyena hierarchy: Towards larger convolutional language models
  * Mamba: Linear-Time Sequence Modeling with Selective State Spaces
  * S4: Efficiently Modeling Long Sequences with Structured State Spaces

* Just Read Twice (Arora et al)
  * Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts, leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty.
  * To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context.
  * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.

* Jamba: A 52B Hybrid Transformer-Mamba Language Model
  * We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture.
  * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable.
  * This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.
  * Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length.
  * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.

* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers
  * We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on a laptop GPU. Core designs include:
  * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens.
  * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality.
  * (3) Decoder-only text encoder: we replaced T5 with a modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment.
  * (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence.
  * As a result, Sana-0.6B is very competitive with modern giant diffusion models (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost.

* RWKV: Reinventing RNNs for the Transformer Era
  * Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability.
  * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.
  * Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintaining constant computational and memory complexity during inference.
  * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.

* LoLCATs: On Low-Rank Linearizing of Large Language Models
  * Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs.
  * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitude less memory and compute.
  * We base these steps on two findings.
  * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").
  * Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA).
  * LoLCATs significantly improves linearizing quality, training efficiency, and scalability. We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU.
  * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens.
  * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work).
  * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.

Timestamps

* [00:02:27] Intros
* [00:03:16] Why Scale Context Lengths? or work on Efficient Models
* [00:06:07] The Story of SSMs
* [00:09:33] Idea 1: Approximation -> Principled Modeling
* [00:12:14] Idea 3: Selection
* [00:15:07] Just Read Twice
* [00:16:51] Idea 4: Test Time Compute
* [00:17:32] Idea 2: Hardware & Kernel Support
* [00:19:49] RWKV vs SSMs
* [00:24:24] RWKV Arch
* [00:26:15] QRWKV6 launch
* [00:30:00] What's next
* [00:33:21] Hot Takes - does anyone really need long context?

Transcript

[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.

[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks. Our next keynote covers the state of Transformers-alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Cheah of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Vipul Ved Prakash introducing them.

[00:00:49] AI Charlie: And CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents,

[00:01:15] AI Charlie: BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and Goldfinch.

[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model [00:02:00] modified with RWKV linear attention layers. Eugene has also written the single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts, on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.

[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.

[00:02:27] Intros
[00:02:27] Dan Fu: Yeah, so thanks so much for having us. So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?

[00:02:46] Eugene Cheah: Eugene, I lead the RWKV team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.

[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.

[00:03:16] Why Scale Context Lengths? or work on Efficient Models

[00:03:16] Dan Fu: So, the story starts with scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.

[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.

[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI o1 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.

[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little DALL-E 3, we need more flops, guys? Is this going to be the future of all of AI?

[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, but for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.

[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.

[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n?
So in, in the first part of the talk, so we just went over the introduction. What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown, over the past few years since maybe early 2020 to, to now, promise that this might actually be possible.

[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.

[00:06:07] The Story of SSMs

[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said Attention Is All You Need, people started asking this question.

[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.

[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the, maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.

[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.

[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.

[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation.
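The regrouping Dan describes here is small enough to sketch directly. Below is a minimal, non-causal NumPy illustration; the feature map phi is an arbitrary stand-in (published variants use maps such as elu(x)+1, or the Taylor features BASED uses later in the talk), so treat it as a shape-level demonstration of the reordering rather than any one paper's method:

```python
import numpy as np

def softmax_attention(q, k, v):
    """Standard attention: materializes the n x n score matrix (O(n^2 * d))."""
    s = (q @ k.T) / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v

def linear_attention(q, k, v, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Drop the softmax, apply a feature map, and regroup the matmuls:
    (phi(q) @ phi(k).T) @ v  ==  phi(q) @ (phi(k).T @ v).
    Computing phi(k).T @ v first never forms an n x n matrix (O(n * d^2))."""
    fq, fk = phi(q), phi(k)
    kv = fk.T @ v                # (d, d) summary, independent of n
    z = fq @ fk.sum(axis=0)      # per-query normalizer
    return (fq @ kv) / z[:, None]

n, d = 2048, 64
q, k, v = np.random.default_rng(0).standard_normal((3, n, d))
out = linear_attention(q, k, v)  # softmax_attention(q, k, v) would allocate a 2048 x 2048 matrix here
print(out.shape)                 # (2048, 64)
```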
But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.

[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here actually ends up being quite computationally expensive if you just implement it naively.

[00:08:34] Dan Fu: So you started having these operators that not only were you sure, you're not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.

[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called state space models. So here the seminal work is, is one by Albert Gu in 2022.

[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.

[00:09:33] Idea 1: Approximation -> Principled Modeling

[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example a transformer like next token prediction architecture. So some of those early state space model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.

[00:09:59] Dan Fu: But then using [00:10:00] some principled theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality. And when this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.

[00:10:20] Dan Fu: Things like Long Range Arena, some long sequence evaluation benchmarks, there was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that's so influential about these state space models is that they also had a key idea about how you can compute these things efficiently.

[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't parallelize as well as attention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.

[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.
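As a concrete illustration of that convolutional view, here is a toy scalar SSM (a minimal example of my own choosing, not the S4 parameterization): the recurrence x[t] = a*x[t-1] + b*u[t], y[t] = c*x[t] unrolls into a convolution with kernel k[t] = c * a^t * b, which the FFT applies in O(n log n):

```python
import numpy as np

def causal_conv_fft(u, kern):
    """Causal 1-D convolution via FFT in O(n log n); zero-padding to 2n
    keeps the circular convolution from wrapping around."""
    n = len(u)
    L = 2 * n
    y = np.fft.irfft(np.fft.rfft(u, L) * np.fft.rfft(kern, L), L)
    return y[:n]

# Toy scalar SSM: x[t] = a*x[t-1] + b*u[t],  y[t] = c*x[t]
n = 1024
a, b, c = 0.9, 1.0, 0.5
u = np.random.default_rng(0).standard_normal(n)
kern = c * (a ** np.arange(n)) * b   # unrolled impulse response

y_fft = causal_conv_fft(u, kern)

# Same numbers as stepping the recurrence token by token.
x, y_rec = 0.0, np.empty(n)
for t in range(n):
    x = a * x + b * u[t]
    y_rec[t] = c * x
assert np.allclose(y_fft, y_rec)
```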
[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. So, these ideas about how to principally model, sorry, how to model the recurrent updates of a, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.

[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have FlashAttention for transformers, we also have works like FlashFFTConv, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?

[00:12:14] Idea 3: Selection

[00:12:14] Dan Fu: So by 2022, we are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.

[00:12:33] Dan Fu: And because language is, it's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.

[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these Hyena models were one way you can do this is by just adding some simple element wise gates.

[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.

[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make the ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.
So earlier this year, there was a model called BASED from Simran Arora and some other folks. The two-second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention, and was starting to be able to expand the Pareto frontier of how much data you can recall from your sequence versus how small your recurrent state size is.[00:14:58] Dan Fu: So those orange dots [00:15:00] at the top there are just showing models that can recall more from the sequence with a smaller state.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea that I think has been influential in this line of work, and is relatively late breaking, just a few months ago, is the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran, that basically said: hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer model.[00:15:44] Dan Fu: So take, for example, the standard use case where you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] your article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these models are so much more efficient that you can do something really stupid: you can just write down the document, write down the question, write down the document again, and then write down the question again. And then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And this results in better quality, especially on these recall-intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that we're having here. So one of the other influential ideas in this line of work is: if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think a potentially really interesting research question is: how can you take those ideas, and how do they change with this new next generation of models?[00:17:09] Dan Fu: So I'll just briefly summarize what some of those key ideas were, and then show you briefly what the state of the art is today.
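To make the Taylor-approximation idea concrete, here is a hedged sketch of causal linear attention with a second-order Taylor feature map, in the spirit of BASED. The real model also adds sliding window attention and more careful normalization, so treat this as a cartoon of the core trick only.

```python
import torch

def taylor_feature_map(x: torch.Tensor) -> torch.Tensor:
    # phi(x) = [1, x, vec(x x^T)/sqrt(2)], so that
    # phi(q) . phi(k) = 1 + q.k + (q.k)^2 / 2  ~=  exp(q.k)
    x2 = torch.einsum("...i,...j->...ij", x, x).flatten(-2) / 2 ** 0.5
    return torch.cat([torch.ones_like(x[..., :1]), x, x2], dim=-1)

def causal_linear_attention(q, k, v):  # each (B, T, d)
    q, k = taylor_feature_map(q), taylor_feature_map(k)
    # Running sums replace the T x T attention matrix; the state is
    # (feature_dim x d_v), fixed-size no matter how long T gets.
    kv = torch.cumsum(torch.einsum("btf,btd->btfd", k, v), dim=1)
    z = torch.cumsum(k, dim=1)                      # normalizer state
    num = torch.einsum("btf,btfd->btd", q, kv)
    den = torch.einsum("btf,btf->bt", q, z).unsqueeze(-1)
    return num / den.clamp(min=1e-6)

q = k = v = torch.randn(1, 8, 16)
print(causal_linear_attention(q, k, v).shape)  # torch.Size([1, 8, 16])
```

The key property is visible in the shapes: the running state has a fixed size no matter how long the sequence grows, which is exactly the recall-versus-state-size tradeoff discussed above.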
So the four key ideas are: first, instead of just doing a simple linear attention approximation, take ideas that we know from other fields like signal processing and do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is that you really want hardware and kernel support from day one. Even if your model is theoretically more efficient, if somebody goes and runs it and it's two times slower, one of the things that we've learned is that it's just going to be dead on arrival. So you want to be designing your architectures with the hardware in mind. Third, one of the key machine learning ideas that has been important for quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state, and really focus on that as a key decider of quality. And finally, one of the emerging new things for this line of work, and something that's quite interesting, is: what are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to what you might do for a standard transformer? I'll briefly end this section. I've labeled this slide "where we are as of yesterday" because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of these efficient alternative models: AI21 trained this hybrid MoE called Jamba.[00:18:40] Dan Fu: That is currently the state of the art for these non-transformer architectures. NVIDIA and MIT put out this new diffusion model called SANA recently; one of their key observations is that you can take a standard diffusion transformer, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger images and much longer sequences more efficiently.[00:19:07] Dan Fu: And one thing that I don't think anybody would have called a few years ago is that one of those gated state space models ended up on the cover of Science, because a great group of folks went and trained some DNA models. That's Michael Poli and Eric Nguyen from Stanford and the Arc Institute.[00:19:26] Dan Fu: So we're really at an exciting time in 2024, where these non-transformer, post-transformer architectures are showing promise across a wide range of modalities, applications, and tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Oh, I'm talking to here. Okay. So, yeah, two streams. So, I think one common question that we tend to get asked is: what's the difference between [00:20:00] RWKV and state space? I think one of the key things to really understand about the difference between the two groups is that we are actually more like an open source, random-internet-meets-academia kind of situation.[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we basically looked at RNNs and linear attention when Attention Is All You Need came out, and then we decided: hey, there is a quadratic scaling problem. Why don't we try fixing that instead?
So we ended up developing our own branch, but we ended up sharing ideas back and forth.[00:20:30] Eugene Cheah: And we do all this actively in Discord, GitHub, etc. This was so bad for a few years that basically the average group's h-index was so close to zero that EleutherAI actually came in and helped us write our first paper. Great, now our h-index is three, apparently. But the thing is, a lot of these experiments led to results, and essentially we took the same ideas from linear attention [00:21:00] and we built on them.[00:21:01] Eugene Cheah: So, to take a step back into how RWKV handles its own attention mechanism and achieves the same goal of O(n) compute, in service of our overall goal: to make AI accessible to everyone, regardless of language, nation, or compute. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train on even 200 languages to cover all languages in the world. But at the same time, we work on this architecture to lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of the LSTM token flow? I think it's probably easier to understand the architecture from the RNN lens.[00:21:46] Eugene Cheah: Because that's what we built on. The state space folks kind of tried to start anew and took lessons from that, so there's a little bit of divergence there. And this is, AKA, our version of linear attention. So to take a step back: [00:22:00] all foundation models, be they transformers or non-transformers, at a very high level,[00:22:05] pump in tokens, meaning text turned into embeddings, and go through a lot of layers, generating a lot of state along the way, whether that's the QKV cache, RNN states, or RWKV states, and output an embedding. We just take more layers and more embeddings, and somehow that magically works.[00:22:23] Eugene Cheah: If you remember your ancient RNN lessons, the general idea is that you have the embedding information flowing all the way up, and you take that information and flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: So this is how it generally works. Karpathy is quoted saying that RNNs are actually unreasonably effective. The problem is that this is not scalable. To start doing work on the second token, you need to wait for the first token; likewise for the third token and fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So you [00:23:00] can have an H100 and you can't even use 1 percent of it. That's kind of why RNNs didn't really take off in the direction that we wanted, like billions of parameters, when it comes to training. So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] This line is the bottleneck for the RNN, so we did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. No one cared because the loss was crap, but then: how do we improve that?
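To illustrate the bottleneck Eugene is describing, here is a toy contrast between a classic RNN step and the "remove that line" version. Everything here is illustrative; it is only meant to show why dropping the hidden-state dependency unlocks parallelism.

```python
import torch

T, D = 16, 32
x = torch.randn(T, D)
Wx, Wh = torch.randn(D, D), torch.randn(D, D)

# Classic RNN: h[t] depends on h[t-1], so tokens must be processed one
# at a time (CPU land).
h, outs = torch.zeros(D), []
for t in range(T):
    h = torch.tanh(x[t] @ Wx + h @ Wh)   # <- the line with the dependency
    outs.append(h)

# "Remove that line": drop the h @ Wh term and all T tokens collapse
# into one batched matmul (GPU land). Quality suffers, which is why later
# RWKV versions reintroduce memory as a decaying weighted state instead.
outs_parallel = torch.tanh(x @ Wx)        # (T, D) in a single op
```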
And that's essentially where we moved forward, because if you see this kind of flow, you can actually get your GPU saturated quickly, where the compute essentially cascades through the layers.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. Once your first token finishes computing in the first layer, you start to cascade your compute all the way until, hey, you're using 100 percent of the [00:24:00] GPU. So we worked on it, and we started going along the principle that as long as we keep this general architecture, where we can cascade and be highly efficient, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, if you ask me to explain some things in the paper, officially I'll say we had this idea and we wrote it this way. The reality is someone came with code, we tested it, it worked, and then we rationalized it later. So, the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: idea behind RWKV is that we have two major blocks: what we call time mix and channel mix.[00:24:30] Eugene Cheah: Time mix generally handles long-term memory states, where essentially we apply matrix multiplications and SiLU activation functions to process an input embedding into an output embedding. I'm oversimplifying it, because this calculation has changed every version, and we have, like, version 7 right now.[00:24:50] Eugene Cheah: Channel mix is similar to BASED in the sense that it does shorter-term attention, where it just looks at the sister token, the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers themselves, because we do have three papers on this.[00:25:09] Eugene Cheah: Basically: "RWKV: Reinventing RNNs for the Transformer Era"; "Eagle and Finch", the matrix-valued state work covering the updated version 5 and version 6; and GoldFinch, which is our hybrid model. We are already writing the paper for version 7, RWKV-7, named Goose; our architectures are named after birds.[00:25:30] Eugene Cheah: And I'm going to cover as well qRWKV, and mama100k, and RWKV. Where did that lead to? Great! Because we are all GPU poor, and to be clear, most of this research is done on only a handful of H100s, which one Google researcher told me was, like, his experiment budget as a single researcher.[00:25:48] Eugene Cheah: So our entire organization has less compute than a single researcher at Google. One of the things that we explored was: how do we convert transformer models instead? Because [00:26:00] someone already paid the billion dollars, the millions of dollars, for training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And I believe Together AI worked on LoLCATs for the Llama side of things; we took some ideas from there as well, and we essentially did that for RWKV.[00:26:15] QRWKV6 launch[00:26:15] Eugene Cheah: And that led to QRWKV6, which we just dropped today: a 32B instruct preview model, where we took the Qwen 32B Instruct model, froze the feedforward layer, removed the QKV attention layer, and replaced it with RWKV linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the RWKV channel mix layer; we only have the time mix layer.
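For intuition about what a time mix layer does, here is a heavily simplified sketch of an RWKV-style time-mix step: a token shift plus an exponentially decaying weighted state (the WKV part). This is closest in spirit to early RWKV versions; versions 5 through 7 use matrix-valued states and different update rules, so this is a cartoon, not the paper.

```python
import torch

def time_mix(x, w, decay, u):
    # x: (T, D) token embeddings; w: scalar mix factor; decay, u: (D,) params
    T, D = x.shape
    x_prev = torch.cat([torch.zeros(1, D), x[:-1]], dim=0)
    # Token shift: blend each token with the one before it.
    xk = torch.lerp(x_prev, x, w)
    # (Real blocks use separate learned mix factors and W_k / W_v
    # projections here; we reuse xk for both to keep the toy small.)
    k, v = xk, xk
    num, den = torch.zeros(D), torch.zeros(D)
    out = []
    for t in range(T):
        e_k = torch.exp(k[t])
        # The current token gets a bonus weight exp(u); the running state
        # carries everything older, decayed by exp(-decay) every step.
        out.append((num + torch.exp(u) * e_k * v[t]) /
                   (den + torch.exp(u) * e_k + 1e-8))
        num = torch.exp(-decay) * (num + e_k * v[t])
        den = torch.exp(-decay) * (den + e_k)
    return torch.stack(out)

y = time_mix(torch.randn(12, 8), torch.tensor(0.5),
             decay=torch.ones(8) * 0.1, u=torch.zeros(8))
print(y.shape)  # torch.Size([12, 8])
```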
But once we do that, we train the RWKV layer. What's important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly (and, to be honest, to the frustration of the RWKV [00:27:00] MoE team, which ended up releasing their model on the same day) was that, with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original Qwen 32B model. In fact, the first run completely confused us. I was telling Daniel Goldstein (Smerky), who kind of leads most of our research coordination: when you pitched me this idea, you told me at best you'd get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the challenge scores and the Winogrande score would shoot up. I don't know what's happening there. But it did. The MMLU score dropping, that was expected, because if you think about it, when we were training all the layers, we were essentially Frankensteining this thing, and we did brain damage to the feedforward network layer too with the new RWKV layers.[00:27:47] Eugene Cheah: But 76%, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because we are already now in the process of converting the 72B. This is actually an extremely compute-efficient way to test our attention mechanism.[00:28:10] Eugene Cheah: It becomes a shortcut. We are already planning to do our version 7 and our hybrid architecture with it, because we don't need to train from scratch, and we get a really good model out of it. And the other thing that is uncomfortable to say, because we are doing this right now on the 70B, is that if this scales correctly to 128k context length (I'm not even talking about a million, just 128k), the majority of enterprise workload today is just on 70B models at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmarks match it, we can replace the vast majority of current AI workloads, unless you want super long context. And then, sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially [00:29:00] we are excited to push this further.[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think is going to be exclusive to RWKV. It probably will work for Mamba as well; I don't see why not. And we will probably see more ideas, more experiments, more hybrids. One of the weirdest things that I wanted to say outright (and I confirmed this with the BlackMamba team and the Jamba team, because we did the GoldFinch hybrid model) is that none of us understands why a hard hybrid of a state space model and a transformer performs better than the baseline of both.[00:29:28] Eugene Cheah: It's like, when you train one and then you replace parts of it, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both.
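Here is a sketch of the two-stage conversion recipe Eugene describes: swap the attention blocks of a pretrained transformer for RWKV-style mixers, train only the new layers with the feedforward weights frozen, then unfreeze and fine-tune everything together. All module and function names here (blocks, attn, rwkv_layer_factory, train_fn, the learning rates) are hypothetical placeholders, not the team's actual code.

```python
import torch.nn as nn

def convert_and_train(model: nn.Module, rwkv_layer_factory, train_fn):
    # Stage 0: swap each attention block for a fresh RWKV time-mix layer.
    # 'blocks', 'attn', and 'hidden_size' are hypothetical attributes.
    for block in model.blocks:
        block.attn = rwkv_layer_factory(block.attn.hidden_size)

    # Stage 1: freeze everything except the new mixers, so the new
    # "attention" learns to feed the existing, frozen feedforward layers.
    for p in model.parameters():
        p.requires_grad = False
    for block in model.blocks:
        for p in block.attn.parameters():
            p.requires_grad = True
    train_fn(model, lr=1e-4)   # assumed schedule, not the team's actual one

    # Stage 2: unfreeze the feedforward layers and fine-tune jointly at a
    # lower rate so the old and new layers learn to work together.
    for p in model.parameters():
        p.requires_grad = True
    train_fn(model, lr=1e-5)
```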
And that's one area of exploration where we only have four experiments across four teams; a lot more needs to be done.[00:29:51] Eugene Cheah: But these are things that excite me, essentially, because that is what we can potentially move ahead with. Which brings us to what comes next.[00:30:00] What's next[00:30:00] Dan Fu: So, this part is kind of just where we'll talk a little bit about stuff that we're excited about, and maybe have some wild speculation on what's coming next.[00:30:12] Dan Fu: And, of course, this is also the part that will be more open to questions. So, a couple things that I'm excited about: continued hardware-model co-design for these models. One of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.[00:30:29] Dan Fu: And one of the things that we found frustrating is that every time we built one of these new architectures (and I'm sure you had the exact same experience) we'd have to go and spend two months in CUDA land writing these new efficient kernels. And if we decided to change one thing in PyTorch, well, one line of PyTorch code is like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with a library like ThunderKittens was to just break down: what are the key principles, what are the key compute pieces that you get from the hardware? So, for example, on [00:31:00] H100 everything really revolves around a warpgroup matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix-matrix multiply operations, like multiplying two 64 by 64 matrices, for example. And if you know that ahead of time when you're designing your model, that probably gives you some information about how you set the state sizes and how you set the update function.[00:31:27] Dan Fu: So with ThunderKittens we basically built a whole library around this basic idea: your basic compute primitive should not be a float but a matrix, and everything should just be matrix compute. And we've been using that to try to both re-implement some existing architectures and also start to design some new ones that are really designed with this tensor core primitive in mind.[00:31:44] Another thing that at least I'm excited about is that, over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter, there's been a bunch of new next generation models that are coming out.[00:32:04] Dan Fu: So there are video generation models that can run in real time, that are controlled by your mouse and your keyboard, that I'm told, if you play with them, only have a few seconds of memory. Can we take such a model and give it a very long context length, so that you could actually maybe generate an entire game state at a time? What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe you could use some of these new architectures for the video generation models that came out. So Sora came out, I don't know, two days ago now.
But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted it to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't, for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG relevant in the future of state space models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have, I'll say I found it was a little bit challenging to do research on, because we had this experience over and over again where you could have an embedding model of any quality, a really, really bad embedding model or a really, really [00:34:00] good one, by any measure of good,[00:34:03] Dan Fu: and for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but...[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are extremely excited about the idea of RWKV or state space models potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as was covered previously, you need to test the model differently. So, think of it more along the lines of a human: like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll make. And we humans are not quadratic transformers. If we were, if, let's say, our brain size increased for every second we lived, we would have exploded by the time we were 5 years old or something like that. And I think, fundamentally for us, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc., our general idea is that instead of that expanding state and increasing computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big a limit is the question. Like, RWKV is running at 40 megabytes for its state; its future version might run at 400 megabytes. That is, mathematically speaking, millions of tokens of maximum possible capacity.[00:35:29] It's just that I guess we were all more inefficient about it, so maybe we hit 100,000. And that's kind of the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because each will choose to forget things and choose to remember things.
And that's why I think there might be some element of RAG, but it may not be the same RAG.[00:35:49] Eugene Cheah: Maybe the model learns things and goes: hmm, I can't remember that article, let me do a database search. Just like us humans, when we can't remember an article at the company, we do a search on Notion. [00:36:00] Dan Fu: I think something that would be really interesting: so right now, one intuition about language models is that all those parameters are around just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has the style of conversation (it can learn that), but where it will usually fall over compared to a much larger one is that it'll just be a lot less factual about things that it knows or that it can do.[00:36:32] Dan Fu: But that points to the idea that all those weights we're spending, all that SGD we're spending to train these models, is just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe has some sort of gradient descent in it, but that would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president, so that it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When a 405B state space model exists, RAG exists, and no one does long context, who's throwing in 2 million token questions? Hot takes?[00:37:24] Dan Fu: The "who's throwing in 2 million token questions", I think, is a really good question. So I actually was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't play both sides?[00:37:40] Dan Fu: But I think for both of us, the reason that we first got into this was just the first-principles question: there's this quadratic thing, and clearly intelligence doesn't need to be quadratic. What is going on? Can we understand it better? You know, since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like how much context you can take in.[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting a two million token prompt into these models. And if they are, maybe we can go design a better model to do that particular thing. Yeah, what do you think about that? So you've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million tokens of context, right? Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsDB.
I use it a lot.[00:38:41] Eugene Cheah: So, some people have used it. And I think this might be where my opinion starts to differ, because I think the big labs may have a bigger role in this. Like, even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that because we need to backprop [00:39:00] against the states, we actually need to maintain the state in between the tokens across the token length.[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training at 1 million. Which is the same for transformers, actually; it just means we don't magically escape the VRAM consumption at training time. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.[00:39:27] Eugene Cheah: But then, putting it into another paradigm: I think o1-style reasoning might actually be pushing that direction downwards. My partial hot take is that if, let's say, you have a super big model, and you have a 70B model that may take double the tokens but gets the same result,[00:39:51] Eugene Cheah: then strictly speaking, the 70B (and this holds for transformer or non-transformer) will take fewer resources than that 400B [00:40:00] model, even if it did double the amount of thinking. And if that's the case, and we are all still trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and as efficient as possible,[00:40:11] with a very efficient architecture that some folks happen to be working on, to just reason it out over larger and larger contexts.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a great question. I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run it forever. And at the very least it won't, like, error out or crash on you.[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, one place where the research on architectures probably ran faster than another line of research is the benchmarks for long context. So you turn it on forever; you want to do everything or watch everything.
I think what we have observed for, I think this also fits the state space model, is that one of the key advantages of this alternate attention mechanic that is not based on token position is that the model don't suddenly become crazy when you go past the [00:42:00] 8k training context tank, or a million context tank.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory. Some of these things are still somewhat there. That's the whole point of why reading twice works. Things like that. And one of the biggest pushes in this direction is that I think both Statespace and RWKB have Separate papers by other researchers where they use this architecture for time series data.[00:42:26] Eugene Cheah: Weather modeling. So, you are not asking what was the weather five days ago. You're asking what's the weather tomorrow based on the infinite length that we, as long as this Earth and the computer will keep running. So, so, and they found that it is like, better than existing, like, transformer or existing architecture in modeling this weather data.[00:42:47] Eugene Cheah: Control for the param size and stuff. I'm quite sure there are people with larger models. So, so there are things that, that in this case, right, there is future applications if your question is just what's next and not what's 10 years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe

Strategy Simplified
S16E2: Nike Business Breakdown - Can the Iconic Brand "Just Do It" Again?

Strategy Simplified

Play Episode Listen Later Dec 18, 2024 25:42


Send us a text
Nike's been struggling - can the $50B retailer turn it around?
In this segment of Business Breakdowns, Namaan Mian and Jenny Rae Le Roux explore Nike's business and what the iconic brand needs to do to get back to market dominance.
First, they cover key financials to know for Nike, before breaking down the company's business model. Next, they discuss metrics they would be managing to as consultants or members of the Nike board. Finally, they both share a hot take about the future of Nike.
Grab your business strategist hat and start listening.
Business Breakdowns drops on the 1st and 3rd Wednesday of each month. Loving it or have ideas to grow the segment? Reach out by sending us a text or email.
Management Consulted Links
Build your business acumen through our Black Belt case coaching program
Connect with Namaan and Jenny Rae on LinkedIn
Strategy Sprint one-week consulting project: learn more and join
More About Nike
Nike financial documents
Nike Investor Relations site
Connect With Management Consulted
Follow Management Consulted on LinkedIn, Instagram, and TikTok for the latest updates and industry insights.
Schedule a free 15min consultation with a member of the Management Consulted team.
Join an upcoming live event - case interviews demos, expert panels, and more.
Email our team (team@managementconsulted.com) with any questions or feedback.

Furthermore with Amanda Head
Rep. Burlison calls 3-month CR a ‘dumpster fire,' says ‘dirty secret' in D.C. is that some 'Republicans like spending too'

Furthermore with Amanda Head

Play Episode Listen Later Dec 18, 2024 29:35


On this episode of the podcast, Missouri Congressman Eric Burlison dives into the chaos surrounding the recently released 1,500+ page omnibus bill, which he describes as a ‘dumpster fire' and a ‘last spending spree' before a shift in presidential leadership. The Missouri Republican goes on to break down troubling provisions, including a congressional pay raise, taxpayer funding of a privately owned bridge, and a $50B windfall for Big Pharma. The Congressman calls for reform through separate funding bills and shares his frustration over the lack of progress on key issues like healthcare and border security. Furthermore, Burlison explores how future executive actions could tackle these fiscal challenges and the imminent dangers that our country faces.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Not Investment Advice
193: OpenAI vs. xAI, De-Banking Debate, Business of IVF & Presidential Pardon History

Not Investment Advice

Play Episode Listen Later Dec 4, 2024 59:15


The NIA boys discuss OpenAI vs. xAI, De-Banking Debate, Business of IVF & Presidential Pardon History
Episode Timestamps
(00:00:00) - Intro
(00:01:42) - Meme of the week
(00:13:48) - xAI's Recent Funding Raise at $50B valuation
(00:17:38) - OpenAI vs. xAI
(00:26:51) - What makes xAI worth $50B
(00:33:30) - De-Banking Debate
(00:50:38) - Business of IVF
What Is Not Investment Advice?
Every week, Jack Butcher, Bilal Zaidi & Trung Phan discuss what they're finding on the edges of the internet + the latest in business, technology and memes.
Subscribe + listen on your fav podcast app:
Apple: https://pod.link/notadvicepod.apple
Spotify: https://pod.link/notadvicepod.spotify
Others: https://pod.link/notadvicepod
Watch + Subscribe on Youtube: https://youtu.be/i7BCrmeNMOg
Listen into our group chat on Telegram: https://t.me/notinvestmentadvice
Let us know what you think on Twitter:
@bzaidi
@trungtphan
@jackbutcher
@niapodcast
Follow NIA on social media:
Instagram: https://www.instagram.com/notadvicepod/
Facebook: https://www.facebook.com/profile.php?id=100089813414522
TikTok: https://www.tiktok.com/@niapodcast
Hosted on Acast. See acast.com/privacy for more information.

The Voice of Reason with Andy Hooser
Chris Burgard: The Immigration Overhaul and a New America 2025

The Voice of Reason with Andy Hooser

Play Episode Listen Later Dec 4, 2024 36:49


Guest Chris Burgard, Director of the documentary film "The War on Truth", joins to discuss the battle against illegal and criminal migrants. Discussion of plans for mass deportations, Democrat cities working with the Trump administration, and the battle against the deep state. As Biden continues his world farewell tour, the federal government pledges to spend as much money as possible before exiting office. Are we really looking out for the country, or only for special interests? The EPA admits to purging more than $50B in special interest funding before Trump takes office.

Entrepreneurs for Impact
#206: A 7-Minute Thanksgiving Micro-Episode — Tim Ferriss Wisdom. How to Recharge for Climate Wins. Science of Gratitude. 25% Boost in Happiness. Toxic Positivity.

Entrepreneurs for Impact

Play Episode Listen Later Nov 28, 2024 6:53


In this unusual episode, I do my best to go meta, above our normal climate tech challenges and opportunities, to the science and mindset of gratitude. Does a greater appreciation for what we have mean nothing bad is happening out there? Not at all. But building climate tech solutions is not a sprint. It's a marathon. Or maybe both. As such, we need all the non-work tools in our toolbelt to maintain our energy, health, and focus along the journey. In seven short minutes, I walk through five science-based benefits of gratitude and four practical ways to practice it in a few minutes each day.

Acquired
IKEA

Acquired

Play Episode Listen Later Nov 18, 2024 202:28


IKEA may be the most singular company we've ever studied on Acquired. They're a globally scaled, $50B annual revenue company with no direct competitors — yet have only ~5% market share. They're one of the largest retailers in the world — yet sell only their own products. They generate a few billion in free cash flow every year — yet have no shareholders. And oh yeah, they also sell hot dogs cheaper than Costco! (Sort of.)
Tune in for an episode flat-packed with counterintuitive lessons about how this folksy mail order business from the Swedish countryside came into your living rooms (and bedrooms and dining rooms and kitchens and bathrooms and patios and garages and backyards) all over the globe!
Sponsors: Many thanks to our fantastic Fall ‘24 Season partners:
J.P. Morgan Payments
Statsig
Crusoe
Links:
Please take our 2024 Acquired Survey if you have a minute. It'd mean the world to us!
The Testament of a Furniture Dealer
Our past episodes on Costco, Walmart, Amazon, LVMH and Hermès
Worldly Partners Multi-Decade IKEA Study
Episode sources
Carve Outs:
Detroiters
The 11-inch iPad Pro
The QB School
Ice Cube at the World Series
More Acquired:
Get email updates with hints on next episode and follow-ups from recent episodes
Join the Slack
Subscribe to ACQ2
Check out the latest swag in the ACQ Merch Store!
Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.

Alt Goes Mainstream
Churchill Asset Management's Alona Gornick - the evolution of private credit, the power of permanent capital, and the importance of the product specialist

Alt Goes Mainstream

Play Episode Listen Later Nov 14, 2024 54:51


Welcome back to the Alt Goes Mainstream podcast.
Today's episode is with someone who has experienced the growth and evolution of the credit space from different vantage points.
We welcome Alona Gornick, a Managing Director, Senior Investment Strategist, and Co-Head of the Chicago Office for Churchill Asset Management, a firm with $50B of committed capital that provides financing solutions to middle market private equity firms and their portfolio companies. Churchill is an investment-specialist affiliate of Nuveen, the asset manager of TIAA.
Alona provides investment insights across the private capital spectrum to the investment community, with a particular focus on working with the Private Wealth and Retail channels. She works closely with Nuveen's global distribution team to deepen relationships with and educate Churchill's investors and partners.
Alona leverages her experience in capital markets, investor relations, and credit investing from working at the likes of Nuveen, Golden Gate Capital, and Oaktree Capital Management.
Alona and I had a fascinating conversation. We discussed:
- The evolution of credit investing.
- The opportunities and risks in private credit. Are we in a private credit bubble?
- Why the product specialist role is critical for working with the wealth channel.
- The power of scale, particularly in private credit, and how it helps alternative asset managers win deals and invest.
- The strategic benefit of platforms and permanent capital.
Thanks Alona for coming on the show to share your views and wisdom on private markets. We hope you enjoy.
A word from AGM podcast sponsor, Ultimus Fund Solutions
This episode of Alt Goes Mainstream is brought to you by Ultimus Fund Solutions, a leading full-service fund administrator for asset managers in private and public markets. As private markets continue to move into the mainstream, the industry requires infrastructure solutions that help funds and investors keep pace. In an increasingly sophisticated financial marketplace, investment managers must navigate a growing array of challenges: elaborate fund structures, specialized strategies, evolving compliance requirements, a growing need for sophisticated reporting, and intensifying demands for transparency.
To assist with these challenging opportunities, more and more fund sponsors and asset managers are turning to Ultimus, a leading service provider that blends high tech and high touch in unique and customized fund administration and middle office solutions for a diverse and growing universe of over 450 clients and 1,800 funds, representing $500 billion assets under administration, all handled by a team of over 1,000 professionals. Ultimus offers a wide range of capabilities across registered funds, private funds and public plans, as well as outsourced middle office services. Delivering operational excellence, Ultimus helps firms manage the ever-changing regulatory environment while meeting the needs of their institutional and retail investors.
Ultimus provides comprehensive operational support and fund governance services to help managers successfully launch retail alternative products.
Visit www.ultimusfundsolutions.com to learn more about Ultimus' technology enhanced services and solutions, or contact Ultimus Executive Vice President of Business Development Gary Harris by email at gharris@ultimusfundsolutions.com.
We thank Ultimus for their support of alts going mainstream.
Show Notes
00:00 Introduction to Ultimus Fund Solutions
01:18 Welcome to the Podcast
02:00 Guest Introduction: Alona Gornick
03:45 Alona's Career Path and Experience
06:59 Growth of Middle Market Direct Lending
07:41 Changes in the Credit Landscape
10:21 The Importance of Size and Scale in Private Credit
13:27 Deal Structuring and Market Evolution
14:46 Impact of High Rate Environment
16:06 Private Credit Returns and Underwriting
20:53 Investor Questions and Market Insights
21:24 Educating Investors on Private Credit
23:43 Private Credit in Wealth Portfolios
24:09 Diversification Benefits of Private Credit
24:24 Yield Premium in Private Credit
26:40 Private Credit vs. Private Equity
27:06 Exploring Private Equity and Private Debt
27:24 Transitioning from Public to Private Credit
27:49 The Role of a Product Specialist
28:09 Balancing Risks and Benefits
28:49 Relating to Advisors with Real Examples
29:40 The Importance of Education in Allocation
30:25 Diverse Viewpoints on Alternative Asset Managers
31:48 Challenges in Access to Capital
33:26 The Significance of Hiring Quality People
34:12 Non-Traditional Backgrounds in Specialist Roles
36:29 Patience and Commitment in Educating Investors
39:13 The Hardest Part of Educating the Wealth Channel
40:47 The Role of Structure in Education
44:21 Concerns About the Future of Private Credit
47:00 The Growth Potential of Private Credit
49:38 The Most Interesting Alternative Investment
50:15 The Opportunity in Private Equity Secondaries
52:47 Private Credit Secondaries: A Nascent Space
54:17 Primary and Secondary Considerations in Credit
54:34 Conclusion and Final Thoughts

Entrepreneurs for Impact
#204: Jonathan Tan, CEO of Coreshell — VC-Backed Battery Anode Innovator. 25% Lower Cost. 40% Longer EV Drive Range. Stress Management Via Rock Climbing. Pain vs. Suffering.

Entrepreneurs for Impact

Play Episode Listen Later Nov 7, 2024 51:33


Coreshell can reduce the cost of batteries by roughly 25% and improve drive range by about 40% by producing metallurgical silicon anodes to replace graphite. Their collaborators include some of the biggest names in the automotive industry and the largest merchant producer of silicon metal in the Western world. Jonathan is a UC Berkeley graduate in chemical engineering with 12 years of prior experience at New Logic Research as well as Membrane Technology and Research. He's also a Climate CEO Fellow with us at EFI. In this episode, you'll learn these four important takeaways:
- How they raised 10% of the capital of peers to reach the same milestones
- Why their metallurgical silicon anode can be 20x more cost-efficient than today's lithium-ion battery technology
- How he reduces stress by facing mortality in safe but adventurous rock climbing
- Why we should focus on process vs. outcomes, and how this relates to the difference between pain vs. suffering

Business Casual
Wait, College is Getting Cheaper? & BYD is Outselling Tesla

Business Casual

Play Episode Listen Later Nov 1, 2024 31:41


Episode 444: Neal and Toby chat about the recent data showing not all college tuition is getting expensive; in fact, it's become more affordable in the last decade. Then, Britain's Labour Party swings for the fences to catch up to the US economy by announcing its biggest tax hike on the wealthy yet. Next, Chinese EV maker BYD overtakes Tesla in EV sales for the first time this quarter, signaling that a new EV king could be crowned. Meanwhile, the fast-casual restaurant industry might be struggling, but Chili's is riding high thanks to its Triple Dipper platter. And, Super Micro has a colossal $50B stock wipeout. Lastly, important headlines you need to know to end your day. Subscribe to Morning Brew Daily for more of the news you need to start your day. Share the show with a friend, and leave us a review on your favorite podcast app. Find your fit at bonobos.com and use code BREW20 for 20% off.  Get your Morning Brew Daily T-Shirt HERE: https://shop.morningbrew.com/products/morning-brew-radio-t-shirt?_pos=1&_sid=6b0bc409d&_ss=r&variant=45353879044316  Listen to Morning Brew Daily Here: https://link.chtbl.com/MBD Watch Morning Brew Daily Here: https://www.youtube.com/@MorningBrewDailyShow Learn more about your ad choices. Visit megaphone.fm/adchoices

IoT For All Podcast
The State of IoT Adoption | Eseye's Nick Earle | Internet of Things Podcast

IoT For All Podcast

Play Episode Listen Later Oct 30, 2024 35:25


In this episode of the IoT For All Podcast, Nick Earle, CEO of Eseye, joins Ryan Chacon to discuss the state of IoT adoption and challenges facing the industry. The conversation covers Eseye's 2024 State of IoT Adoption Report, emerging IoT use cases, IoT device firmware, SGP.32, Amazon's ambitious 'Amazon Key' project, IoT initiatives in Africa, and predictions for IoT in 2025. 2024 State of IoT Adoption Report: https://www.iotforall.com/white-paper/eseye-2024-state-of-iot-adoption-report Nick Earle is the CEO of Eseye where he spearheads Eseye's strategy. He firmly believes in connectivity that ‘just works.' He's a visionary business leader with a distinguished career in technology spanning more than 30 years, oscillating between start-ups and global technology corporations. Previously, Nick led organizations and cross-company transformation programs for two $50B global corporations; Cisco where he ran the Cloud and Managed Services business as well as their Worldwide Field Services function, and Hewlett Packard where he ran the global Enterprise Marketing function and the internet transformation strategy. As a world leader in IoT connectivity solutions, Eseye helps customers to realize lasting value from global IoT projects. They bring the deep device expertise needed to integrate, manage, and optimize IoT connectivity for estates of any scale or complexity, seamlessly connecting devices across 190 countries and more than 700 networks. All with near-100% uptime. Discover more about IoT at https://www.iotforall.com More about Eseye: https://www.eseye.com Connect with Nick: https://www.linkedin.com/in/nearle/ Our sponsor: https://www.qoitech.com (00:00) Qoitech (00:35) Intro (00:49) Nick Earle and Eseye (01:16) The state of IoT adoption (05:12) Challenges in IoT connectivity (09:39) What makes connectivity a challenge? (14:30) Device firmware and security (18:02) Predictions for IoT in 2025 (22:57) SGP.32 standard (27:32) Emerging IoT use cases (33:52) Learn more and follow up Subscribe on YouTube: https://bit.ly/2NlcEwm​ Join Our Newsletter: https://www.iotforall.com/iot-newsletter Follow Us on Social: https://linktr.ee/iot4all Check out the IoT For All Media Network: https://www.iotforall.com/podcast-overview

TD Ameritrade Network
EARNINGS ALERT: GOOGL, QRVO, CMG, V

TD Ameritrade Network

Play Episode Listen Later Oct 29, 2024 16:09


Alphabet (GOOGL) tops expectations as the company says YouTube Ad Sales surpassed $50B over the past 4 quarters. Meanwhile, Qorvo (QRVO) and Chipotle (CMG) both fall after mixed reports. And, Visa (V) saw payment volumes rise, but the company will reportedly lay off 1,400 employees by the end of the year. Kevin Green and Diane King Hall react to the earnings live. ======== Schwab Network ======== Empowering every investor and trader, every market day. Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6D Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185 Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7 Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore Watch on DistroTV - https://www.distro.tv/live/schwab-network/ Follow us on X – https://twitter.com/schwabnetwork Follow us on Facebook – https://www.facebook.com/schwabnetwork Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/ About Schwab Network - https://schwabnetwork.com/about

Geeks Of The Valley
#102: Pioneering Fintech in Latin America with Mercado Pago's Osvaldo Giménez

Geeks Of The Valley

Play Episode Listen Later Oct 23, 2024 43:48


Osvaldo Giménez serves as the President of Fintech at Mercado Libre and is also the CEO of Mercado Pago, its financial services division. Osvaldo shares his journey, the challenges of building fintech solutions for diverse markets, and the strategies that have made MercadoLibre Latin America's largest e-commerce powerhouse. Mercado Pago, Mercado Libre's financial services unit, accounts for over 50M active users and over $50B in transaction volume; Osvaldo oversees its business strategy, including product development and promotion. Mr. Giménez joined Mercado Pago in 2004 and became President in 2020. Osvaldo was previously Country Manager for Mercado Libre in Argentina and was responsible for launching the digital trading platform in Chile, Uruguay and Peru following the firm's IPO. Before joining Mercado Libre, Giménez was an associate consultant at Booz Allen and Hamilton and worked in the fixed income department at Santander in New York. Giménez graduated with a degree in industrial engineering from ITBA (Instituto Tecnológico de Buenos Aires) and holds an M.B.A. from Stanford University. LinkedIn: https://www.linkedin.com/in/osvaldogimenez/ Company LinkedIn: https://www.linkedin.com/company/mercadolibre --- Support this podcast: https://podcasters.spotify.com/pod/show/geeksofthevalley/support

All-In with Chamath, Jason, Sacks & Friedberg
Big Fed rate cuts, AI killing call centers, $50B govt boondoggle, VC's rough years, Trump/Kamala

All-In with Chamath, Jason, Sacks & Friedberg

Play Episode Listen Later Sep 20, 2024 84:50


(0:00) Bestie intros + All-In Summit recap
(6:50) Fed cuts 50 bps: Economic tailwind, scary signal, or both?
(17:35) AI is coming for call centers; how agent training works
(33:41) US government wasting $50B for rural internet and EV charging stations
(47:10) Reflecting on some rough years in VC: is the model broken?
(1:07:18) Reacting to the first Trump/Kamala debate: what factors could make each candidate win or lose the race
Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg
Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod
Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect
Referenced in the show:
https://www.cnn.com/2024/09/19/investing/stocks-fed-rate-cut/index.html
https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-09-05-2024/card/say-goodbye-to-the-inverted-yield-curve--snsL80qp8JX9UvaMCvVc
https://mearsheimer.ai
https://seekingalpha.com/news/4144652-klarna-shuts-down-salesforce-as-service-provider-workday-to-meet-same-fate-amid-ai-initiatives
https://x.com/brendancarrfcc/status/1836079197967532497
https://reason.com/2024/05/30/7-5-billion-in-government-cash-only-built-8-e-v-chargers-in-2-5-years
https://www.cnbc.com/2024/09/17/spacexs-starlink-has-2500-aircraft-under-contract.html
https://www.bloomberg.com/news/articles/2024-01-31/the-us-installed-more-than-1-000-ev-charging-stations-since-summer
https://x.com/brendancarrfcc/status/1836435062994121053
https://x.com/brendancarrfcc/status/1834009499931463705
https://x.com/molson_hart/status/1835650978906857948
https://x.com/danprimack/status/1824506087116058665
https://x.com/Jason/status/1768073854545449228
https://chamath.substack.com/p/2023-annual-letter
https://x.com/Jason/status/1836820167449326063
https://www.axios.com/2024/04/03/us-global-venture-capital-q1
https://www.wsj.com/articles/university-endowments-mint-billions-in-golden-era-of-venture-capital-11632907802
https://www.natesilver.net/p/nate-silver-2024-president-election-polls-model
https://x.com/GrageDustin/status/1836178999178866766
https://www.snopes.com/fact-check/trump-very-fine-people
https://x.com/EndWokeness/status/1836516153893519867

This Week in Pre-IPO Stocks
E145: FigureAI's $675m capital raise, Stripe stock buyback at $70b, xAI Grok 2.0 differentiators

This Week in Pre-IPO Stocks

Play Episode Listen Later Sep 3, 2024 35:37


Send us a text
00:19 | FigureAI (humanoid robots)
- AI and humanoid robots drive efficiency and will drive the cost of goods/services toward $0, driving unlimited GDP
- $675m raise at a $2.6b valuation; OpenAI, Microsoft, Nvidia
- 10b humanoid robots by 2040
- Musk is projecting 16b to 32b humanoid robots; Tesla Optimus is a humanoid robot
- humanoid robots fit easily into a human world to easily replace humans
11:49 | Stripe
- These companies are so big they're doing share buybacks!
- Planning new tender offer to repurchase shares from employees
- Entire offer financed with Stripe's own cash, a shift from external funding
- Generated $615M in free cash flow in June quarter vs. $500M cash burn in 2022
- Valuation at $70B (secondary), up from $50B in 2022 and $65B in last tender
- Up to 8,000 employees can sell up to $50,000 of vested shares at $27.51/share
- Expanding beyond core payments into billing software; segment on track for $500M annual revenue
25:39 | xAI
- xAI differentiators are becoming clear: real-time data, most accurate answers, Musk effect (i.e. unlimited capital)
- AI large language model platform business
- Released Grok-2 and Grok-2 Mini beta LLMs on X platform
- Enterprise API arriving later this month
- Top-four position on LMSYS chatbot leaderboard
- Grok-2 Mini: efficient, ideal for speed/resource-critical scenarios
- Focus on expanding multimodal understanding
- Available to Premium/Premium+ subscribers on X at $8/month
- Secondary market valuation: $25.7B (+6.9% vs May 2024 round)

WALL STREET COLADA
August 29: Salesforce gains as results beat estimates; CFO resigns. Yelp sues Google over alleged search monopoly. CrowdStrike falls after cutting guidance.

WALL STREET COLADA

Play Episode Listen Later Aug 29, 2024 4:12


Economic and Financial News

This is one of the fastest and largest sovereign debt restructurings in recent times. As Ukraine launches its next counteroffensive against Russia, the country has drawn up a new agreement that will translate into relief on its more than $20B in international bonds.

You said it: trading after Nvidia's $NVDA results was a wild card! The stock fell 7% after hours to $116.95/share on Wednesday, despite impressive results and guidance from the AI darling, along with a $50B buyback. Since then, Nvidia has pared the steep losses and is now down only 2% in premarket trading.

Super Micro Computer $SMCI plunged 19% on Wednesday after disclosing it would be unable to file its annual 10-K report on time: "Additional time is needed for management to complete its assessment of the design and operating effectiveness of internal controls over financial reporting."

Ford $F is the latest company to scale back its diversity, equity, and inclusion program, joining several other American companies that have revised their policies amid mounting pressure and online criticism. The automaker will no longer participate in the Human Rights Campaign's Corporate Equality Index, has refocused its employee resource groups, and is changing some of its corporate sponsorships.

The Thoughtful Entrepreneur
1994 – Overcoming the Connectivity Challenges in the IoT Landscape with Eseye's Nick Earle

The Thoughtful Entrepreneur

Play Episode Listen Later Aug 22, 2024 20:53 Transcription Available


Unlocking the Future of IoT: Exploring Interoperability in IoT Devices

In a recent episode of The Thoughtful Entrepreneur, host Josh sat down with Nick Earle, the CEO of Eseye and the host of the IoT Leaders Podcast, to delve into the intricacies of the Internet of Things (IoT) industry. The conversation was rich with insights on the challenges and advancements in IoT, particularly the connectivity and interoperability issues that have historically hindered the growth of IoT technologies. This blog post breaks down the key themes and actionable advice from the episode, providing a comprehensive guide for anyone interested in the evolving landscape of IoT.

Nick begins by addressing a significant discrepancy in the IoT space. He recalls a time when Cisco predicted that 50 billion devices would be connected to the internet by 2020. The reality fell short, with only 11 billion devices connected. This gap underscores the challenges the IoT industry faces, particularly regarding interoperability and connectivity. One of the fundamental issues in IoT is the prevalence of proprietary systems. Much like mobile network operators with their SIM cards, IoT devices often operate on closed systems, creating barriers to communication between devices. This proprietary nature frustrates both consumers and businesses and hinders the seamless integration of IoT technologies.

Nick elaborates on Eseye's approach to solving connectivity issues in IoT. The company has developed a solution that allows devices to connect to any network globally, eliminating the proprietary lock that has long plagued the industry. The solution is akin to the airline industry's evolution, where travelers can now purchase a single ticket that lets them fly on multiple airlines through alliances like Star Alliance. By abstracting the connectivity process, Eseye aims to simplify IoT integration for businesses and consumers. One standout feature of Eseye's technology is its ability to let devices automatically switch between networks, ensuring that devices remain connected regardless of their location and providing a seamless user experience.

About Nick Earle:
Nick is CEO of Eseye, a global IoT connectivity solutions company with offices in 7 countries and more than 2,000 customers across 190 countries; it is deploying its IoT connectivity solutions in large enterprises, including 4 of the Fortune 10.

Nick spearheads Eseye's strategy and firmly believes in connectivity that "just works": connectivity that makes people's lives and jobs easier, connectivity that's invisible. He's a visionary business leader with a distinguished career in technology spanning more than 30 years across large corporations and dynamic start-ups, oscillating between start-ups and global technology, telco, and transportation companies.

Previously, Nick led organisations and cross-company transformation programs for two $50B global corporations: Cisco, where he ran the Cloud and Managed Services business as well as the Worldwide Field Services function, and Hewlett Packard, where he ran the global Enterprise Marketing function and the internet transformation strategy.

Nick was voted #2 in Computer Reseller News' list of the 25 most disruptive channel executives in IT globally. He has recently received the Juniper Research "Mover and Shaker" award and was named "CxO of the Year" at the 2023 IoT Global Awards, highlighting his visionary leadership and success in propelling Eseye to an enviable position in the IoT space.

About Eseye:
As a world leader in IoT connectivity solutions, Eseye enables customers to achieve lasting value from global IoT projects. They bring the deep device expertise necessary to integrate, manage, and optimize IoT connectivity for estates of any scale or complexity. Eseye seamlessly connects these devices across 190 countries...
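
Eseye's actual implementation is proprietary, but the "connect anywhere, switch automatically" behaviour Nick describes can be sketched as a simple failover loop. Everything below (the carrier names, the try_attach stub, the retry policy) is a hypothetical illustration of the pattern, not Eseye's real API.

    import random
    import time

    # Hypothetical multi-network failover for an IoT device: the device holds
    # several carrier profiles (as an eUICC/eSIM would) and walks down the
    # list until one attaches. None of these names correspond to a real API.
    CARRIER_PROFILES = ["carrier-a", "carrier-b", "carrier-c"]  # preference order

    def try_attach(carrier: str) -> bool:
        """Stand-in for a real modem attach; fails randomly to mimic patchy coverage."""
        print(f"attempting attach via {carrier}...")
        return random.random() > 0.5

    def connect_with_failover(profiles, retries_per_carrier=2, backoff_s=1.0):
        """Walk the carrier list in preference order, retrying before falling back."""
        for carrier in profiles:
            for attempt in range(retries_per_carrier):
                if try_attach(carrier):
                    print(f"connected via {carrier}")
                    return carrier
                time.sleep(backoff_s * (attempt + 1))  # simple linear backoff
        raise ConnectionError("no carrier reachable from this location")

    if __name__ == "__main__":
        connect_with_failover(CARRIER_PROFILES)

The design point is that application firmware only ever calls connect_with_failover(); which physical network ends up carrying the traffic is invisible to the device, much like the single-ticket, multi-airline alliance analogy from the episode.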

Radio Advisory
218: [Encore + bonus content] Site-of-care shifts: It's time to go on offense

Radio Advisory

Play Episode Listen Later Aug 13, 2024 37:18


This week, host Abby Burns invites Advisory Board expert Sebastian Beckmann back to Radio Advisory to provide an update—as promised—on what his team has uncovered about site-of-care shifts in the six months since he first brought this research to the pod. Hint: there's about $50B at play. This episode is a modified encore of Episode 195: "Site-of-care shifts: It's time to go on offense." In that episode, Sebastian and fellow Advisory Board expert Nick Hula joined Abby to break down how health systems should be thinking about site-of-care shifts as a part of their growth strategies, including making the transition from a "defensive" mindset to prevent volume shifts, to an "offensive" mindset to capture them. The original episode will play almost in its entirety, with interjections from Abby and Sebastian to dig deeper into what the site-of-care shift opportunity—or risk—actually looks like across markets and services.

Links:
Seize the $50 billion site-of-care shift opportunity
Interactive maps: See where site-of-care shifts are having the biggest impact
Site-of-care shifts: Healthcare's $50B opportunity
Your guides to volume growth in 6 key service lines
4 takeaways from our updated provider volume forecast
5 trends (re)shaping site-of-care shifts
What's happening with joint replacement volumes?
Ep. 193: Is health system growth still possible?
Learn more about On-demand Courses
Use the Market Scenario Planner

A transcript of this episode as well as more information and resources can be found on radioadvisory.advisory.com.

Solar Maverick Podcast
SMP 169: Innovative Financing Solution Allowing Developers to Own Solar Projects

Solar Maverick Podcast

Play Episode Listen Later Aug 13, 2024 37:16


Episode Summary
In this episode of the Solar Maverick Podcast, Benoy speaks with Jim Howard, President at Dudley Ventures, and Derek Gabriel, COO & Head of Originations at SunRocket Capital. They discuss how their structured finance solution ensures access to construction funding and long-term financing for developers who want to preserve their equity in their projects.

Benoy Thanjan
Benoy Thanjan is the Founder and CEO of Reneu Energy, and he is also an advisor for several solar startup companies. He has extensive project origination, development, and financial experience in the renewable energy industry and in the environmental commodities market. This includes initial site evaluation, permitting, financing, sourcing equipment, and negotiating the long-term energy and environmental commodities off-take agreements. He manages due diligence processes on land, permitting, and utility interconnection and is in charge of financing and structuring through Notice to Proceed ("NTP") to Commercial Operation Date ("COD"). Benoy composes teams suitable for all project development and construction tasks. He is also involved in project planning and pipeline financial modeling. He has been part of all sides of the transaction, which allows him to provide unique perspectives and value. Benoy has extensive experience in financial engineering to make solar projects profitable.

Before founding Reneu Energy, he was the SREC Trader in the Project Finance Group for SolarCity, which merged with Tesla in 2016. He originated SREC trades with buyers and co-developed SolarCity's SREC monetization and hedging strategy with senior management to move into the east coast markets. Benoy was Vice President at Vanguard Energy Partners, a national solar installer, where he focused on project finance solutions for commercial-scale solar projects. He also worked for Ridgewood Renewable Power, a private equity fund, where he analyzed potential investments in renewable energy projects and worked on maximizing the financial return of the projects in the portfolio. Benoy also worked on the sale of all of the renewable energy projects in Ridgewood's portfolio. He was in the Energy Structured Finance practice at Deloitte & Touche and the Financial Advisory Services practice at Ernst & Young. Benoy received his first experience in finance as an intern at D.E. Shaw & Co., a global investment firm with 37 billion dollars in investment capital. He has an MBA in Finance from Rutgers University and a BS in Finance and Economics from the Stern School of Business at New York University. Benoy was an Alumni Scholar at the Stern School of Business.

Jim Howard
James D. Howard, Jr. is the President of Dudley Ventures, LLC, an investment and advisory services firm specializing in congressionally sanctioned tax credits and other tax-advantaged investments. DV has a 20+ year track record of success, with the vision, staff, and values to succeed. Dudley Ventures is a subsidiary of Valley National Bancorp (NASDAQ: VLY), one of the top 30 publicly traded banks, with $50B in assets. A native of The Bronx, New York, Mr. Howard earned his degree from The College of the Holy Cross in Worcester, Massachusetts in 1980 and a law degree from Georgetown University Law Center in 1984. Dudley Ventures was founded by Mr. Howard in 1996. Dudley Ventures has been an innovator in tax credit investment, and Mr. Howard has authored numerous articles on tax credit investing and is a regular speaker at conferences. Dudley Ventures has invested and manages over $2 billion in tax credit investments. Mr. Howard is a Board Member of the New Markets Tax Credit Coalition.

Derek Gabriel
With over 15 years of experience in the renewable energy sector, Derek is a passionate and driven leader who strives to make a positive impact on the environment and society through innovative and sustainable solutions. As Chief Operating Officer at SunRocket Capital, Derek oversees the daily administrative and operational functions, as well as the origination, development, and financing of solar projects across various markets and segments. He leverages his network of over 15 years in the solar industry and his deep knowledge of solar policy, tax equity, and debt financing to create value for clients, investors, and partners. In his previous role as Executive Vice President and Head of Originations at Sol-REIT, Derek was responsible for business development, providing debt and arranging tax equity for solar and renewable energy projects ranging from C&I, community solar, and portfolio transactions to utility-scale. He engaged a team of solar lending analysts and underwriters and established strong relationships with key stakeholders such as EPCs, developers, off-takers, and lenders. Derek also contributed to the strategic direction and growth of the company and supported the development of new products and services. Some of the skills Derek applied and developed in this role include organizational development, business development, negotiation, risk management, and financial analysis.

Stay Connected:
Benoy Thanjan
Email: info@reneuenergy.com
LinkedIn: Benoy Thanjan
Website: https://www.reneuenergy.com

Jim Howard
LinkedIn: https://www.linkedin.com/in/jamesdhoward/
Dudley Ventures Website: https://www.dudleyventures.com

Derek Gabriel
LinkedIn: https://www.linkedin.com/in/derek-g-a754a748/
Website: https://www.sunrocketcapital.com

Press Release dated June 24, 2024
SunRocket Capital and Dudley Ventures Arrange to Provide $100+ Million to Monetize Tax Credits in Commercial, Industrial and Community Solar Developments
https://www.sunrocketcapital.com/news/sunrocket-capital-and-dudley-ventures-to-monetize-tax-credits

It's Baton Rouge: Out to Lunch

Despite what statistics show about fewer people getting married and more people getting divorced, Americans spent more than $50B on weddings last year, and the numbers continue to grow. Venues, food, liquor, music, gowns, flowers, cake, photographer – and that's not counting bachelor and bachelorette parties or what's involved if the big day is a destination wedding in another state or country. Call it the wedding industrial complex. Or call it good fun. Either way, local entrepreneurs know all about it and are capitalizing on the opportunities to meet ever-growing demands of couples who want more than a courthouse ceremony. Ramsey Roberts Sims is one of Baton Rouge's wedding authorities who knows as much about brides (and probably grooms) as anyone in Louisiana. Ramsey is owner of I Do Bridal Couture, a boutique that specializes in designer bridal gowns at its two locations in Baton Rouge and Covington. Ramsey started the business in 2012, a few years after shopping for her own bridal gown and becoming frustrated with the lack of high-end inventory and personal service. I Do Bridal Couture prides itself on offering that type of exclusive inventory and personal customer service. In recent years, Ramsey and her husband have also started an online children's boutique, somehow juggling both businesses with their three young children. Ramsey, thanks so much for joining me on Out to Lunch. Once you've decided to get married, you need a place to hold the ceremony and celebration. Mary Skinner is CEO of Oak Park Events, a local events firm with two venues – Oak Lodge in Baton Rouge and Parc 73 in Prairieville – which specialize in wedding receptions and also play host to a variety of other special events, parties, and gatherings. Oak Park Events was founded by Mary's parents, and she worked with them as manager from 2009-2012, back when there was just one venue, Oak Lodge. Mary helped oversee the design, construction and eventual expansion of Parc 73, then left the business to spend several years in commercial real estate, which is what she was trained in, rejoining in 2016 as CEO. Her parents recently retired, so now Mary is running the company. Out to Lunch is recorded live over lunch at Mansurs On the Boulevard. You can find photos from this show at itsbatonrouge.la. See omnystudio.com/listener for privacy information.

Intelligent Medicine
Intelligent Medicine Radio for July 13, Part 2: Is the glycemic index (GI) obsolete?

Intelligent Medicine

Play Episode Listen Later Jul 15, 2024 41:09


Study that claimed herbal supplements were devoid of active ingredients debunked; Can IV vitamin C help with blood-borne infections? Just 30 minutes of exercise boosts cancer-fighting white blood cells; The paradoxical effects of diet on a little-known but important cardiovascular risk factor—Lp(a); A quarter century of research supports heart benefits of Coenzyme Q10; Is the glycemic index (GI) obsolete? Medicare Advantage pushes $50B in fake diagnoses.

Prime Venture Partners Podcast
The AI Opportunity for Startups with Shripati Acharya, Pankaj Agarwal and Jerome Manuel

Prime Venture Partners Podcast

Play Episode Listen Later Jul 11, 2024 42:47 Transcription Available


In this special podcast episode, we talk about the massive AI opportunity: how it has evolved since the introduction of the first GPT models, what the future looks like, and why we at Prime are super excited about it.

Our in-house Artificial Intelligence (AI) experts Shripati Acharya and Pankaj Agarwal (Investments @PrimeVenturePartners) provide a comprehensive overview of the foundational technologies (LLMs, GPTs, tokens) driving AI, and the impact on startups and business models that will create and reshape trillions of dollars of economic value in the process.

If you are an entrepreneur today, this is a pot of gold, as we unravel why $50B of venture money has been invested in AI startups globally since 2023. Did you know that since the launch of GPT in 2022, $100B+ has been invested in AI startups?

Listen/watch the podcast to learn more about:
0:00 - The Evolution of Artificial Intelligence
5:36 - Understanding Machine Learning and AI
13:20 - Why $100B+ has been invested in AI since 2022!
27:17 - Which AI startups will get funded?
37:22 - Future of AI Applications and Workforce

Enjoyed the podcast? Please consider leaving a review on Apple Podcasts and subscribe wherever you are listening to this.

Follow Prime Venture Partners:
LinkedIn: https://www.linkedin.com/company/primevp/
Twitter: https://twitter.com/Primevp_in

This podcast is for you. Do let us know what you like about the podcast, what you don't like, the guests you'd like to have on the podcast and the topics you'd like us to cover in future episodes. Please share your feedback here: https://primevp.in/podcastfeedback
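
For listeners new to the "tokens" mentioned above: LLMs operate on integer token IDs, not raw characters. Here is a minimal sketch using OpenAI's open-source tiktoken library (our choice for illustration; the episode does not name a specific tokenizer).

    import tiktoken  # pip install tiktoken

    # Encode a sentence with the cl100k_base vocabulary (used by GPT-4-era models).
    enc = tiktoken.get_encoding("cl100k_base")

    text = "LLMs read tokens, not characters."
    token_ids = enc.encode(text)

    print(token_ids)                              # a short list of integers
    print(len(token_ids), "tokens")
    print([enc.decode([t]) for t in token_ids])   # the text span behind each ID

Token counts, not word counts, are what drive context-window limits and per-token API pricing, which is one reason they keep coming up in discussions of LLM business models.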

SoFi Daily Podcast
SoFi Daily Podcast - 7/9/2024

SoFi Daily Podcast

Play Episode Listen Later Jul 9, 2024 4:20


U.S. stocks were mixed Monday. Plus, Hurricane Beryl causes Texas blackouts, Medicare pays insurers $50B for untreated diseases, and air travel's new record. Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Duran Podcast
US sanctions China via Russia. EU agrees to cover $50B loan

The Duran Podcast

Play Episode Listen Later Jun 17, 2024 49:04


US sanctions China via Russia. EU agrees to cover $50B loan

The Christian Post Daily
SBC Attendance Surges, Boy Scouts' Gender-Inclusive Rebrand, Mica Miller Case Closure, 'Lifemark' Hits Top 10

The Christian Post Daily

Play Episode Listen Later May 9, 2024 8:04


Top headlines for Thursday, May 9, 2024

In this episode, we discuss the Southern Baptist congregations' remarkable achievements in 2023, noting an upswing in baptisms, worship attendance, and small group participation, alongside a deceleration in membership decline. Then, we explore the Boy Scouts of America's transformative decision to rebrand as "Scouting America," a move aimed at embracing gender inclusivity within their ranks.

In more somber news, we reflect on the Robeson County sheriff's office's report from North Carolina, which provided conclusive evidence regarding the tragic demise of Mica Miller. Lastly, we celebrate a faith-based film's success as it breaks into Netflix's global top 10, signaling a renewed interest in spiritual cinema.

Subscribe to this Podcast: Apple Podcasts, Spotify, Google Podcasts, Overcast

Follow Us on Social Media: @ChristianPost on Twitter, Christian Post on Facebook, @ChristianPostIntl on Instagram, Subscribe on YouTube

Get the Edifi App: Download for iPhone, Download for Android

Subscribe to Our Newsletter: Subscribe to the Freedom Post, delivered every Monday and Thursday. Click here to get the top headlines delivered to your inbox every morning!

Links to the News:
SBC baptisms near pre-pandemic levels as attendance surges | Church & Ministries News
Boy Scouts changing name to gender-inclusive 'Scouting America' | U.S. News
Florida school pulls Christian club after atheist group complains | Education News
Chicago Teachers Union wants $50B for pay hikes, free abortions | Education News
MIT drops DEI diversity statements for faculty hiring | Education News
Mica Miller bought gun, told police 'I'm about to kill myself' | U.S. News
Mississippi high court: Private schools eligible for public funds | Politics News
Pro-life movie 'Lifemark' makes it into Netflix Global Top 10 | Entertainment News

Liberty Roundtable Podcast
Radio Show Hour 1 – 05/06/2024

Liberty Roundtable Podcast

Play Episode Listen Later May 6, 2024 54:50


* Guest: Dr. Scott Bradley, Founder and Chairman of the Constitution Commemoration Foundation and the author of the book and DVD/CD lecture series To Preserve the Nation. In the Tradition of the Founding Fathers - FreedomsRisingSun.com
* Great Book: 'Everything They Ever Told Me Was a Lie', investigative reporter and author - PatShannan.us
* Chicago Teachers Union's $50B in demands: abortions, migrant services, required LGBTQ training, gender-neutral bathrooms - Paul Sacca, TheBlaze.com. The entire 2023 Illinois state budget is $50.4B.
* Elon Musk sounds alarm about America's national debt, warning that without action, 'the dollar will be worth nothing' - Alex Nitzberg.
* Sen. Rand Paul has suggested that the national debt is the "greatest threat" to America's national security and warned that "we are threatening the very existence of our currency, and perhaps our country, by this crazy profligate spending."
* "The national debt is the greatest threat our country faces — and we are rapidly approaching the crisis point," Sens. Mitt Romney, Joe Manchin, Reps. Bill Huizenga, and Scott Peters declared in a joint opinion piece.
* Trump Campaign Sues Nevada Over Law Allowing Mail Ballots To Be Counted 4 Days After Election - Leif Le Mahieu - DailyWire.com
* According to the suit, this law violates federal election law and disproportionately harms Republicans.
* Trump: Arrest Jack Smith after special counsel admits lying to court - WND.com

Chicago's Morning Answer with Dan Proft & Amy Jacobson

0:00 - Trump Super Tuesday victory speech
12:08 - CTU's SD Gates at City Club: next contract will cost the city $50B
27:19 - COVID doc, "It Wasn't Fauci"
49:01 - Move over Dylan Mulvaney, Doritos' spokeshuman Samantha Hudson
01:03:06 - Senior writer at National Review, Noah Rothman, reviews Super Tuesday results and asks: Does Kamala Harris Know the Administration Needs an Israeli Victory? Keep updated this election season with Noah on X @NoahCRothman
01:23:27 - Noted economist Stephen Moore gives the over/unders for Thursday's State of the Union. Get more Steve @StephenMoore
01:36:23 - Bob Fioretti, two-term Chicago 2nd ward Alderman (2007-2015), 2nd ward Democratic Committeeman (2008-2016) and current candidate for Cook County State's Attorney, wants to put an end to the "revolving door" of crime. Support Bob Fioretti's run for Cook County State's Attorney: fiorettiforcook.com
01:55:33 - Eli Steele, documentary filmmaker and writer, shares details from his new film Killing America: Can America's Schools be Saved? For more info on Killing America and screening locations visit manofsteeleproductions.com

See omnystudio.com/listener for privacy information.

The CEO Sessions
She Took on the FDA and Won BIG - CEO Cindy Eckert of Sprout Pharmaceuticals

The CEO Sessions

Play Episode Listen Later Feb 14, 2024 44:03


CEO Takes on the FDA and Wins Big!

Get inspired by Cindy Eckert, CEO of Sprout Pharmaceuticals, who shares her story of grit, determination, and perseverance that paid off in a HUGE way...
...Sold her first company for $1 billion.
...Created the FemTech category, projected to become a $50B market.
...Changed the world for women.

She took the taboo and turned it into a powerhouse brand and impact maker. If you've never heard of the "pink pill" or Addyi, then you need to listen to this episode! Fortune Magazine calls her a tireless force of nature, and you'll want to hear why...

LinkedIn Profile: https://www.linkedin.com/in/cindy-eckert
Company Links: https://sproutpharmaceuticals.com/ and https://thepinkceiling.com/

What You'll Discover in this Episode:
Why She Sold Her Company and Started a New One the VERY Next Day.
What She's Learned from ALWAYS Wearing Pink.
When a Stranger Reaching Out Transformed Her Business.
How Moxie Translates to the Board Room.
Blue Pill vs. Pink Pill Bias.
Advice for Leveraging Positive Disruption.
The Childhood Experience that Formed Her as an Entrepreneur.

-----
Connect with the Host, #1 bestselling author Ben Fanning
Speaking and Training inquiries
Subscribe to my YouTube channel
LinkedIn
Instagram
Twitter
----

ADDYI is for premenopausal women with acquired, generalized hypoactive (low) desire disorder who have not had problems with low desire in the past, and who have low desire no matter the type of activity, the situation or the partner. The low desire is troubling to them and is not due to a medical or mental health problem, problems in the relationship or medicine or other drug use. ADDYI is not for use in men or to enhance performance. Your risk of severe low blood pressure and fainting is increased if you drink 1-2 standard alcoholic drinks close in time to your ADDYI dose. Wait at least 2 hours after drinking before taking ADDYI at bedtime. Your risk of severe low blood pressure and fainting is also increased if you take certain prescription, over the counter or herbal medications, or have liver problems. Low blood pressure and fainting can happen when you take ADDYI even if you don't drink alcohol or take other medicines. Do not take if you are allergic to any of the ingredients in ADDYI. Allergic reactions may include hives, itching or trouble breathing. Sleepiness, sometimes serious, can occur. Common side effects include dizziness, nausea, tiredness, difficulty falling asleep or staying asleep and dry mouth. See full PI and Medication Guide, including Boxed Warning at addyi.com/pi or call 844-PINK-PILL.

Make Me Smart
It's a rough housing market out there, folks

Make Me Smart

Play Episode Listen Later Jan 20, 2024 28:51


A drop in preowned home sales in December was the cherry on top of the worst year for the U.S. housing market since 1995. We'll get into the causes of the slump and what it would take for the housing market to get back on track. And, a tax deal that would expand the child tax credit is gaining momentum. Then, we'll play a round of Half Full/Half Empty!

Here's everything we talked about today:
"Strong bipartisan showing in first test of tax deal's support" from Roll Call
"Mars Express finds evidence of large water deposit at the Medusae Fossae Formation" from Phys.org
"What Is an Assumable Mortgage?" Buy Side from The Wall Street Journal
"US Existing-Home Sales Decline to Cap Worst Year Since 1995" from Bloomberg
"Expect restaurants to go all in on breakfast this year" from Marketplace
"'Super shoes' take their place in the $50B running shoe market" from Marketplace
"Can robots make us less lonely?" from Marketplace
"It doesn't take a Mathlete to know a 'Mean Girls' remake adds up for Hollywood" from Marketplace
"What happens when a school bans smartphones? A complete transformation" from The Guardian

We love to hear from you. Send your questions and comments to makemesmart@marketplace.org or leave us a voicemail at 508-U-B-SMART.

Marketplace All-in-One
It's a rough housing market out there, folks

Marketplace All-in-One

Play Episode Listen Later Jan 20, 2024 28:51


A drop in preowned home sales in December was the cherry on top of the worst year for the U.S. housing market since 1995. We'll get into the causes of the slump and what it would take for the housing market to get back on track. And, a tax deal that would expand the child tax credit is gaining momentum. Then, we'll play a round of Half Full/Half Empty!

Here's everything we talked about today:
"Strong bipartisan showing in first test of tax deal's support" from Roll Call
"Mars Express finds evidence of large water deposit at the Medusae Fossae Formation" from Phys.org
"What Is an Assumable Mortgage?" Buy Side from The Wall Street Journal
"US Existing-Home Sales Decline to Cap Worst Year Since 1995" from Bloomberg
"Expect restaurants to go all in on breakfast this year" from Marketplace
"'Super shoes' take their place in the $50B running shoe market" from Marketplace
"Can robots make us less lonely?" from Marketplace
"It doesn't take a Mathlete to know a 'Mean Girls' remake adds up for Hollywood" from Marketplace
"What happens when a school bans smartphones? A complete transformation" from The Guardian

We love to hear from you. Send your questions and comments to makemesmart@marketplace.org or leave us a voicemail at 508-U-B-SMART.

FantasyPros - Fantasy Football Podcast
Week 14 RB & WR Rankings & Tiers: Roschon Johnson, Ezekiel Elliott, Justin Jefferson (Ep. 1200)

FantasyPros - Fantasy Football Podcast

Play Episode Listen Later Dec 7, 2023 97:55 Transcription Available Very Popular


Gear up for Week 14 with the ultimate running back and wide receiver rankings breakdown. Tera Roberts, Pat Fitzmaurice, and Chris Welsh break up this week's consensus rankings into tiers and provide detailed analysis of the key differences they have on several of these players. Tune in to hear our dissenting opinions on this week's most polarizing players and prepare yourself for a triumphant week in fantasy football!

Timestamps (note that these may be off due to the ads):
Introduction - 0:00:00
Top 20 RB Rankings - 0:01:14
Isiah Pacheco - 0:01:22
De'Von Achane - 0:01:44
Saquon Barkley - 0:03:13
Austin Ekeler - 0:04:05
B Tier - 0:06:29
Breece Hall - 0:06:37
DraftKings Sportsbook - 0:11:26
C+ Tier - 0:13:01
Gus Edwards - 0:13:10
Alexander Mattison - 0:16:58
C Tier - 0:22:19
Ezekiel Elliott - 0:22:47
Chuba Hubbard - 0:27:25
Devin Singletary - 0:31:43
Gametime - 0:35:26
C- Tier - 0:36:40
Roschon Johnson - 0:37:00
D+ & D Tiers - 0:39:35
Rico Dowdle - 0:40:48
Would You Rather? - 0:43:00
Jerome Ford vs. Jaylen Warren - 0:43:09
Roschon Johnson vs. Tyjae Spears - 0:46:39
Top 25 WR Rankings - 0:49:58
Justin Jefferson - 0:50:05
DJ Moore - 0:50:54
Michael Pittman - 0:52:44
Nico Collins - 0:53:14
B Tier - 0:56:13
Garrett Wilson - 0:56:37
AirMedCare - 0:59:50
B- & C+ Tiers - 1:00:52
Jakobi Meyers - 1:01:13
Gabe Davis - 1:04:49
C Tier - 1:09:26
Jordan Addison - 1:10:02
C- & D+ Tiers - 1:14:30
DeVante Parker - 1:15:15
Jonathan Mingo - 1:16:03
D Tier - 1:17:58
Dontayvion Wicks - 1:18:30
Cedric Tillman - 1:20:39
Win a signed DK Metcalf jersey - 1:21:28
Would You Rather? - 1:21:50
Cooper Kupp vs. Brandin Cooks - 1:21:58
Tyler Lockett vs. Jayden Reed - 1:26:42
Flex Appeal - 1:29:38

Helpful Links:

DraftKings Sportsbook - Football's more fun when you're in on the action! So download the app NOW and sign up with code FANTASYPROS. New customers can bet just FIVE DOLLARS to get TWO HUNDRED DOLLARS INSTANTLY IN BONUS BETS. Only on DraftKings Sportsbook–an Official Sports Betting Partner of the NFL with code FANTASYPROS. The crown is yours. Gambling problem? Call 1-800-Gambler or visit www.1800gambler.net. In New York, call 877-8-HOPENY or text HOPENY (467369). In Connecticut, help is available for problem gambling: call 888-789-7777 or visit ccpg.org. Please play responsibly. On behalf of Boot Hill Casino & Resort (KS). Licensee partner Golden Nugget Lake Charles (LA). 21+. Age varies by jurisdiction. Void in ONT. Bonus bets expire one hundred sixty eight hours after issuance. See sportsbook.draftkings.com/footballterms for eligibility and deposit restrictions, terms, and responsible gaming resources.

Gametime – Gametime has last-minute amazing deals on tickets to see your favorite sports team, band or comedian. Download the Gametime app and redeem code FANTASYPROS for 20 dollars off your first purchase.

AirMedCare – AirMedCare Network providers operate state-of-the-art helicopters that can respond to critically ill or injured patients who need emergency medical transport. Our listeners get up to an eighty dollar Mastercard or Amazon eGift Card when they join and use offer code: FANTASYPROS. Make financial peace of mind part of your game plan. Visit airmedcarenetwork.com/fantasypros.

Double your FantasyPros Subscription for FREE – For a limited time, you can DOUBLE the length of your new subscription when you upgrade to any of our premium plans, for FREE. Check out fantasypros.com/promo to take advantage of our holiday offer and double your subscription for free today!

My Playbook – Don't miss out on the revolutionary fantasy football software that over 1 million teams have already synced with: My Playbook. It's packed with custom advice, rankings, and analysis tailored just for your team. Discover your optimal lineup, find advantageous trades, and stay ahead with the latest player news. Join the league of winners today at fantasypros.com/myplaybook and let's secure that championship!

Leave a Review – If you enjoy our show and find our insight to be valuable, we'd love to hear from you! Your reviews fuel our passion and help us tailor content specifically for YOU. Head to Apple Podcasts, Spotify, or wherever else you get your podcasts and leave an honest review. Let's make this show the ultimate destination for fantasy football enthusiasts like us. Thank you for watching and for showing your support – https://fantasypros.com/review/

BettingPros Podcast – For advice on the best picks and props across both the NFL and college football each and every week, check out the BettingPros Podcast at bettingpros.com/podcast, our BettingPros YouTube channel at youtube.com/bettingpros, or wherever you listen to podcasts.

See omnystudio.com/listener for privacy information.