What does it look like to build a company, and a life, rooted in both purpose and profit? In this episode of the Purpose & Profit Podcast, we sit down with William Norvell, a top 1% podcaster, investor, and trusted advisor to faith-driven entrepreneurs. William brings a wealth of experience, from co-founding Faith Driven Entrepreneur and Faith Driven Investor to his current work with Forte, a company on a mission to democratize mental health in the workplace.

With an MBA from Stanford and a unique journey that spans investment banking, private equity, and startup leadership, William offers a candid, wisdom-filled conversation about faith, business, and the complex human decisions leaders must make. Whether you're a mission-minded entrepreneur or a nonprofit leader looking to create sustainable impact, this episode is full of practical insights and vision for a better future.

You'll Learn:
* How William's journey, from Alabama to Stanford, shaped his views on leadership, purpose, and wealth.
* Why values-based investing is more than a trend, and how to align your portfolio with your convictions.
* The origin story of Faith Driven Entrepreneur and the gap it sought to fill in the faith and business world.
* Why William believes nonprofit CEOs have one of the toughest jobs, and how financial sustainability can shift the paradigm.
* What it means to run a Public Benefit Corporation, and how that changes decisions from the boardroom to the break room.
* A raw take on early team dynamics and how to handle tough leadership decisions with both grace and clarity.
* A bold vision for the future, where businesses are inherently purposeful and nonprofits are free from constant fundraising.

Discover more episodes of the Purpose & Profit Podcast here. Get to know your co-hosts, Dave Raley and Carly Berna.

Purpose & Profit Podcast is brought to you by:
VIRTUOUS
FAITHSEARCH PARTNERS
IMAGO CONSULTING
AVID AI
DICKERSONBAKKER
SHARE
SYNERGY

Special thanks to editor and sound engineer Barry R. Hill and producer Abigail Morse.
Mike McCue introduces Surf: Flipboard's founder and CEO demonstrated their new social browser app that aggregates content from ActivityPub, AT Proto, and RSS into unified feeds, allowing users to follow people across platforms and create curated content collections.
OpenAI Adjusts Reorganization Plans: OpenAI will maintain its non-profit arm while converting its for-profit division into a public benefit corporation similar to Anthropic, pending regulatory approval.
AI Criticism Blog Post: A blog highlighted practical AI concerns beyond the singularity, focusing on coordinated inauthentic behavior, misinformation, and non-consensual pornography.
AI Workplace Misuse: Nearly half of workers admit to using AI inappropriately at work, according to a Fast Company report.
AI Academic Cheating: New York Magazine investigated widespread AI cheating in colleges, including students using AI for all assignments while maintaining excellent grades.
"I Smell AI": The team discussed unreliable AI detection methods and embarrassing AI-generated news errors, including Alberta being incorrectly described as "French-speaking."
Instagram Co-founder on AI Chatbots: Kevin Systrom claims AI assistants are designed to maximize engagement metrics rather than utility, though Leo demonstrated how these behaviors can be modified.
Google Labs' AI Experiments: The hosts explored Google's new AI Mode search interface, language learning tools, and a career recommendation system.
New York Times Subscriber Growth: The NYT added 250,000 digital subscribers with a 14% jump in digital subscription revenue, with nearly half subscribing to multiple products.
Auburn University's Phone Help Desk: The hosts discussed Auburn's 70-year tradition of librarians answering public phone questions, continuing through technological changes.
San Francisco's Orb Store: World opened a downtown storefront where visitors scan their irises with "orbs" to verify humanity and receive WorldCoin cryptocurrency.
Driverless Trucks Begin Regular Routes: Aurora launched fully autonomous semi-trucks between Dallas and Houston, raising both safety hopes and public perception concerns.
Waymo Safety Study: Data showed Waymo's autonomous vehicles significantly reduced injury crashes, though the hosts questioned aspects of the data presentation.
AI-Generated Video in Court: An AI-generated video of a deceased shooting victim "forgiving" his killer was shown in an Arizona courtroom, raising ethical and legal questions.
Paris's Game Recommendation - Norco: Paris recommended the Southern Gothic narrative game Norco, set in industrial Louisiana with a surreal atmosphere similar to Disco Elysium.
Leo's Game Recommendation - Tippy Coco: Leo shared a simple browser-based ball-bouncing game at TippyCoco.com as an easy option for casual players.
Jeff's Pick - World Bank Data Sets: Jeff highlighted the World Bank's release of hundreds of public data sets intended for AI training that provide insight into global technology adoption.
Google Invests in Wonder: Google Ventures invested in virtual kitchen company Wonder, which raised $600 million despite questions about food delivery business sustainability.

These show notes have been truncated due to length. For the full show notes, visit https://twit.tv/shows/intelligent-machines/episodes/818

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Mike McCue
Sponsors: monarchmoney.com with code IM, spaceship.com/twit, bigid.com/im, Melissa.com/twit
Function Health has launched an FDA-cleared, AI-powered full-body MRI that cuts scan time to 22 minutes and the cost to $499, making preventative imaging far more accessible. In construction, Skanska has introduced Safety Sidekick, an AI assistant that provides real-time safety guidance by consolidating crucial safety documents into a single mobile and desktop resource, empowering safer decision-making on job sites. Google's Gemini 2.5 Pro Preview "I/O edition" boasts significantly improved coding capabilities, ranking first on both LMArena for coding and the WebDev Arena leaderboard, and excelling particularly at building interactive web applications. This development coincides with predictions from Robert Scoble about Google's upcoming AI-powered glasses, which will interact through eyes, hands, and voice, leveraging Google's vast personal data ecosystem and potentially posing a challenge to Apple. Meanwhile, OpenAI has decided to maintain its nonprofit control while restructuring its for-profit arm into a Public Benefit Corporation, legally required to balance profit and public benefit, in line with its mission of developing beneficial artificial general intelligence.

@function @Scobleizer @Austen
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Try AI Box: https://AIBox.ai/
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about

WOW! Let's talk about the recent developments at OpenAI, particularly the reversal of their decision to transition from a nonprofit to a for-profit model. Jaeden discusses the implications of this change, including the influence of Elon Musk's lawsuit and the vision articulated by Sam Altman for the future of AI.

Chapters
00:00 OpenAI's Nonprofit to For-Profit Transition Drama
00:55 Launch of AI Box Playground
02:22 Elon Musk's Lawsuit Against OpenAI
03:43 Sam Altman's Vision for OpenAI
06:27 Democratic AI vs Authoritarian AI
09:16 OpenAI's Future Structure and Funding Needs
12:02 Transition to Public Benefit Corporation
June Cheung is head of JAPAC at Scope3, a Public Benefit Corporation focused on decarbonizing media and advertising by providing a platform to visualize, measure, and reduce carbon emissions across the advertising ecosystem. But it is more than that. June shares the evolution of Scope3: from its founding in 2022 with the aim of helping the advertising industry reduce its carbon footprint, to facilitating the creation of an agreed measurement model for the industry, to building and sharing emission-reduction solutions, to today, with the recent announcement of an agentic AI platform to help partners build and sell AI-enabled media products that are efficient and sustainable by design.

Listen on Apple: https://podcasts.apple.com/au/podcast/managing-marketing/id1018735190
Listen on Spotify: https://open.spotify.com/show/75mJ4Gt6MWzFWvmd3A64XW?si=a3b63c66ab6e4934
Listen on Stitcher: https://www.stitcher.com/show/managing-marketing
Listen on Podbean: https://managingmarketing.podbean.com/

For more episodes of TrinityP3's Managing Marketing podcast, visit https://www.trinityp3.com/managing-marketing-podcasts/

Recorded live on Zoom and edited, mixed and managed by JML Audio with thanks to Jared Lattouf.
OpenAI is considering a definitive transformation from a non-profit organization into a profit-oriented company (Public Benefit Corporation). Elon Musk and several former employees are sounding the alarm: a full shift to "for-profit" could dramatically increase the risks tied to the development and safety of artificial intelligence, sidelining the company's original ethical goals. This structural change risks incentivizing dangerous shortcuts in the development of AI models, sacrificing safety and rigorous oversight in the name of speed, commercial competition, and profit maximization. Is the non-profit structure truly essential to guaranteeing safety and ethics in artificial intelligence, or can commercial competition instead contribute positively to innovation?

~~~~~ BOOKINGS AND SPONSORSHIP ~~~~~
For business inquiries: sales@matteoflora.com
For legal consulting: info@42LawFirm.it

~~~~~ SUPPORT THE CHANNEL! ~~~~~
With the PRO Membership you can support the Channel » https://link.mgpf.it/pro
If you want my gear » https://mgpf.it/attrezzatura

~~~~~ FOLLOW ME ONLINE, WITH NOTIFICATIONS! ~~~~~
» WHATSAPP CHANNEL » https://link.mgpf.it/wa
» TELEGRAM CHANNEL » https://mgpf.it/tg
» "In Futuro" COURSE (free) » https://mgpf.it/nl
» NEWSLETTER » https://mgpf.it/nl

~~~~~ CIAO INTERNET AND MATTEO FLORA ~~~~~
This is "Ciao Internet!", the first and most-followed TECH POLICY show in Italian, on YouTube and as a podcast. I'm MATTEO FLORA, and I am:
» Professor of Fundamentals of AI and SuperIntelligence Security (ESE)
» Adjunct professor of Corporate Reputation and Crisis Management (Pavia).
I'm a serial digital entrepreneur, and I founded:
» The Fool » https://thefool.it - Italy's leading Customer Insight company
» The Magician » https://themagician.agency - An atelier for Advocacy and Crisis Management
» 42 Law Firm » https://42lf.it - The law firm for Digital Transformation
» ...and many others here: https://matteoflora.com/#aziende
I'm a Future Leader (IVLP) of the US State Department under the Obama Administration, in the "Combating Cybercrime (2012)" program. I'm President of PermessoNegato, the Italian association dealing with Non-Consensual Pornography and Revenge Porn. I host "Intelligenze Artificiali" on Mediaset/TgCom.
All too often, capitalism is identified with the for-profit sector. However, one organizational form whose importance is often overlooked is the nonprofit. Roughly 4% of the American economy, including most universities and hospital systems, is nonprofit.

One prominent nonprofit currently at the center of a raging debate is OpenAI, the $300 billion American artificial intelligence research organization best known for developing ChatGPT. Founded in 2015 as a donation-based nonprofit with a mission to build AI for humanity, it created a complex "hybrid capped profit" governance structure in 2019. Then, after the dramatic firing and re-hiring of CEO Sam Altman in 2023 (covered on an earlier episode of Capitalisn't: "Who Controls AI?"), a new board of directors announced that achieving OpenAI's mission would require far more capital than philanthropic donations could provide and initiated a process to transition to a for-profit public benefit corporation. The process has been fraught with corporate drama, including one early OpenAI investor, Elon Musk, filing a lawsuit to stop it and launching a $97.4 billion unsolicited bid for OpenAI's nonprofit arm.

Beyond the staggering valuations at stake (not to mention OpenAI's open pursuit of profits over the public good) lie complicated legal and philosophical questions. Namely, what happens when corporate leaders violate the founding purpose of a firm? To discuss, Luigi and Bethany are joined by Rose Chan Loui, the founding executive director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA Law and co-author of the paper "Board Control of a Charity's Subsidiaries: The Saga of OpenAI." Is OpenAI a "textbook case of altruism vs. greed," as the judge overseeing the case declared? Is AI for everyone, or only for investors?
Together, they discuss how money can distort purpose and philanthropy, precedents for this case, where it might go next, and how it may shape the future of capitalism itself.

Show Notes:
Read extensive coverage of the Musk-OpenAI lawsuit on ProMarket, including Luigi's article from March 2024: "Why Musk Is Right About OpenAI."

Guest Disclosure (provided to The Conversation for an op-ed on the case): The authors do not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article. They have disclosed no relevant affiliations beyond their academic appointments.
I'm not a financial advisor; Superpowers for Good should not be considered investment advice. Seek counsel before making investment decisions.Watch the show on television by downloading the e360tv channel app to your Roku, AppleTV or AmazonFireTV. You can also see it on YouTube.When you purchase an item, launch a campaign or create an investment account after clicking a link here, we may earn a fee. Engage to support our work.Has your business been impacted by the recent fires? Apply now for a chance to receive one of 10 free tickets to SuperCrowdLA on May 2nd and 3rd and gain the tools to rebuild and grow!Devin: What is your superpower?Craig: I think my superpower is empathic awareness and the ability to kind of perceive what others are thinking.Investing has long been a game reserved for the elite, but Craig Jonas is working to change that. As the founder and CEO of CoPeace Capital, he's on a mission to use investing as a force for good, ensuring that more people—especially those historically excluded from wealth-building opportunities—can participate in equity investing.“Many communities have not had the access to growing their wealth through equity investing,” Craig explained. “And the crowdfunding platforms allow that to happen.”CoPeace, a Certified B Corporation and Public Benefit Corporation, has been actively raising capital from the crowd, successfully leveraging Regulation Crowdfunding to fuel its impact-driven investments. By embracing this approach, CoPeace is not only funding its own growth but also identifying promising companies that align with its mission. The company has already invested in businesses focused on sustainable infrastructure and social impact, proving that financial returns and positive change can coexist.Craig emphasized that success in impact crowdfunding isn't automatic—it requires effort, strategy, and community engagement. “The number one learning is that it's not magic. 
It takes work to develop the crowd that might want to invest in what you're doing.”

A common misconception about impact investing is that it requires sacrificing returns. Craig challenges that notion, pointing out that market-based returns are not only possible but often stronger when companies prioritize sustainability and long-term value creation. “Not only are we seeing kind of market-based returns, but the data is showing that it is a good business decision to care about the future of our world,” he said.

With plans to launch a larger fund, CoPeace Capital is expanding its ability to invest in high-impact businesses while continuing to make equity investing more accessible. Craig's leadership in this space demonstrates that with the right approach, anyone can leverage capital to drive meaningful change. For those looking to align their investments with their values, CoPeace offers a compelling example of how financial success and social impact can go hand in hand.

tl;dr:
* Craig Jonas, CEO of CoPeace Capital, discusses using impact investing to democratize access to wealth-building opportunities.
* He shares insights from CoPeace's successful Regulation Crowdfunding campaign and how it supports mission-aligned businesses.
* Craig challenges the misconception that impact investing requires sacrificing financial returns, highlighting data that supports market-based gains.
* He explains his superpower, empathic awareness, and how it shapes his leadership and investment strategies.
* Craig offers practical advice for developing empathic awareness, emphasizing active listening, reflection, and immersing oneself in diverse environments.

How to Develop Empathic Awareness As a Superpower

Craig's superpower is empathic awareness—the ability to perceive and deeply understand the emotions and perspectives of others. Craig describes this skill as an essential part of navigating the impact investing space. 
"I think my superpower is empathic awareness and the ability to kind of perceive what others are thinking," he shared. By listening intently to stakeholders, investors, and the communities CoPeace serves, Craig ensures that their needs and concerns shape the company's mission and strategy.One powerful example of Craig's empathic awareness took place during his time as a basketball coach in post-apartheid South Africa. Selected as the first American coach to work in the townships, he recalls sensing the fear and hesitation in the eyes of young players as he entered their community. By being present, acknowledging their lived experiences, and creating a space of mutual trust, he fostered a meaningful connection that made the basketball clinic not only a success but also a transformational moment for everyone involved.Craig believes that developing empathic awareness requires intentional effort. His tips include:* Pause and Reflect: Before reacting, take an extra moment to consider the perspectives of others.* Engage in New Environments: Step outside your comfort zone by immersing yourself in different cultures and communities.* Observe Role Models: Identify and learn from individuals who demonstrate deep empathy in their interactions.* Listen Actively: Focus on understanding rather than responding—allow people to share their experiences fully.By following Craig's example and advice, you can make empathic awareness a skill. With practice and effort, you could make it a superpower that enables you to do more good in the world.Remember, however, that research into success suggests that building on your own superpowers is more important than creating new ones or overcoming weaknesses. 
You do you!

Guest Profile

Craig Jonas (he/him): CEO, CoPeace Capital

About CoPeace Capital: Our mission is to drive measurable and sustainable impact on our world and the people who live in it by funding and fostering growth-stage ventures in Healthcare and Sport: Investing as a force for good.

Website: copeace.com
X/Twitter Handle: @CoPeace
LinkTree: linktr.ee/CoPeace
Other URL: invest.svx.us/offering/copcop2/

Biographical Information: Craig Jonas is a seasoned entrepreneur with over 30 years of leadership experience across business, academics, and athletics. As the founder of CoPeace, he built a mission-driven holding company focused on sustainable investments and democratizing access to equity. Previously, he served as COO of Basketball Travelers and BTI Events, managing global sports events, and held leadership roles at Sportvision and Coach's Edge. Craig holds a doctorate in conflict management from the University of Kansas and has been featured in Forbes, Worth, and Triple Pundit. He lives in Colorado with his wife, Seanna, and their dog, Fergus.

Bluesky Handle: @jonascopeace.bsky.social
Personal Facebook Profile: fb.com/craig.jonas.77
LinkedIn: linkedin.com/in/craigjonas/
Instagram Handle: @CoPeacePBC

Support Our Sponsors

Our generous sponsors make our work possible, serving impact investors, social entrepreneurs, community builders and diverse founders. Today's advertisers include FundingHope, Imotobank Dealership, Crowdfunding Made Simple and SuperCrowdLA. 
Learn more about advertising with us here.

Max-Impact Members

The following Max-Impact Members provide valuable financial support: Carol Fineagan, Independent Consultant | Lory Moore, Lory Moore Law | Marcia Brinton, High Desert Gear | Paul Lovejoy, Stakeholder Enterprise | Pearl Wright, Global Changemaker | Ralf Mandt, Next Pitch | Scott Thorpe, Philanthropist | Add Your Name Here

Upcoming SuperCrowd Event Calendar

If a location is not noted, the events below are virtual.
* Superpowers for Good Live Pitch – Where Innovation Meets Impact! Join us on March 12, 2025, for the Q1-25 live pitch event, streaming on e360tv, LinkedIn, Facebook, and Instagram. Watch impact-driven startups pitch their bold ideas, connect with investors, and drive positive change. Don't miss this chance to witness innovation in action!
* Impact Cherub Club Meeting hosted by The Super Crowd, Inc., a public benefit corporation, on March 18, 2025, at 1:00 PM Eastern. Each month, the Club meets to review new offerings for investment consideration and to conduct due diligence on previously screened deals. To join the Impact Cherub Club, become an Impact Member of the SuperCrowd.
* SuperCrowdHour, March 19, 2025, at 1:00 PM Eastern. Devin Thorpe will be leading a session on "How to Build a VC-Style Impact Crowdfunding Portfolio," sharing expert insights on diversifying investments, identifying high-potential impact ventures, and leveraging crowdfunding for both financial and social returns. Whether you're an experienced investor or just getting started, this is a must-attend!
* SuperCrowdLA: we're going to be live in Santa Monica, California, May 1-3. Plan to join us for a major, in-person event focused on scaling impact. Sponsored by Digital Niche Agency, ProActive Real Estate and others. This will be a can't-miss event.

Has your business been impacted by the recent fires? 
Apply now for a chance to receive one of 10 free tickets to SuperCrowdLA on May 2nd and 3rd and gain the tools to rebuild and grow!

Community Event Calendar
* Successful Funding with Karl Dakin, Tuesdays at 10:00 AM ET - Click on Events
* Capital Raise Strategies for Purpose Driven Enterprises, hosted by PathLight Law, February 25 at 1:00 PM ET.
* Kingscrowd Meet Up in San Francisco, CA - February 27th at 5:30 PM PT
* Igniting Community Capital to Build Outdoor Recreation Communities, Crowdfund Better, Thursdays, March 20 & 27, April 3 & 10, 2025, at 1:00 PM ET.
* NC3 Changing the Paradigm: Mobilizing Community Investment Funds, March 7, 2025
* Asheville Neighborhood Economics, April 1-2, 2025.
* Regulated Investment Crowdfunding Summit 2025, Crowdfunding Professional Association, Washington DC, October 21-22, 2025.

Call for community action:
* Please show your support for a tax credit for investments made via Regulation Crowdfunding, benefiting both the investors and the small businesses that receive the investments. Learn more here.

If you would like to submit an event for us to share with the 9,000+ changemakers, investors and entrepreneurs who are members of the SuperCrowd, click here.

We use AI to help us write compelling recaps of each episode.

Get full access to Superpowers for Good at www.superpowers4good.com/subscribe
Akamai's journey to regain control of their cloud spending reveals the pitfalls of over-provisioning and the importance of optimized workloads. The episode navigates through significant updates like OpenAI's transition to a Public Benefit Corporation and Microsoft's substantial investments in AI data centers while pondering Google Cloud's rising competitive positioning against Azure.

• Akamai struggles with escalating cloud costs
• The ramifications of leveraging cloud as a crutch for poor workload design
• OpenAI's shift to a Public Benefit Corporation for societal good
• Microsoft's ambitious $80 billion investment in AI infrastructure
• Analysts predict Google Cloud could surpass Azure by 2025
• The rising role of AI agents in tech and IT management

Check out the Fortnightly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord study group: https://artofneteng.com/iaatj
Have you ever thought about how your everyday choices can make waves—literal waves—of change? In this episode of The Happy Hustle Podcast, I had an inspiring and thought-provoking chat with Alex Schulze, the co-founder and CEO of 4ocean. If you're passionate about sustainability and the future of our planet, this conversation is a must-listen!

Alex and I dive deep into the concept of the triple bottom line—people, planet, and profit. 4ocean isn't just about selling bracelets; it's about cleaning up the ocean and creating a movement. Alex shared how they've built a business model that ties every product sale to a tangible environmental impact: cleaning one pound of trash from the ocean. It's not just business as usual; it's business with a purpose.

Starting and scaling a business with a mission isn't all smooth sailing. Alex opened up about the challenges of running a purpose-driven company, from navigating criticism to engaging employees and building partnerships. He shared actionable tips for entrepreneurs looking to create a lasting impact.

This episode is packed with insights for anyone who cares about sustainability, entrepreneurship, or simply living a more mindful life. From uncovering the hidden costs of plastic pollution to learning how businesses can lead the charge for a cleaner planet, Alex's journey is nothing short of inspiring. Don't miss this deep dive into the intersection of business and sustainability. 
Tune in now, and let's get to Happy Hustlin' for the planet!

In this episode, we cover:
00:00 Introduction to 4ocean and Its Mission
02:59 The Triple Bottom Line in Business
06:11 Understanding B Corporations and Public Benefit Corporations
09:00 The Business Model of 4ocean
12:14 Creating a Movement for Ocean Cleanup
15:04 Navigating Challenges and Criticism
18:11 Partnerships and Sustainability Initiatives
21:05 The Impact of Consumer Choices
24:05 Plastic Pollution Statistics and Myths
31:50 The Truth About Recycling and Plastic Waste
34:34 The Impact of Microplastics on Health
37:48 Mindful Living: Small Changes for Big Impact
41:45 The Importance of Passion and Purpose in Business
43:49 Balancing Entrepreneurship and Family Life
49:00 Employee Engagement and Company Culture
52:28 Rapid Fire Insights and Personal Reflections

Connect with Alex
https://www.facebook.com/4oceanBracelets/
https://www.instagram.com/4ocean/
https://www.tiktok.com/@4ocean
https://www.youtube.com/channel/UCCT_-OGW5IiUuuHwmuyUPYQ
https://x.com/4ocean
https://www.linkedin.com/company/4oceanpbc/posts/?feedView=all
Find Alex on this website: 4ocean.com

Connect with Cary!
https://www.instagram.com/caryjack/
https://www.facebook.com/SirCaryJack
https://www.linkedin.com/in/cary-jack-kendzior/
https://twitter.com/thehappyhustle
https://www.youtube.com/channel/UCFDNsD59tLxv2JfEuSsNMOQ/featured
Get a free copy of his new book, The Happy Hustle, 10 Alignments to Avoid Burnout & Achieve Blissful Balance: https://www.thehappyhustle.com/book
Sign up for The Journey: 10 Days To Become a Happy Hustler Online Course: https://thehappyhustle.com/thejourney/
Apply to the Montana Mastermind Epic Camping Adventure: https://thehappyhustle.com/mastermind/

“It's time to Happy Hustle, a blissfully balanced life you love, full of passion, purpose, and positive impact!”

Episode Sponsor: Magnesium Breakthrough from BiOptimizers https://bioptimizers.com/happy

If you've been on a restricted diet lately or maybe even taken some meds to shed those 
pounds for the summer, I gotta warn ya—be careful! You might have unknowingly created a nutrient deficiency that could not only mess with your health but also jeopardize those weight loss goals.Did you know that over 75% of Americans are already deficient in magnesium? Yeah, it's wild! Magnesium is this powerhouse mineral that's involved in over 600 biological reactions in your body. It helps with everything from sleep to stress management to hormone balance—all key players in keeping your weight on track.And if you're still on those meds, you might be dealing with some side effects like sleepless nights, digestive issues, or irritability, which can totally throw off your commitment to your goals. Whether you're taking meds or not, setting up healthy habits is crucial to maintaining your weight over time. One of the best things you can do? Make sure you're getting all the magnesium your body needs.Don't let a magnesium deficiency derail your progress! Give Magnesium Breakthrough by BIOptimizers a shot. Unlike other supplements, this one's got all 7 forms of magnesium that your body can actually absorb, so you get the full spectrum of benefits.This approach will help you crush your goals and maintain a healthy weight while keeping your overall health in check. For an exclusive offer, head to bioptimizers.com/happy and use the promo code 'happy10' at checkout to save 10%. And if you subscribe, you'll snag amazing discounts, free gifts, and a guaranteed monthly supply.
We're experimenting and would love to hear from you!

In the final 2024 episode of Discover Daily, we explore OpenAI's groundbreaking restructuring plans for 2025, as the company transitions into a Delaware Public Benefit Corporation. This major shift, accompanied by a $6.6 billion funding round valuing OpenAI at $157 billion, represents a strategic move to balance profit-seeking with societal impact while maintaining their commitment to beneficial AI development.

China's ambitious Tibet Mega Dam project takes center stage as the world's soon-to-be largest hydropower facility, set to generate an unprecedented 60 gigawatts of power. This engineering marvel, while promising significant renewable energy benefits, faces substantial challenges including seismic concerns and environmental impacts that could affect downstream nations like India and Bangladesh.

The episode concludes with an in-depth look at Encyclopedia Britannica's remarkable transformation from a traditional print publisher to an innovative AI company. With a potential public valuation approaching $1 billion, Britannica has revolutionized its business model by developing AI-powered educational tools, including a specialized chatbot that leverages their vast repository of verified knowledge to enhance learning experiences in schools and libraries worldwide.

Thank you to our listeners! We appreciate your support and feedback. Have a happy and safe New Year! We'll see you in 2025.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/openai-proposes-public-benefit-MMvaZ.jGQgGdkRfvcEoANA
https://www.perplexity.ai/page/china-approves-tibet-mega-dam-UwFwqGK2RPG5MVwURQd55g
https://www.perplexity.ai/page/encyclopedia-britannica-is-an-JdeViIFRSjuAji4Snw.HVg

Perplexity is the fastest and most powerful way to search the web. 
Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
Google is using Claude to improve Gemini? Why is OpenAI looking at building humanoids? What does a $100 billion price tag have to do with AGI? AI news and big tech didn't take a holiday break. Get caught up with Everyday AI's AI News That Matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Impact of Large Language Models
2. Google's AI Strategy
3. OpenAI's Restructuring and Robotics Research
4. AI Manipulation and Concerns
5. AGI and its Valuation
6. DeepSeek's Open-Source Model
7. Meta's AI Plan for Social Media

Timestamps:
00:00 Open-source AI competes with proprietary models.
04:21 DeepSeek v3: Affordable, open-source model for innovators.
07:39 Meta expands AI characters, faces safety risks.
10:42 OpenAI restructuring as Public Benefit Corporation (PBC).
17:04 Google compares models; Gemini flagged for safety.
19:40 Models often use other models for evaluation.
21:51 Google prioritizes Gemini AI for 2025 growth.
26:29 Google's Gemini lagged behind in updates, ineffective.
31:17 AI's intention economy forecasts, manipulates, sells intentions.
35:13 Hinton warns AI could outsmart humans, urges regulation.
39:24 Microsoft invested in OpenAI; AGI limits tech use.
40:36 Microsoft revised AGI use agreement with OpenAI.

Keywords: Large Language Models, Google's AI Focus, Gemini language models, AI evaluation, OpenAI Robotics, AI Manipulation Study, Anticipatory AI, Artificial General Intelligence, DeepSeek, Open-source AI, V3 model, Meta, AI Characters, Social Media AI, OpenAI corporate restructuring, Public Benefit Corporation, AI investment, Anthropic's Claude AI, AI Compliance, AI safety, Synthetic Data, AI User Manipulation, Geoffrey Hinton, AI risks, AI regulation, AGI Valuation, Microsoft-OpenAI partnership, Intellectual property in AI, AGI Potential, Sam Altman.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Do you want to activate your heroic potential?

In this episode of the Happy Hustle Podcast, I'm bringing you a game-changing guest guru training straight from the private mastermind community, the Happy Hustle Club! This time, we're featuring my man, Mr. Brian Johnson, the founder and CEO of Heroic Public Benefit Corporation and author of Arête: Activate Your Heroic Potential. This dude is the real deal. He's half philosopher, half CEO, and 101% committed to making the world a better place. His mission? To see 51% of humanity flourishing by 2051.

Brian isn't just an entrepreneur; he's a visionary who's raised over $25 million, made crowdfunding history, and built and sold two social platforms. Today, he leads a global movement with 10,000+ heroic coaches across 100+ countries. This guy knows what it takes to dream big and deliver. And in this episode, he's sharing the goods with YOU.

In this deep-dive conversation, we cover topics that will inspire, challenge, and empower you:
* Arête: The Path to Excellence – Discover the ancient Greek philosophy of living up to your highest potential and how it applies to your modern-day hustle.
* Anti-Fragile Confidence – Learn how to embrace challenges, setbacks, and even trauma as fuel for your growth.
* Business as a Force for Good – Explore the concept of public benefit corporations and how entrepreneurship can drive meaningful change.
* Mental Health & Resilience – Gain practical strategies to strengthen your mindset and handle adversity with grit and grace.
* Parenting & Play – Hear Brian's perspective on how creativity, play, and conscious leadership can enhance both business and family life.

This training isn't just another podcast episode—it's a front-row seat to the Happy Hustle Club's exclusive mastermind experience. So, what are you waiting for? Hit play on this episode, take notes, and start activating YOUR heroic potential today. Because the world needs more heroes, and that journey starts with YOU. 
Connect with Brian
https://www.instagram.com/heroicbrian
https://www.facebook.com/heroicbrian
https://www.youtube.com/@HeroicBrian
https://www.linkedin.com/in/heroicbrian
https://twitter.com/heroicbrian
Find Brian on his website: https://www.heroic.us/

Connect with Cary!
https://www.instagram.com/caryjack/
https://www.facebook.com/SirCaryJack
https://www.linkedin.com/in/cary-jack-kendzior/
https://twitter.com/thehappyhustle
https://www.youtube.com/channel/UCFDNsD59tLxv2JfEuSsNMOQ/featured
Get a free copy of his new book, The Happy Hustle, 10 Alignments to Avoid Burnout & Achieve Blissful Balance: https://www.thehappyhustle.com/book
Sign up for The Journey: 10 Days To Become a Happy Hustler Online Course: https://thehappyhustle.com/thejourney/
Apply to the Montana Mastermind Epic Camping Adventure: https://thehappyhustle.com/mastermind/

“It's time to Happy Hustle, a blissfully balanced life you love, full of passion, purpose, and positive impact!”

Episode Sponsor: Magnesium Breakthrough from BiOptimizers https://bioptimizers.com/happy

If you've been on a restricted diet lately or maybe even taken some meds to shed those pounds for the summer, I gotta warn ya—be careful! You might have unknowingly created a nutrient deficiency that could not only mess with your health but also jeopardize those weight loss goals.

Did you know that over 75% of Americans are already deficient in magnesium? Yeah, it's wild! Magnesium is this powerhouse mineral that's involved in over 600 biological reactions in your body. It helps with everything from sleep to stress management to hormone balance—all key players in keeping your weight on track.

And if you're still on those meds, you might be dealing with some side effects like sleepless nights, digestive issues, or irritability, which can totally throw off your commitment to your goals. Whether you're taking meds or not, setting up healthy habits is crucial to maintaining your weight over time. One of the best things you can do? 
Make sure you're getting all the magnesium your body needs.Don't let a magnesium deficiency derail your progress! Give Magnesium Breakthrough by BIOptimizers a shot. Unlike other supplements, this one's got all 7 forms of magnesium that your body can actually absorb, so you get the full spectrum of benefits.This approach will help you crush your goals and maintain a healthy weight while keeping your overall health in check. For an exclusive offer, head to bioptimizers.com/happy and use the promo code 'happy10' at checkout to save 10%. And if you subscribe, you'll snag amazing discounts, free gifts, and a guaranteed monthly supply.
Sharing a new podcast episode with Chris Mirabile about reducing your biological age We talk about: - His background and how he became so passionate about longevity - Why he created NOVOS - What is biological age and his everyday tips for reducing your biological age - What to do if you're already consistent with health foundations and want to take it to the next level and so.much.more. 175: Reducing your biological age with Chris Mirabile Here's more about Chris and his background: Chris Mirabile is an internationally recognized longevity expert and the founder and CEO of NOVOS, the world's leading consumer longevity platform. At an early age, Chris faced health challenges – a brain tumor – that sparked his passion for longevity medicine and gaining a deeper understanding of the connection between aging, disease, and mortality. Chris shares his personal longevity journey on his blog, SlowMyAge, where he tracks his lifestyle, self-experiments, and biological data while sharing how he successfully slowed his biological aging by 37%. Chris is featured as a longevity leader frequently in the press and has spoken on 100+ podcasts, forums, and at numerous global conferences. Based in Miami and New York, Chris created NOVOS, a Public Benefit Corporation with the mission of making longevity achievable and accessible for all. NOVOS is the first human longevity company to simultaneously address the 12 biological causes of aging through its scientifically validated, patent-pending, over-the-counter formulations, best-in-class biological age tests, and free digital tools. NOVOS' formulations are in a class of their own, being studied at academic labs and found to reduce DNA damage, cellular senescence, and oxytosis/ferroptosis; shown in a human case study to slow down epigenetic pace of aging for 73% of participants, with 0% accelerating; and multiple NOVOS customers featured in the news for being globally ranked in the top 5 for slowing their pace of aging. 
Check out NOVOS here and connect with Chris on Instagram.

Partners: Check out my all-time favorite weighted blanket and other health gadgets at Therasage and use FITNESSISTA for your reader discount! I've been using Nutrisense on and off for a couple of years now. I love being able to see how my blood sugar responds to my diet and habits, and run experiments. You can try out Nutrisense here and use GINA50 for $50 off.

If any of my fellow health professional friends are looking for another way to help their clients, I highly recommend IHP. You can also use this information to heal yourself and then go on to heal others, which I think is a beautiful mission. You can absolutely join if you don't currently work in the health or fitness industry; many IHPs don't begin on this path. They're friends who are passionate to learn more about health and wellness and want to share this information with those they love. You can do this as a passion or start an entirely new career. You can use my referral link here and the code FITNESSISTA for up to $250 off the Integrative Health Practitioner program. I highly recommend it! You can check out my review of IHP Level 1 here and my review of Level 2 here.

Thank you so much for listening and for all of your support with the podcast! Please be sure to subscribe, and leave a rating or review if you enjoyed this episode. If you leave a rating, head to this page and you'll get a little “thank you” gift from me to you.
Subscribe to AG Dillon Pre-IPO Stock Research at agdillon.com/subscribe
- Wednesdays = secondary market valuations, revenue multiples, performance, index fact sheets
- Saturdays = pre-IPO news and insights, webinar replays

00:02 | AG Dillon Pre-IPO Stock Funds closing to new investors. Wires due on Oct 15. Top 10 Index Fund, OpenAI, xAI, Groq, Hugging Face, and Databricks.

00:45 | Anthropic Aims for $30-40B Valuation in New Capital Raise
- AI company behind Claude conversational AI
- Projected annual revenue: $1B in 2024, up 1000% YoY
- Gross profit margins dropped to 38% from 50-55%
- Expects $2.7B cash burn in 2024
- Current secondary market valuation: $25.3B (+40% vs Jan 2024)

01:49 | Wiz Considers Share Sale, Valued at $15-20B
- Cybersecurity company scanning data stored in cloud providers
- Shareholders may cash out $500M-$700M
- Rejected $23B acquisition offer from Google in July 2024
- Raised $1.8B from investors like Sequoia Capital and Index Ventures
- Secondary market valuation: $18.2B (+52% vs May 2024)

02:35 | Revolut Gains Investment from Mubadala at $45B Valuation
- London-based fintech with 45M global customers
- Founder Nik Storonsky sold $200-$300M worth of shares
- Mubadala participated alongside Coatue, D1 Capital Partners, and Tiger Global
- Processes over 500M transactions monthly; B2B service generates €450M annually

03:17 | Chime Gears Up for 2025 IPO with Morgan Stanley
- Fintech banking startup with 7M active users
- Became profitable in Q1 2024
- Launched a cash advance product for users up to $500 before payday
- Secondary market valuation: $9.2B (-63% vs Sep 2021)

04:03 | Scale AI's Revenue Quadruples to $400M in H1 2024
- Provides data for AI developers like Meta and Google
- Operating margin improved: $1.20 spent per $1 revenue, down from $1.50
- Annualized revenue near $1B, valued at $13.8B (primary, May 2024)
- Secondary market valuation: $14.1B (+2% vs May 2024)

04:56 | OpenAI Transitions to Public Benefit Corporation, Hits $156.5B Valuation
- Raised $6.5B in funding round led by Thrive Capital ($1B commitment)
- New voice technology "Advanced Voice Mode" rolling out to more users
- Key departures: CTO Mira Murati, Chief Research Officer Bob McGrew, and others

07:40 | Google Expands Gemini Chatbot for Workspace Integration
- Gemini API requests increased 36-fold in eight months
- Targeting enterprise adoption via Workspace subscribers
- Emphasizes security with SOC 1/2/3, ISO 27001, and ISO 27701 certifications

08:41 | Rippling Launches AI Talent Signal Tool, Valued at $15.9B
- HR/IT business software platform with a new AI-driven tool
- Evaluates employees with task-specific metrics
- Built using AI models from OpenAI and Anthropic
- Secondary market valuation: $15.9B (+18% vs Apr 2024)

09:36 | Anduril Secures Major Defense Contracts, Valued at $17.4B
- Defense tech contractor partnered with Microsoft for US Army goggles
- Ghost-X UAS chosen for Army's reconnaissance capabilities
- Potential $21.9B in orders over a decade
- Secondary market valuation: $17.4B (+24% vs Aug 2024)

10:37 | Klarna Expands BNPL Services with Adyen
- Partnership integrates BNPL into 450K+ Adyen payment terminals
- BNPL market gaining traction with 26% of paycheck-to-paycheck consumers
- Secondary market valuation: $10.3B (+54% vs Jul 2022)

11:34 | Pre-IPO Stock Market Weekly Performance
- agdillon.com/subscribe to receive weekly pdf report in your inbox

12:18 | Pre-IPO Stock Vintage Index Weekly Performance
- agdillon.com/subscribe to receive weekly pdf report in your inbox
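The notes above quote each secondary market valuation together with its move versus an earlier month (e.g., Anthropic at $25.3B, +40% vs Jan 2024). When only the current level and the percentage move are given, the implied earlier valuation can be backed out. A quick sketch in Python (the figures come from the notes; the helper names are our own, not AG Dillon's):

```python
# Hypothetical helpers for the valuation math quoted in the notes above.

def implied_prior(current_bn: float, pct_change: float) -> float:
    """Back out the earlier valuation implied by a current level and a % move."""
    return current_bn / (1 + pct_change / 100)

def pct_move(current_bn: float, prior_bn: float) -> float:
    """Percentage change from an earlier valuation to the current one."""
    return (current_bn - prior_bn) / prior_bn * 100

# Anthropic: $25.3B today at +40% implies roughly $18.1B in Jan 2024
assert round(implied_prior(25.3, 40), 1) == 18.1
# Sanity check: the round trip recovers the quoted move
assert round(pct_move(25.3, implied_prior(25.3, 40))) == 40
```

The same arithmetic applies to any of the entries, e.g. Wiz's $18.2B at +52% implies a prior of roughly $12.0B in May 2024.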
Episode 16 - Featuring Jacob Pechenik, Co-Founder and CEO of Lettuce Grow

In this episode we get curious about our 'fresh' produce: something we buy each week from the store to feed ourselves and our families without ever questioning how fresh and nutritious it actually is. Our guest today sheds some light on this and offers listeners a farm-to-table solution that is easy to install in your own home. See below for an exclusive listener discount code.

GUEST BIO:
Jacob Pechenik is a passionate entrepreneur who's built a career around questioning and improving the industry status quo. Upon graduating from MIT with a BS in Chemical Engineering, Jacob founded and led TechTrader, an early web-based B2B supply chain platform, followed by the development and launch of YellowJacket Software, a peer-to-peer derivatives trading platform supporting a broad spectrum of segments (from weather and energy to agriculture) in 2004. In 2011, he set his sights on the film industry, founding Venture Forth, a film finance and production company focused on high-quality and impact-driven independent films. His dedication to innovation and socially constructive disruption led to the co-founding in 2017 of The Farm Project, a Public Benefit Corporation on a mission to transform our food system, and Lettuce Grow, an altruistic initiative that aims to reconnect people with their food. Today, Jacob remains committed to delivering healthy, sustainable harvests to every home and enabling growers to have healthier connections with our planet.

HELPFUL LINKS:
www.lettucegrow.com
Listener discount code to use at checkout: BECURIOUS
www.instagram.com/lettucegrow
www.instagram.com/becurious_podcast

CREDITS:
The BE CURIOUS PODCAST is brought to you by ECODA MEDIA
Host: Louise Houghton
Production by: Deviants Media
Producer: Louise Houghton
Assistant Producer: Ralph Cortez
Motion Graphics: Josh Dage
Disclaimer: We recorded this episode ~1.5 months ago, timed for the FastHTML release. It then got bottlenecked behind the Llama 3.1, Winds of AI Winter, and SAM2 episodes, so we're a little late. Since then, FastHTML was released, swyx is building an app in it for AINews, and Anthropic has also released their prompt caching API.

Remember when Dylan Patel of SemiAnalysis coined the GPU Rich vs GPU Poor war? (If not, see our pod with him.) The idea was that if you're GPU poor you shouldn't waste your time trying to solve GPU rich problems (i.e. pre-training large models) and are better off working on fine-tuning, optimized inference, etc. Jeremy Howard (see our "End of Finetuning" episode to catch up on his background) and Eric Ries founded Answer.AI to do exactly that: "Practical AI R&D", which is very much in line with GPU poor needs. For example, one of their first releases was a system based on FSDP + QLoRA that lets anyone train a 70B model on two NVIDIA 4090s. Since then, they have come out with a long list of super useful projects (in no particular order, and non-exhaustive):

* FSDP QDoRA: just as memory-efficient and scalable as FSDP/QLoRA, and critically also as accurate for continued pre-training as full-weight training.
* Cold Compress: a KV cache compression toolkit that lets you scale sequence length without impacting speed.
* colbert-small: a state-of-the-art retriever at only 33M params.
* JaColBERTv2.5: a new state-of-the-art retriever on all Japanese benchmarks.
* gpu.cpp: portable GPU compute for C++ with WebGPU.
* Claudette: a better Anthropic API SDK.

They also recently released FastHTML, a new way to create modern interactive web apps.
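To see why FSDP + QLoRA makes a 70B model trainable on two 24GB consumer cards, a rough back-of-envelope memory estimate helps. These are our own ballpark numbers for illustration, not Answer.AI's exact accounting:

```python
# Rough memory arithmetic for fine-tuning a 70B model with QLoRA + FSDP.
# Ballpark figures for illustration only, not Answer.AI's exact accounting.

PARAMS = 70e9
GB = 1024**3

fp16_weights_gb = PARAMS * 2 / GB      # ~130 GB: far beyond any consumer GPU
nf4_weights_gb = PARAMS * 0.5 / GB     # 4-bit quantized base weights: ~33 GB

# With FSDP, the quantized weights are sharded across the two GPUs,
# so each 24 GB RTX 4090 holds roughly half.
per_gpu_weights_gb = nf4_weights_gb / 2  # ~16 GB per card

# Only the small LoRA adapters (a fraction of a percent of the parameters)
# are trained, so gradient and optimizer state stay comparatively tiny.
assert fp16_weights_gb > 2 * 24   # fp16 can't fit even across both cards
assert per_gpu_weights_gb < 24    # 4-bit shards leave headroom for activations

print(f"fp16 weights: {fp16_weights_gb:.0f} GB, 4-bit shard/GPU: {per_gpu_weights_gb:.0f} GB")
```

The gap between those two numbers is the whole trick: quantize the frozen base weights to 4 bits, shard them, and train only the adapters.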
Jeremy recently released a one-hour "Getting started" tutorial on YouTube; while this isn't AI-related per se, it's close to home for any AI engineer looking to iterate quickly on new products.

In this episode we broke down 1) how they recruit, 2) how they organize what to research, and 3) how the community comes together. At the end, Jeremy gave us a sneak peek at something new that he's working on that he calls dialogue engineering:

"So I've created a new approach. It's not called prompt engineering. I'm creating a system for doing dialogue engineering. It's currently called AI magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it."

He explains it a bit more around 44:53 in the pod, but we'll just have to wait for the public release to figure out exactly what he means.

Timestamps
* [00:00:00] Intro by Suno AI
* [00:03:02] Continuous Pre-Training is Here
* [00:06:07] Schedule-Free Optimizers and Learning Rate Schedules
* [00:07:08] Governance and Structural Issues within OpenAI and Other AI Labs
* [00:13:01] How Answer.ai works
* [00:23:40] How to Recruit Productive Researchers
* [00:27:45] Building a new BERT
* [00:31:57] FSDP, QLoRA, and QDoRA: Innovations in Fine-Tuning Large Models
* [00:36:36] Research and Development on Model Inference Optimization
* [00:39:49] FastHTML for Web Application Development
* [00:46:53] AI Magic & Dialogue Engineering
* [00:52:19] AI wishlist & predictions

Show Notes
* Jeremy Howard
* Previously on Latent Space: The End of Finetuning, NeurIPS Startups
* Answer.ai
* Fast.ai
* FastHTML
* answerai-colbert-small-v1
* gpu.cpp
* Eric Ries
* Aaron DeFazio
* Yi Tay
* Less Wright
* Benjamin Warner
* Benjamin Clavié
* Jono Whitaker
* Austin Huang
* Eric Gilliam
* Tim Dettmers
* Colin Raffel
* Sebastian Raschka
* Carson Gross
* Simon Willison
* Sepp Hochreiter
* Llama 3.1 episode
* Snowflake Arctic
* Ranger Optimizer
* Gemma.cpp
* HTMX
* UL2
* BERT
* DeBERTa
* Efficient finetuning of Llama 3 with FSDP QDoRA
* xLSTM

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: And today we're back with Jeremy Howard, I think your third appearance on Latent Space. Welcome.

Jeremy [00:00:19]: Wait, third? Second?

Swyx [00:00:21]: Well, I grabbed you at NeurIPS.

Jeremy [00:00:23]: I see.

Swyx [00:00:24]: Very fun, standing-outside-in-the-street episode.

Jeremy [00:00:27]: I never heard that, by the way. You've got to send me a link. I've got to hear what it sounded like.

Swyx [00:00:30]: Yeah. Yeah, it's a NeurIPS podcast.

Alessio [00:00:32]: I think the two episodes are six hours, so there's plenty to listen to; we'll make sure to send it over.

Swyx [00:00:37]: Yeah, we're trying this thing where at the major ML conferences we do a little audio tour to give people a sense of what it's like. But the last time you were on, you declared the end of fine-tuning. I know I sort of editorialized the title a little bit, and I know you were slightly uncomfortable with it, but you just owned it anyway. I think you're very good at the hot takes. And we were just discussing in our pre-show that it's really happening, that continued pre-training is really happening.

Jeremy [00:01:02]: Yeah, absolutely.
I think people are starting to understand that treating the three ULMFiT steps of, like, pre-training, you know, and then the kind of like what people now call instruction tuning, and then, I don't know if we've got a general term for this, the DPO, RLHF step, you know, or the task training, they're not actually as separate as we originally suggested they were in our paper, and when you treat it more as a continuum, and that you make sure that you have, you know, more of kind of the original data set incorporated into the later stages, and that, you know, we've also seen with Llama 3, this idea that those later stages can be done for a lot longer. These are all of the things I was kind of trying to describe there. It wasn't the end of fine-tuning, but more that we should treat it as a continuum, and we should have much higher expectations of how much you can do with an already trained model. You can really add a lot of behavior to it, you can change its behavior, you can do a lot. So a lot of our research has been around trying to figure out how to modify the model by a larger amount rather than starting from random weights, because I get very offended at the idea of starting from random weights.

Swyx [00:02:14]: Yeah, I saw that at ICLR in Vienna, there was an outstanding paper about starting transformers from data-driven priors.
I don't know if you saw that one; they called it sort of "never train from scratch", and I think it was kind of rebelling against the sort of random initialization.

Jeremy [00:02:28]: Yeah, you know, that's been our kind of continuous message since we started fast.ai: if you're training from random weights, you better have a really good reason, you know, because it seems so unlikely to me that nobody has ever trained on data that has any similarity whatsoever to the general class of data you're working with, and that's the only situation in which I think starting from random weights makes sense.

Swyx [00:02:51]: The other trend since our last pod that I would point people to is I'm seeing a rise in multi-phase pre-training. So Snowflake released a large model called Snowflake Arctic, where they detailed three phases of training where they had a different data mixture: there was like 75% web in the first instance, and then they reduced the percentage of the web text by 10% each time and increased the amount of code in each phase. And I feel like multi-phase is being called out in papers more. I feel like it's always been a thing, like changing data mix is not something new, but calling it a distinct phase is new, and I wonder if there's something that you're seeing on your end.

Jeremy [00:03:32]: Well, so they're getting there, right? So the point at which they're doing proper continued pre-training is the point at which that becomes a continuum rather than a phase. So the only difference with what I was describing last time is to say, like, oh, there's a function or whatever which is happening every batch. It's not a huge difference. You know, I always used to get offended when people had learning rates that, like, jumped. And so one of the things I started doing early on in fast.ai was to say to people, like, no, your learning rate schedule should be a function, not a list of numbers.
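Jeremy's schedules-as-functions point can be sketched in a few lines. This is a minimal illustration of the idea, not fast.ai's actual implementation:

```python
import math

# A learning-rate schedule as a continuous function of training progress
# (0.0 = start of training, 1.0 = end), rather than a list of per-epoch values.
# Minimal sketch of the idea, not fast.ai's actual implementation.

def cosine_schedule(progress: float, lr_max: float = 3e-4, lr_min: float = 1e-6) -> float:
    """Smoothly anneal from lr_max down to lr_min with no discontinuous jumps."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# Because it's a function, you can query it at any batch, not just at epoch ends:
lrs = [cosine_schedule(step / 1000) for step in range(1001)]
assert abs(lrs[0] - 3e-4) < 1e-12   # starts at lr_max
assert abs(lrs[-1] - 1e-6) < 1e-12  # ends at lr_min, with no jumps in between

# The same pattern extends to any hyperparameter: Adam's epsilon,
# or (per the next exchange) the data-mix ratio during continued pre-training.
```

The appeal is that a function composes: swapping the data mix every batch, as Jeremy suggests for continued pre-training, is just another such function.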
So now I'm trying to give the same idea about training mix.

Swyx [00:04:07]: There's been pretty public work from Meta on schedule-free optimizers. I don't know if you've been following Aaron DeFazio and what he's doing; since you mentioned learning rate schedules, you know, what if you didn't have a schedule?

Jeremy [00:04:18]: I don't care very much, honestly. I don't think the schedule-free optimizer is that exciting. It's fine. We've had non-scheduled optimizers for ages; like Less Wright, who's now at Meta and was part of the fast.ai community, created something called the Ranger optimizer. I actually like having more hyperparameters. You know, as soon as you say schedule-free, then, like, well, now I don't get to choose. And there isn't really a mathematically correct way of... like, I actually try to schedule more parameters rather than fewer. So, like, I like scheduling my epsilon in my Adam, for example. I schedule all the things. But then the other thing we always did with the fast.ai library was make it so you don't have to set any schedules. So fast.ai always supported, like, you didn't even have to pass a learning rate. It would always just try to have good defaults and do the right thing. But to me, I like to have more parameters I can play with if I want to, but you don't have to.

Alessio [00:05:08]: And then on the less technical side, I guess, your issue with the market was some of the large research labs taking all this innovation kind of behind closed doors, and whether or not that's good, which it isn't, and how we could maybe make it more available to people. And then a month after we released the episode, there was the whole Sam Altman drama and all the OpenAI governance issues. And maybe people started to think more, okay, what happens if some of these labs, you know, start to break from within, so to speak? And the alignment of the humans is probably going to fail before the alignment of the models.
So I'm curious, like, if you have any new thoughts, and maybe we can also tie in some of the way that we've been building Answer as, like, a public benefit corp and some of those aspects.

Jeremy [00:05:51]: Sure. So, yeah, I mean, it was kind of uncomfortable because two days before Altman got fired, I did a small public video interview in which I said, I'm quite sure that OpenAI's current governance structure can't continue and that it was definitely going to fall apart. And then it fell apart two days later and a bunch of people were like, what did you know, Jeremy?

Alessio [00:06:13]: What did Jeremy see?

Jeremy [00:06:15]: I didn't see anything. It's just obviously true. Yeah. So my friend Eric Ries and I spoke a lot before that about, you know... Eric's, I think probably most people would agree, the top expert in the world on startup governance and AI governance. And, you know, we could both clearly see that it didn't make sense to have a so-called non-profit where there are people working at a commercial company that's owned by, or controlled nominally by, the non-profit, where the people in the company are being given the equivalent of stock options; like, everybody there was working there expecting to make money largely from their equity. So the idea that a board could then exercise control by saying, like, oh, we're worried about safety issues and so we're going to do something that decreases the profit of the company, when every stakeholder in the company's remuneration is pretty much tied to that profit, it obviously couldn't work. So I mean, that was a huge oversight there by someone. I guess part of the problem is that the kind of people who work at non-profits, and in this case the board, you know, are kind of academics and, you know, people who are kind of true believers. I think it's hard for them to realize that 99.999% of the world is driven very heavily by money, especially huge amounts of money.
So yeah, Eric and I had been talking for a long time before that about what could be done differently, because companies are also sociopathic by design, and so the alignment problem as it relates to companies has not been solved. Like, companies become huge, they devour their founders, they devour their communities, and they do things where even the CEOs of big companies often tell me, like, I wish our company didn't do that thing. You know, I know that if I didn't do it, then I would just get fired and the board would put in somebody else, and the board knows if they don't do it, then their shareholders can sue them because they're not maximizing profitability or whatever. So what Eric's spent a lot of time doing is trying to think about, how do we make companies less sociopathic? Or, you know, maybe a better way to think of it is: how do we make it so that the founders of companies can ensure that their companies continue to actually do the things they want them to do? You know, when we started a company, hey, we very explicitly decided: we've got to start a company, not an academic lab, not a nonprofit. We created a Delaware C-corp, you know, the most company kind of company. But when we did so, we told everybody, including our first investors, which was you, Alessio: we are going to run this company on the basis of maximizing long-term value. And in fact, when we did our second round, which was an angel round, we had everybody invest through a long-term SPV, which we set up, where everybody had to agree to vote in line with long-term value principles. Because it's never enough just to say to people, okay, we're trying to create long-term value here for society as well as for ourselves, and everybody's like, oh yeah, yeah, I totally agree with that.
But when it comes to, like, okay, well here's a specific decision we have to make which will not maximize short-term value, people suddenly change their mind. So, you know, it has to be written into the legal documents of everybody, so that there's no question that that's the way the company has to be managed. So then you mentioned the PBC aspect, Public Benefit Corporation, which I never quite understood previously. And it turns out it's incredibly simple: it took, like, one paragraph added to our corporate documents to become a PBC. It was cheap, it was easy, but it's got this huge benefit, which is, if you're not a public benefit corporation, then somebody can come along and offer to buy you with a stated intention of, like, turning your company into the thing you most hate, right? And if they offer you more than the market value of your company and you don't accept it, then you are not necessarily meeting your fiduciary responsibilities. So the way Eric always described it to me is like, if Philip Morris came along and said, you've got great technology for marketing cigarettes to children, so we're going to pivot your company to do that entirely, and we're going to pay you 50% more than the market value, you're going to have to say yes. If you have a PBC, then you are more than welcome to say no, if that offer is not in line with your stated public benefit. So our stated public benefit is to maximize the benefit to society through using AI. So given that more children smoking doesn't do that, we can say, like, no, we're not selling to you.

Alessio [00:11:01]: I was looking back at some of our emails. You sent me an email on November 13th about talking, and then on the 14th I sent you an email; "working together to free AI" was the subject line. And that was kind of the start of the seed round. And then two days later, someone got fired.
So, you know, you were having these thoughts even before we had a public example of why some of the current structures didn't work. So yeah, you were very ahead of the curve, so to speak. You know, people can read your awesome blog post introducing Answer, and the idea of having an R&D lab versus an R lab here and a D lab somewhere else. I think to me, the most interesting thing has been the hiring, and some of the awesome people that you've been bringing on that maybe don't fit the central casting of Silicon Valley, so to speak. Like, sometimes it's like playing baseball cards, you know; people are like, oh, what teams was this person on, where did they work, versus focusing on ability. So I would love for you to give a shout out to some of the awesome folks that you have on the team.

Jeremy [00:11:58]: So, you know, there's this graphic going around describing the people at xAI, you know, the Elon Musk thing. And they are all connected to multiple of Stanford, Meta, DeepMind, OpenAI, Berkeley, Oxford. Look, these are all great institutions and they have good people, and I'm definitely not at all against that, but damn, there's so many other people. And one of the things I found really interesting is almost any time I see something which I think, like, this is really high-quality work and it's something I don't think would have been built if that person hadn't built the thing right now, I nearly always reach out to them and ask to chat. And I tend to dig in to find out, like, okay, you know, why did you do that thing? Everybody else has done this other thing; your thing's much better, but it's not what other people are working on. And like 80% of the time, I find out the person has a really unusual background.
So like, often they'll have, like, either they came from poverty and didn't get an opportunity to go to a good school, or had dyslexia and, you know, got kicked out of school in year 11, or they had a health issue that meant they couldn't go to university, or something happened in their past and they ended up out of the mainstream. And then they kind of succeeded anyway. Those are the people that throughout my career I've tended to kind of accidentally hire more of, but it's not exactly accidental. It's like, when I see two people who have done extremely well: one of them did extremely well in exactly the normal way, from a background entirely pointing in that direction, and they cleared all the hurdles to get there. And like, okay, that's quite impressive, you know. But another person did just as well, despite lots of constraints, doing things in really unusual ways and coming up with different approaches. That's normally the person I'm likely to find useful to work with, because they're often risk-takers, they're often creative, they're often extremely tenacious, they're often very open-minded. So that's the kind of folks I tend to find myself hiring. So now at Answer.ai, it's a group of people that are strong enough that nearly every one of them has independently come to me in the past few weeks and told me that they have imposter syndrome and they're not convinced that they're good enough to be here. And it kind of got to the point where I was like, okay, I don't think it's possible that all of you are so far behind your peers that you shouldn't get to be here. But I think part of the problem is, as an R&D lab, the great developers look at the great researchers and they're like, wow, these big-brained, crazy research people with all their math and s**t, they're too cool for me, oh my God.
And then the researchers look at the developers and they're like, oh, they're killing it, making all this stuff with all these people using it and talking on Twitter about how great it is. I think they're both a bit intimidated by each other, you know. And so I have to kind of remind them, like, okay, there are lots of things in this world where you suck compared to lots of other people in this company, but also vice versa, you know, for all things. And the reason you came here is because you wanted to learn about those other things from those other people and have an opportunity to bring them all together into a single unit. You know, it's not reasonable to expect you're going to be better at everything than everybody else. I guess the other part of it is, for nearly all of the people in the company, to be honest, they have nearly always been better than everybody else at nearly everything they're doing, nearly everywhere they've been. So it's kind of weird to be in this situation now where it's like, gee, I can clearly see that I suck at this thing that I'm meant to be able to do compared to these other people, where for some things I'm like the worst in the company. So I think that's a healthy place to be, you know, as long as you keep reminding each other that that's actually why we're here. And like, it's all a bit of an experiment; we don't have any managers. We don't have any hierarchy from that point of view. So for example, I'm not a manager, which means I don't get to tell people what to do or how to do it or when to do it. Yeah, it's been a bit of an experiment to see how that would work out, and it's been great. So for instance, Ben Clavié, who you might have come across, he's the author of RAGatouille, he's the author of rerankers, super strong information retrieval guy. And a few weeks ago, you know, this additional channel appeared on our private Discord, called BERT24.
And these people started appearing in our collab section (we have a collab section for collaborating with outsiders): all these names that I recognize, and they're all talking about the next generation of BERT. And I start following along; it's like, okay, Ben decided, I think quite rightly, that we need a new BERT. Because so many people are still using BERT, and it's still the best at so many things, but it actually doesn't take advantage of lots of best practices. And so he just went out and found basically everybody who's created better BERTs in the last four or five years and brought them all together; suddenly there's this huge collaboration going on. So yeah, I didn't tell him to do that. He didn't ask my permission to do that. And then, like, Benjamin Warner dived in, and he's like, oh, I created a whole transformers-from-scratch implementation designed to be maximally hackable. He originally did it largely as a teaching exercise to show other people, but he was like, I could, you know, use that to create a really hackable BERT implementation. In fact, he didn't say that. He said, I just did do that, you know, and I created a repo, and then everybody starts using it. They're like, oh my god, this is amazing. I can now implement all these other BERT things. And it's not just Answer.AI folks there, you know; there's lots of people who have contributed new data set mixes and blah, blah, blah. So, I mean, I can help in the same way that other people can help. So like, then Ben Clavié reached out to me at one point and said, can you help me, like, what have you learned over time about how to manage intimidatingly capable and large groups of people who you're nominally meant to be leading? And so, you know, I like to try to help, but I don't direct.
Another great example was Kerem, who, after our FSDP QLoRA work, decided quite correctly that it didn't really make sense to use LoRA in today's world; you want to use the normalized version, which is called DoRA. Like two or three weeks after we did FSDP QLoRA, he just popped up and said, okay, I've just converted the whole thing to DoRA, and I've also created these vLLM extensions, and I've got all these benchmarks, and, you know, now I've got training of quantized models with adapters that is as fast as LoRA and, weirdly, actually better than fine-tuning. Just like, okay, that's great, you know. And yeah, so the things we've done to try to help make these things happen as well is, we don't have any required meetings, you know, but we do have a meeting for each pair of major time zones that everybody's invited to, and, you know, people see their colleagues doing stuff that looks really cool and say, like, oh, how can I help, you know, or how can I learn, or whatever. So another example is Austin, who, you know, has an amazing background. He ran AI at Fidelity, he ran AI at Pfizer, he ran browsing and retrieval for Google's DeepMind stuff, created Gemma.cpp, and he's been working on a new system to make it easier to do WebGPU programming, because, again, he quite correctly identified... so I said to him, like, okay, I want to learn about that. Not an area that I have much expertise in, so, you know, he's going to show me what he's working on and teach me a bit about it, and hopefully I can help contribute. I think one of the key things that's happened in all of these is everybody understands what Eric Gilliam, who wrote the second blog post in our series, the R&D historian, describes as a large yard with narrow fences. Everybody has total flexibility to do what they want.
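For readers unfamiliar with the LoRA-to-DoRA switch Kerem made: DoRA's core move is to decompose each weight into a scalar magnitude and a unit direction, then adapt the two separately. A toy sketch of that decomposition (our illustration of the published idea, not Kerem's code):

```python
import math

# Conceptual sketch of DoRA's weight decomposition: a weight "column" is
# split into a scalar magnitude and a unit-norm direction.
# Illustration of the published idea only, not Kerem's actual implementation.

w = [3.0, 4.0]                                 # a toy weight column
magnitude = math.sqrt(sum(x * x for x in w))   # ||w|| = 5.0
direction = [x / magnitude for x in w]         # unit vector [0.6, 0.8]

# LoRA perturbs w directly with a low-rank update. DoRA instead applies the
# low-rank update to the direction and learns the magnitude as its own
# trainable scalar, which tracks full fine-tuning dynamics more closely.
assert abs(magnitude - 5.0) < 1e-12
assert abs(sum(d * d for d in direction) - 1.0) < 1e-12  # direction stays unit-norm
```

Because the quantized base weights stay frozen either way, this slots straight into the FSDP + QLoRA setup Jeremy describes, which is why the conversion took weeks rather than months.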
We all understand kind of roughly why we're here, you know; we agree with the premises around, like, everything's too expensive, everything's too complicated, people are building too many vanity foundation models rather than taking better advantage of fine-tuning. Like, there's this general sense of, we're all on the same wavelength about, you know, all the ways in which current research is fucked up, and all the ways in which we're worried about centralization. We all care a lot about not just research for the point of citations, but research that actually wouldn't have happened otherwise, and actually is going to lead to real-world outcomes. And so, yeah, with this kind of shared vision, people understand. So when I say, like, oh, well, you know, tell me, Ben, about BERT24, what's that about? And he's like, you know, you can see it from an accessibility point of view, or you can see it from an actual practical impact point of view: there's far too much focus on decoder-only models, and BERT's used in all of these different places in industry, and so I can see, in terms of our basic principles and what we're trying to achieve, this seems like something important. And so I think it's really helpful that we have that kind of shared perspective, you know?

Alessio [00:21:14]: Yeah. And before we maybe talk about some of the specific research: when you're reaching out to people, interviewing them, what are some of the traits, like, how do these things come out, you know, usually? Is it working on side projects that you're already familiar with? Is there anything in the interview process that helps you screen for people that are less pragmatic and more research-driven, versus some of these folks that are just gonna do it, you know?
They're not waiting for, like, the perfect process.

Jeremy [00:21:40]: Everybody who comes through recruiting is interviewed by everybody in the company. You know, our goal is 12 people, so it's not an unreasonable amount. The other thing to say is, everybody so far who's come into the recruiting pipeline, everybody bar one, has been hired. Which is to say our original curation has been good. And that's actually pretty easy, because nearly everybody who's come in through the recruiting pipeline are people I know pretty well. So Jono Whitaker and I, you know, he worked on the stable diffusion course we did. He's outrageously creative and talented, and he's a super enthusiastic tinkerer, just likes making things. Benjamin was one of the strongest parts of the fast.ai community, which is now the alumni; it's, like, hundreds of thousands of people. And, you know, again, they're not people who a normal interview process would pick up, right? So Benjamin doesn't have any qualifications in math or computer science. Jono was living in Zimbabwe, you know, he was working on helping some African startups, you know, but not FAANG kind of credentials. But yeah, I mean, when you actually see people doing real work and they stand out above... you know, we've got lots of Stanford graduates and OpenAI people and whatever in our alumni community as well. You know, when you stand out above all of those people anyway, obviously you've got something going for you. You know, Austin, him and I worked together on the masks study we did in the Proceedings of the National Academy of Sciences. You know, we had worked together, and again, that was a group of basically the 18 or 19 top experts in the world on public health and epidemiology and research design and so forth, and Austin was, you know, one of the strongest people in that collaboration.
So yeah, you know, like, I've been lucky enough to have had opportunities to work with some people who are great, and, you know, I'm a very open-minded person, so I kind of am always happy to try working with pretty much anybody, and some people stand out. You know, there have been some exceptions, people I haven't previously known, like Ben Clavié, actually; I didn't know him before. But, you know, with him, you just read his code, and I'm like, oh, that's really well-written code. And like, it's not written exactly the same way as everybody else's code, and it's not written to do exactly the same thing as everybody else's code. So yeah, and then when I chatted to him, it's just like, I don't know, I felt like we'd known each other for years, like we were just on the same wavelength, but I could pretty much tell that was going to happen just by reading his code. I think you express a lot in the code you choose to write and how you choose to write it, I guess. You know, or another example, a guy named Vik, who was previously the CEO of Dataquest, and like, in that case, you know, he's created a really successful startup. He won the first, basically, Kaggle NLP competition, which was automatic essay grading. He's got the current state-of-the-art OCR system, Surya. Again, he's just a guy who obviously just builds stuff, you know; he doesn't ask for permission, he doesn't need any external resources. Actually, Kerem's another great example of this. I mean, I already knew Kerem very well because he was my best-ever master's student, but it wasn't a surprise to me when he then went off to create the world's state-of-the-art language model in Turkish, on his own, in his spare time, with no budget, from scratch. This is not fine-tuning or whatever; he, like, went back to Common Crawl and did everything. Yeah, it's kind of... I don't know what I'd describe that process as, but it's not at all based on credentials.

Swyx [00:25:17]: Assemble based on talent, yeah.
We wanted to dive in a little bit more on, you know, turning from the people side of things into the technical bets that you're making. Just a little bit more on BERT. We actually just did an interview with Yi Tay from Reka; I don't know if you're familiar with his work, but he's also made an encoder-decoder bet, and one of his arguments was that people kind of over-index on the decoder-only GPT-3-type paradigm. I wonder if you have thoughts there that are maybe non-consensus as well.

Jeremy [00:25:45]: Yeah, no, absolutely. So I think it's a great example. So one of the people we're collaborating with a little bit on BERT24 is Colin Raffel, who is the guy behind, yeah, most of that stuff, you know; between that and UL2, there's a lot of really interesting work. And so one of the things I've been encouraging the BERT group to do, and Colin has as well, is to consider using a T5 pre-trained encoder backbone as a thing you fine-tune, which I think would be really cool. You know, Colin was also saying, actually, just use encoder-decoder as your BERT; you know, why don't you use that as a baseline, which I also think is a good idea. Yeah, look.

Swyx [00:26:25]: What technical arguments are people under-weighting?

Jeremy [00:26:27]: I mean, Colin would be able to describe this much better than I can, but I'll give my slightly non-expert attempt. Look, I mean, think about diffusion models, right? Like in stable diffusion, we use things like a U-Net. You have this kind of downward path, and then in the upward path you have the cross connections; it's not attention, but it's a similar idea, right? You're inputting the original encoding path into your decoding path. It's critical to make it work, right? Because otherwise, in the decoding part, the model has to do so much kind of from scratch. So like, if you're doing translation, that's a classic kind of encoder-decoder example.
If it's decoder only, you never get the opportunity to find the right, you know, feature engineering, the right feature encoding for the original sentence. And it kind of means then on every token that you generate, you have to recreate the whole thing, you know? So if you have an encoder, it's basically saying like, okay, this is your opportunity, model, to create a really useful feature representation for your input information. So I think there are really strong arguments for encoder-decoder models anywhere that there is this kind of context or source thing. And then why encoder only? Well, because so much of the time what we actually care about is a classification, you know? It's an output. It's not generating an arbitrary-length sequence of tokens. So anytime you're not generating an arbitrary-length sequence of tokens, decoder models don't seem to make much sense. Now the interesting thing is, you see on like Kaggle competitions that decoder models still are at least competitive with things like DeBERTa v3. But they have to be way bigger to be competitive with things like DeBERTa v3. And the only reason they are competitive is because people have put a lot more time and money and effort into training the decoder-only ones, you know? There isn't a recent DeBERTa. There isn't a recent BERT. Yeah, it's a whole part of the world that people have slept on a little bit. And this is just what happens. This is how trends happen, rather than, to me, everybody should be like, oh, let's look at the thing that has shown signs of being useful in the past, but nobody really followed up on properly. That's the more interesting path, you know, where people tend to be like, oh, I need to get citations. So what's everybody else doing? Can I make it 0.1% better, you know, or 0.1% faster? That's what everybody tends to do. Yeah.
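One toy way to see the encoder argument (my own illustration, not from the conversation): in a decoder-only model, every source token is processed under a causal mask, so it can only attend backwards, while an encoder builds the source representation with full bidirectional context.

```python
# Toy illustration (no specific library): which positions each token
# can attend to under a causal mask vs. a bidirectional encoder.

def causal_visibility(n: int) -> list:
    """Decoder-only: position i attends only to positions 0..i."""
    return [[j <= i for j in range(n)] for i in range(n)]

def bidirectional_visibility(n: int) -> list:
    """Encoder: every position attends to the full input, so the
    source representation can use context from both directions."""
    return [[True] * n for _ in range(n)]

# The first source token in a decoder-only model never "sees" the rest
# of the sentence, which is the gap an encoder closes.
print(causal_visibility(3)[0])         # [True, False, False]
print(bidirectional_visibility(3)[0])  # [True, True, True]
```

This is the sense in which the decoder-only model has to "recreate the whole thing" per token: no layer ever computes a representation of the source that uses the full sentence at once.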
So I think Yi Tay's work commercially now is interesting, because here's a whole model that's been trained in a different way, so there's probably a whole lot of tasks it's better at than GPT and Gemini and Claude. So that should be a good commercial opportunity for them if they can figure out what those tasks are. Swyx [00:29:07]: Well, if rumors are to be believed, and he didn't comment on this, but, you know, Snowflake may figure out the commercialization for them. So we'll see. Jeremy [00:29:14]: Good. Alessio [00:29:16]: Let's talk about FSDP, QLoRA, QDoRA, and all of that awesome stuff. One of the things we talked about last time is that some of these models are meant to run on systems that nobody can really own, no single person. And then you were like, well, what if you could fine-tune a 70B model on like a 4090? And I was like, that sounds great, Jeremy, but can we actually do it? And then obviously you all figured it out. Can you maybe tell us some of the war stories behind that: the idea behind FSDP, which is fully sharded data parallel computation, and then QLoRA, which is, don't touch all the weights, quantize the model, and then within the quantized model only train certain layers instead of doing everything. Jeremy [00:29:57]: Well, do the adapters. Yeah. Alessio [00:29:59]: Yeah. Yeah. Do the adapters. Yeah. I will leave the floor to you. I think before you published it, nobody thought this was a short-term thing that we were just going to have. And now it's like, oh, obviously you can do it, but it's not that easy. Jeremy [00:30:12]: Yeah. I mean, to be honest, it was extremely unpleasant work to do. It's not at all enjoyable. I kind of did version 0.1 of it myself before we had launched the company, or at least the kind of like the pieces. They're all pieces that are difficult to work with, right?
So for the quantization, you know, I chatted to Tim Dettmers quite a bit and, you know, he very much encouraged me by saying, yeah, it's possible. He actually thought it'd be easy. It probably would be easy for him, but I'm not Tim Dettmers. And, you know, so he wrote bitsandbytes, which is his quantization library. You know, he wrote that for a paper. He didn't write that to be production-grade code. It's now like everybody's using it, at least the CUDA bits. So it's not particularly well structured. There are lots of code paths that never get used. There are multiple versions of the same thing. You have to try to figure it out. So trying to get my head around that was hard. And, you know, because the interesting bits are all written in CUDA, it's hard to step through it and see what's happening. And then, you know, FSDP is this very complicated library in PyTorch, which is not particularly well documented. So the only real way to understand it properly is, again, just read the code and step through the code. And then bitsandbytes doesn't really work in practice unless it's used with PEFT, the Hugging Face library, and PEFT doesn't really work in practice unless you use it with other things. And there's a lot of coupling in the Hugging Face ecosystem where none of it works separately. You have to use it all together, which I don't love. So yeah, trying to just get a minimal example that I could play with was really hard. And so I ended up having to rewrite a lot of it myself to kind of create this minimal script. One thing that helped a lot was that Meta had this llama-recipes repo that came out just a little bit before I started working on that. And they had a kind of model example of, here's how to train FSDP with LoRA (it didn't work with QLoRA) on Llama.
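Some back-of-envelope arithmetic (my numbers, not from the conversation) shows why both pieces were needed at once: a 70B-parameter model's weights alone, in 16-bit, dwarf a 24 GB RTX 4090, while 4-bit weights can be sharded across a couple of consumer cards.

```python
# Rough memory math for the base weights only (ignores activations,
# optimizer state, and quantization block overhead).

def weight_gigabytes(n_params: float, bits_per_weight: float) -> float:
    """GiB needed to hold the weights alone."""
    return n_params * bits_per_weight / 8 / 1024**3

fp16_gb = weight_gigabytes(70e9, 16)  # ~130 GiB: hopeless on consumer GPUs
nf4_gb = weight_gigabytes(70e9, 4)    # ~33 GiB: shardable across 2x 24 GB cards

print(round(fp16_gb), round(nf4_gb))
```

So quantization (QLoRA) shrinks the weights by roughly 4x, and FSDP's sharding splits what remains across devices; neither alone gets a 70B model onto gaming GPUs.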
A lot of the stuff I discovered, the interesting stuff, would be put together by Les Wright, who's, he was actually the guy in the Fast.ai community I mentioned who created the Ranger optimizer. So he's doing a lot of great stuff at Meta now. So yeah, that helped get some minimum stuff going, and then it was great once Benjamin and Jono joined full time. And so we basically hacked at that together, and then Karim joined like a month later or something. And it was just a lot of fiddly, detailed engineering on barely documented bits of obscure internals. So my focus was to see if it could work, and I kind of got a bit of a proof of concept working, and then the rest of the guys actually did all the work to make it work properly. And, you know, every time we thought we had something, you know, we needed to have good benchmarks, right? It's very easy to convince yourself you've done the work when you haven't, you know. So then we'd actually try lots of things and be like, oh, in these really important cases, the memory use is higher, you know, or it's actually slower. And we'd go in and we'd just find all these things that were nothing to do with our library that just didn't work properly. And nobody had noticed they hadn't worked properly, because nobody had really benchmarked them properly. So we ended up, you know, trying to fix a whole lot of different things. And even as we did so, new regressions were appearing in like Transformers and stuff that Benjamin then had to go away and figure out, like, oh, how come flash attention doesn't work in this version of Transformers anymore with this set of models? And, oh, it turns out they accidentally changed this thing, so it doesn't work. You know, there's just not a lot of really good performance-type evals going on in the open-source ecosystem.
So there's an extraordinary number of things where people say, oh, we built this thing and it has this result, and when you actually check it, it doesn't. So yeah, there's a shitload of war stories from getting that thing to work. And it did require a particularly tenacious group of people, and a group of people who don't mind doing a whole lot of really janitorial work, to be honest, to get the details right, to check them. Yeah. Alessio [00:34:09]: We had Tri Dao on the podcast, and we talked about how a lot of it is systems work to make some of these things work. It's not just beautiful, pure math that you do on a blackboard. It's like, how do you get into the nitty gritty? Jeremy [00:34:22]: I mean, flash attention is a great example of that. It basically is just, oh, let's just take the attention and do the tiled version of it, which sounds simple enough, you know, but then implementing that is challenging at lots of levels. Alessio [00:34:36]: Yeah. What about inference? You know, obviously you've done all this amazing work on fine-tuning. Do you have any research you've been doing on the inference side, how to make local inference really fast on these models too? Jeremy [00:34:47]: We're doing quite a bit on that at the moment. We haven't released too much there yet. But one of the things I've been trying to do is also just to help other people. And one of the nice things that's happened is that a couple of folks at Meta, including Mark Saroufim, have done a nice job of creating this CUDA MODE community of people working on CUDA kernels or learning about that. And I tried to help get that going well as well and did some lessons to help people get into it. So there's a lot going on in both inference and fine-tuning performance. And a lot of it's actually happening kind of related to that. So the PyTorch team have created this torchao project on quantization.
And so there's a big overlap now between the kind of fast.ai, Answer.AI, and CUDA MODE communities of people working on stuff for both inference and fine-tuning. But we're getting close now. You know, our goal is that nobody should be merging models, nobody should be downloading merged models; everybody should be using basically quantized weights plus adapters for almost everything and just downloading the adapters. And that should be much faster. So that's kind of the place we're trying to get to. It's difficult, you know, because Karim's been doing a lot of work with vLLM, for example. These inference engines are pretty complex bits of code. They have a whole lot of custom kernel stuff going on as well, as do the quantization libraries. So we've also been collaborating quite a bit with the folks who do HQQ, which is a really great quantization library and works super well. So yeah, there are a lot of other people outside Answer.AI that we're working with a lot who are really helping on all this open-source performance optimization stuff. Swyx [00:36:27]: Just to follow up on merging models, I picked up there that you said nobody should be merging models. That's interesting, because obviously a lot of people are experimenting with this and finding interesting results. I would say in defense of merging models, you can do it without data. That's probably the only thing that's going for it. Jeremy [00:36:45]: To explain, it's not that you shouldn't merge models. You shouldn't be distributing a merged model. You should distribute a merged adapter 99% of the time. And actually, often one of the best things happening in the model merging world is that merging adapters works better anyway. The point is, Sean, that once you've got your new model, if you distribute it as an adapter that sits on top of a quantized model that somebody's already downloaded, then it's a much smaller download for them.
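The download-size argument is easy to make concrete. In a LoRA-style adapter (toy numbers of my choosing, not from the episode), the frozen weight matrix W gets a low-rank update B @ A, so you only ship rank * (d_in + d_out) extra parameters per adapted matrix instead of d_in * d_out:

```python
# Toy LoRA size comparison: the effective weight is W + B @ A, where
# B is (d_out x r) and A is (r x d_in) with a small rank r.

def lora_adapter_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters shipped for one adapted matrix (B and A only)."""
    return rank * (d_in + d_out)

d = 4096                      # a typical hidden size, for illustration
full = d * d                  # shipping the fully fine-tuned matrix itself
lora = lora_adapter_params(d, d, rank=16)

print(full // lora)  # 128: the adapter is 128x smaller for this matrix
```

That ratio is per matrix, and the frozen base is already on the user's disk, which is why shipping adapters instead of merged models is such a big win.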
And also the inference should be much faster, because you're not having to transfer FP16 weights from HBM at all or ever load them off disk. You know, all the main weights are quantized, and the only floating-point weights are in the adapters. So that should make both inference and fine-tuning faster. Okay, perfect. Swyx [00:37:33]: We're moving on a little bit to the rest of the fast universe. I would have thought that, you know, once you started Answer.AI, the sort of fast universe would be kind of on hold. And then today you just dropped fastlite, and it looks like, you know, there's more activity going on in sort of fastland. Jeremy [00:37:49]: Yeah. So fastland and answerland are not really distinct things. Answerland is kind of like fastland grown up and funded. They both have the same mission, which is to maximize the societal benefit of AI broadly. We want to create thousands of commercially successful products at Answer.AI. And we want to do that with like 12 people. So that means we need a pretty efficient stack, you know, like quite a few orders of magnitude more efficient, not just for creation, but for deployment and maintenance, than anything that currently exists. People often forget about the D part of our R&D firm. So we've got to be extremely good at creating, deploying, and maintaining applications, not just models. Much to my horror, the story around creating web applications is much worse now than it was 10 or 15 years ago. If I say to a data scientist, here's how to create and deploy a web application, you know, either you have to learn JavaScript or TypeScript and all the complex libraries like React and stuff, and all the complex details around security and web protocol stuff, around how you then talk to a backend, and then all the details about creating the backend. You know, if that's your job, and you have specialists who work in just one of those areas, it is possible for that to all work.
But compared to like, oh, write a PHP script and put it in the home directory that you get when you sign up to this shell provider, which is what it was like in the nineties, you know: here are those 25 lines of code and you're done, and now you can pass that URL around to all your friends; or put this .pl file inside the cgi-bin directory that you got when you signed up to this web host. So yeah, the thing I've been mainly working on the last few weeks is fixing all that. And I think I fixed it. I don't know if this is an announcement, but I'll tell you guys: so yeah, there's this thing called FastHTML, which basically lets you create a complete web application in a single Python file. Unlike excellent projects like Streamlit and Gradio, you're not working on top of a highly abstracted thing that's got nothing to do with web foundations. You're working with web foundations directly, but you're able to do it by using pure Python. There are no templates, there's no Jinja, there are no separate CSS and JavaScript files. It looks and behaves like a modern SPA web application. And you can create components for DaisyUI, or Bootstrap, or Shoelace, or whatever fancy JavaScript and/or CSS Tailwind etc. library you like, but you can write it all in Python. You can pip install somebody else's set of components and use them entirely from Python. You can develop and prototype it all in a Jupyter notebook if you want to. It all displays correctly, so you can interactively do that. And then you mentioned fastlite: specifically, if you're using SQLite in particular, it's ridiculously easy to have that persistence, and all of your handlers will be passed database-ready objects automatically that you can just call .delete, .update, .insert on. Yeah, you get sessions, you get security, you get all that. So again, like with most everything I do, it's very little code. It's mainly tying together really cool stuff that other people have written.
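The "HTML components in pure Python" idea can be sketched in a few lines. This is only an illustrative toy of the pattern, not FastHTML's actual API:

```python
# Toy "components as Python functions" sketch (not FastHTML itself):
# each component renders its children and attributes to an HTML
# string, so pages compose as ordinary Python function calls.

def make_tag(name: str):
    def component(*children, **attrs) -> str:
        # class_ -> class, since "class" is a reserved word in Python
        rendered = "".join(f' {k.rstrip("_")}="{v}"' for k, v in attrs.items())
        inner = "".join(str(c) for c in children)
        return f"<{name}{rendered}>{inner}</{name}>"
    return component

Div, P, A = make_tag("div"), make_tag("p"), make_tag("a")

page = Div(P("Hello"), A("docs", href="/docs"), class_="card")
print(page)  # <div class="card"><p>Hello</p><a href="/docs">docs</a></div>
```

Because the components are just functions returning values, they compose, nest, and print fine in a notebook, which is the property the single-file, Jupyter-friendly workflow relies on.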
You don't have to use it, but a lot of the best stuff comes from its incorporation of HTMX, which to me is basically the thing that changes your browser to make it work the way it always should have. It just does four small things, but those four small things remove constraints that HTML should never have had. It sits on top of Starlette, which is a very nice kind of lower-level platform for building these kinds of web applications. The actual interface matches as closely as possible to FastAPI, which is a really nice system for creating the kind of classic JSON API applications. And Sebastián, who wrote FastAPI, has been kind enough to help me think through some of these design decisions, and so forth. I mean, everybody involved has been super helpful. Actually, I chatted to Carson, who created HTMX, about it. Some of the folks involved in Django, like everybody in the community I've spoken to, definitely realizes there's a big gap to be filled around, like, a highly scalable, web-foundation-based, pure Python framework with a minimum of fuss. So yeah, I'm getting a lot of support and trying to make sure that FastHTML works well for people. Swyx [00:42:38]: I would say, when I heard about this, I texted Alessio. I think this is going to be pretty huge. People consider Streamlit and Gradio to be the state of the art, but I think there's so much to improve, and having what you call web foundations and web fundamentals at the core of it, I think, would be really helpful. Jeremy [00:42:54]: I mean, it's based on 25 years of thinking and work for me. So like, FastMail was built on a system much like this one, but that was in Perl. And so I spent, you know, 10 years working on that. We had millions of people using that every day, really pushing it hard. And I really always enjoyed working in that. Yeah.
So, you know, and obviously lots of other people have done great stuff, particularly HTMX. So I've been thinking about, yeah, how do I pull together the best of the web framework I created for FastMail with HTMX? There are also things like Pico CSS, which is the CSS system that FastHTML comes with by default. Although, as I say, you can pip install anything you want to, we try to make it so that just out of the box, you don't have any choices to make. Yeah. You can make choices, but for most people, you just, you know, it's like the PHP-in-your-home-directory thing. You just start typing, and just by default you'll get something which looks and feels, you know, pretty okay. And if you want to then write a version of Gradio or Streamlit on top of that, you totally can. And then the nice thing is, if you write it in kind of the Gradio equivalent, which will be, you know, I imagine we'll create some kind of pip-installable thing for that, once you've outgrown it, or if you outgrow it, it's not like, okay, throw that all away and start again in this whole separate language. It's this kind of smooth, gentle path that you can take step by step, because it's all just standard web foundations all the way, you know. Swyx [00:44:29]: Just to wrap up the sort of open-source work that you're doing, you're aiming to create thousands of projects with a very, very small team. I haven't heard you mention once AI agents or AI developer tooling or AI code maintenance. I know you're very productive, but, you know, what is the role of AI in your own work? Jeremy [00:44:47]: So I'm making something. I'm not sure how much I want to say just yet. Swyx [00:44:52]: Give us a nibble. Jeremy [00:44:53]: All right. I'll give you the key thing. So I've created a new approach. It's not called prompt engineering. It's called dialogue engineering. And I'm creating a system for doing dialogue engineering.
It's currently called AI Magic. I'm doing most of my work in this system, and it's making me much more productive than I was before I used it. So I always just build stuff for myself and hope that it'll be useful for somebody else. Think about ChatGPT with Code Interpreter, right? The basic UX is the same as a 1970s teletype, right? So if you wrote APL on a teletype in the 1970s, you typed onto a thing, your words appeared at the bottom of a sheet of paper, and you'd hit enter and it would scroll up. And then the answer from APL would be printed out, scroll up, and then you would type the next thing. Which is also the way, for example, a shell works, like bash or zsh or whatever. It's not terrible, you know, we all get a lot done in these very, very basic teletype-style REPL environments, but I've never felt like it's optimal, and everybody else has just copied ChatGPT. So it's also the way Bard and Gemini work. It's also the way the Claude web app works. And then you add Code Interpreter. And the most you can do is to plead with ChatGPT to write the kind of code you want. It's pretty good for very, very, very beginner users who can't code at all; by default now the code's even hidden away, so you never even have to see that it ever happened. But for somebody who's wanting to learn to code, or who already knows a bit of code, or whatever, it seems really not ideal. So okay, that's one end of the spectrum. The other end of the spectrum, which is where Sean's work comes in, is: oh, you want to do more than ChatGPT? No worries. Here is Visual Studio Code. I run it. There's an empty screen with a flashing cursor. Okay, start coding, you know. And it's like, okay, you can use systems like Sean's, or like Cursor or whatever, to be like, okay, Cmd-K in Cursor, like, create a form that, blah, blah, blah.
But in the end, it's a convenience over the top of this incredibly complicated system that full-time, sophisticated software engineers have designed over the past few decades, in a totally different environment, as a way to build software, you know. And so we're trying to shoehorn AI into that. And it's not easy to do. And I think there are much better ways of thinking about the craft of software development in a language model world, to be much more interactive, you know. So the thing that I'm building is neither of those things. It's something between the two. And it's built around this idea of crafting a dialogue, you know, where the outcome of the dialogue is the artifacts that you want, whether it be a piece of analysis, or a Python library, or a technical blog post, or whatever. So as part of building that, I've created something called Claudette, which is a library for Claude. I've created something called Cosette, which is a library for OpenAI. They're libraries which are designed to make those APIs much more usable, much easier to use, much more concise. And then I've written AI Magic on top of those. And that's been an interesting exercise, because I did Claudette first, and I was looking at what Simon Willison did with his fantastic LLM library. And his library is designed around, let's make something that supports all the LLM inference engines and commercial providers. I thought, okay, what if I did something different, which is make something that's as Claude-friendly as possible and forget everything else. So that's what Claudette was. So for example, one of the really nice things in Claude is prefill. So by telling the assistant that this is what your response starts with, there are a lot of powerful things you can take advantage of. So yeah, I created Claudette to be as Claude-friendly as possible.
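As a sketch of the prefill idea: Claude's chat API accepts a message list whose final entry is a partial assistant turn, and the model must continue from exactly that prefix. The helper below is my own illustration of that message shape, not Claudette's actual interface:

```python
# Illustrative only: building a chat request whose final assistant
# message is a prefill. The model continues from that exact prefix,
# which is handy for forcing output formats.

def with_prefill(user_msg: str, prefill: str) -> list:
    return [
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": prefill},
    ]

# Forcing a JSON array response by prefilling the opening bracket:
msgs = with_prefill("List three primary colors as a JSON array.", "[")
print(msgs[-1])  # {'role': 'assistant', 'content': '['}
```

The completion the model returns then starts mid-array, so the caller prepends the prefill to get the full response.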
And then after I did that, and then particularly with GPT-4o coming out, I kind of thought, okay, now let's create something that's as OpenAI-friendly as possible. And then I tried to look to see, well, where are the similarities and where are the differences? And can I make them compatible in places where it makes sense for them to be compatible, without losing out on the things that make each one special for what it is? So yeah, those are some of the things I've been working on in that space. And I'm thinking we might launch AI Magic via a course called How to Solve It With Code. The name is based on the classic Pólya book, How to Solve It, which is, you know, one of the classic math books of all time, where we're basically going to try to show people how to solve challenging problems that they didn't think they could solve, without doing a full computer science course, by taking advantage of a bit of AI and a bit of practical skills, particularly for this whole generation of people who are learning to code with and because of ChatGPT. Like, I love it. I know a lot of people who didn't really know how to code, but they've created things because they use ChatGPT, but they don't really know how to maintain them or fix them or add things to them that ChatGPT can't do, because they don't really know how to code. And so this course will be designed to show you how you can either become a developer who can supercharge their capabilities by using language models, or become a language-model-first developer who can supercharge their capabilities by understanding a bit about process and fundamentals. Alessio [00:50:19]: Nice. That's a great spoiler. You know, I guess the fourth time you're going to be on Latent Space, we're going to talk about AI Magic. Jeremy, before we wrap, this was just a great run-through of everything.
What are the things that, when you next come on the podcast in nine, twelve months, we're going to be like, man, Jeremy was really ahead of it? Like, is there anything that you see in the space that maybe people are not talking about enough? You know, what's the next company that's going to fall, like, have drama internally, anything on your mind? Jeremy [00:50:47]: You know, hopefully we'll be talking a lot about FastHTML, and hopefully the international community that at that point has come up around that. And also about AI Magic and about dialogue engineering. Hopefully dialogue engineering catches on, because I think it's the right way to think about a lot of this stuff. What else? Just trying to think about it all on the research side. Yeah. I think, you know, I mean, we've talked about a lot of it. Like, I think encoder-decoder architectures, encoder-only architectures, hopefully we'll be talking about the whole renewed interest in BERT that BERT24 stimulated. Swyx [00:51:17]: There's a state space model that came out today that might be interesting for this general discussion. One thing that stood out to me with Cartesia's blog post was that they were talking about real-time ingestion, billions and trillions of tokens, and keeping that context, obviously, in the state space that they have. Jeremy [00:51:34]: Yeah. Swyx [00:51:35]: I'm wondering what your thoughts are, because you've been entirely transformers the whole time. Jeremy [00:51:38]: Yeah. No. So obviously my background is RNNs and LSTMs. Of course. And I'm still a believer in the idea that state is something you can update, you know? So obviously Sepp Hochreiter came out with xLSTM recently. Oh my God. Okay. Another whole thing we haven't talked about, just somewhat related. I've been going crazy for like a long time about, why can I not pay anybody to save my KV cache? I just ingested the Great Gatsby, or the documentation for Starlette, or whatever, you know, and I'm sending it as my prompt context.
Why are you redoing it every time? So Gemini is about to finally come out with KV caching, and this is something that Austin, actually, in gemma.cpp had had on his roadmap for, well, not years, months, a long time. The idea is that the KV cache is a third thing, right? So there's RAG, you know, there's in-context learning, you know, and prompt engineering, and there's KV cache creation. I think it creates almost a whole new class of applications, or techniques, where, you know, for me, for example, I very often work with really new libraries, or I've created my own library that I'm now writing with rather than on. So I want all the docs in my new library to be there all the time. So I want to upload them once, and then we have a whole discussion about building this application using FastHTML. Well, nobody's got FastHTML in their language model yet. I don't want to send all the FastHTML docs across every time. So one of the things I'm looking at doing in AI Magic, actually, is taking advantage of some of these ideas, so that you can have the documentation of the libraries you're working on be kind of always available. Something over the next 12 months people will be spending time thinking about is where to use RAG, where to use fine-tuning, where to use KV cache storage, you know. And how to use state, because in state space models and xLSTM, again, state is something you update. So how do we combine the best of all of these worlds? Alessio [00:53:46]: And Jeremy, I know before you talked about how some of the autoregressive models are maybe not a great fit for agents. Any other thoughts on JEPA, diffusion for text, any interesting thing that you've seen pop up? Jeremy [00:53:58]: In the same way that we probably ought to have state that you can update, i.e.
xLSTM and state space models, in the same way that a lot of things probably should have an encoder, JEPA and diffusion both seem like the right conceptual mapping for a lot of things we probably want to do. So the idea that there should be a piece of the generative pipeline which is thinking about the answer and coming up with a sketch of what the answer looks like before you start outputting tokens: that's where it kind of feels like diffusion ought to fit, you know. And diffusion, because it's not autoregressive, is like, let's try to gradually de-blur the picture of how to solve this. So this is also where dialogue engineering fits in, by the way. With dialogue engineering, one of the reasons it's working so well for me is that I use it to kind of craft the thought process before I generate the code, you know. So yeah, there are a lot of different pieces here, and I don't know how they'll all exactly fit together. I don't know if JEPA is going to actually end up working in the text world. I don't know if diffusion will end up working in the text world, but they seem to be trying to solve a class of problem which is currently unsolved. Alessio [00:55:13]: Awesome, Jeremy. This was great, as usual. Thanks again for coming back on the pod, and thank you all for listening. Yeah, that was fantastic. Get full access to Latent Space at www.latent.space/subscribe
The latest episode of The Eric Ries Show features my conversation with Reid Hoffman. Executive Vice President of PayPal, co-founder of LinkedIn, and legendary investor at Greylock Partners are just a few of his official roles that have changed our world. He's also been a mentor to countless founders of iconic companies like Airbnb, Facebook, and OpenAI. He's an author, a podcast host – both Masters of Scale and his new show, Possible, with Aria Finger – and perhaps most importantly a crucial steward of AI, including co-founding Inflection AI, a Public Benefit Corporation, in 2022. Reid has also long been a voice of moral clarity and a stabilizing influence on the tech ecosystem, supporting people who are working to make the world a better place at every level. He's a firm believer that "the way that we express ourselves over time is by being citizens of the polis – tribal members." That includes not just supporting the legal system and democratic process but also building organizations "from the founding and through scaling and ongoing iteration to have a functional and healthy society." We talked about all of this, as well as AI, from multiple angles – including the story of how he came to broker the first meeting between Sam Altman and Satya Nadella that led to the OpenAI-Microsoft partnership. He also had a lot to say about how AI will work as a meta-tool for all the other tools we use. We are, as he said, "homo techne," meaning we evolve through the technology we make. We also broke down his famous saying that "entrepreneurship is like jumping off a cliff and assembling the plane on the way down" and: • The human tendency to form groups • The relationship between doing good for people and profits • AI as a meta-tool • What he looks for in a leader • The necessity of evolving culture • Being willing to take public positions • His thoughts on the economy and the upcoming election — Brought to you by: Mercury – The art of simplified finances. Learn more. 
DigitalOcean – The cloud loved by developers and founders alike. Sign up. Neo4j – The graph database and analytics leader. Learn more. — Where to find Reid Hoffman: • Reid's Website: https://www.reidhoffman.org/ • LinkedIn: https://www.linkedin.com/in/reidhoffman/ • Instagram: https://www.instagram.com/reidhoffman/ • X: https://x.com/reidhoffman Where to find Eric: • Newsletter: https://ericries.carrd.co/ • Podcast: https://ericriesshow.com/ • X: https://twitter.com/ericries • LinkedIn: https://www.linkedin.com/in/eries/ • YouTube: https://www.youtube.com/@theericriesshow — In This Episode We Cover: (01:15) Meet Reid Hoffman (06:01) The three eras of LinkedIn (08:21) The alignment of LinkedIn and Microsoft's missions (10:39) The power of being mission-driven (18:42) Embedding culture in every function (21:08) The purpose of organizations (23:45) Organizations as tribes for human expression (29:08) Reid's advice for navigating profit vs. purpose (38:33) The moment Reid realized the AI future is actually now (41:57) Homo techne (44:52) AI as meta-tool (47:05) Why Reid co-founded Inflection AI (49:53) The early days of OpenAI (55:41) How Reid introduced Sam Altman and Satya Nadella (58:26) The unusual structure of the Microsoft-OpenAI deal (1:04:42) The importance of aligning governance structure with mission (1:09:56) Making a company trustworthy through accountability (1:15:59) Inflection's pivot to a unique model (1:19:53) Companies that are doing lean AI right (1:22:52) Reid's advice for deploying AI effectively (1:26:21) Being a voice of moral clarity in complicated times (1:31:26) The economy and what's at stake in the 2024 election (1:37:24) The qualities Reid looks for in a leader (1:39:43) Lightning round, including board games, the PayPal mafia, regulation, and more — Production and marketing by https://penname.co/. Eric may be an investor in the companies discussed.
I'm excited to share some incredible insights from our latest podcast episode, where I had the pleasure of interviewing Michael Walsh, the founder and CEO of Cariloop. Michael's journey is not only inspiring but also packed with valuable lessons for investment groups and growth-stage business owners. Here are the key takeaways:

Key Lessons and Ideas:

The Power of Personal Experience: Michael's journey began with his own challenges in family caregiving, which led him to create Cariloop. His story underscores the importance of turning personal struggles into impactful business solutions.

Evolving Business Models: Cariloop started as a platform similar to OpenTable for long-term care facilities but pivoted to focus on supporting employers in helping their caregiving employees. This shift highlights the importance of adaptability and listening to market needs.

Employer Support for Caregivers: Michael emphasizes the critical role employers play in supporting employees who are caregivers. Providing resources and support can significantly enhance employee well-being and loyalty.

Public Benefit Corporation (PBC) and B Corp Certification: Cariloop's transition to a PBC and achieving B Corp certification demonstrate a commitment to social good alongside profitability. This approach can attract top talent and build a strong, values-driven company culture.

Servant Leadership: Michael's leadership philosophy focuses on empowering his team and prioritizing their needs. This servant leadership approach fosters a supportive and productive work environment.

Workforce Dynamics and Flexibility: The future of work is evolving, with a growing emphasis on flexibility and well-being. Companies that adapt to these changes and support their employees holistically will thrive in attracting and retaining top talent.

Continuous Growth and Breaking Ceilings: Entrepreneurs should continuously evaluate their growth and push through perceived ceilings. Surrounding yourself with great people and staying adaptable are key to long-term success.

Global Impact and Advocacy: Michael's vision extends beyond national borders, advocating for better support for family caregivers globally. This mission-driven approach can inspire other businesses to integrate social responsibility into their core operations.

Curiosities and Insights:

Did You Know? Cariloop initially aimed to be a platform like OpenTable for long-term care facilities but pivoted to focus on employer support for caregivers.

Curious Fact: Cariloop is a Public Benefit Corporation and a Certified B Corporation, balancing profit with social good.

Insightful Quote: “Success for me is about reaching as many people as possible and improving their experiences.” - Michael Walsh

I hope these insights spark your curiosity and inspire you to listen to the full episode. Michael's journey and the evolution of Cariloop offer valuable lessons for anyone looking to make a meaningful impact through their business. Thank you for being a part of our community. Stay tuned for more inspiring stories and actionable insights in our upcoming episodes!
Welcome to EO Radio Show - Your Nonprofit Legal Resource. Episode 90 is a refresh of the third of the original episodes published in the summer of 2022. During these dog days of summer, it's a good time to bring back some of our earliest episodes that cover fundamental topics that nonprofit leaders and aspiring leaders need to understand. In this refresh of episode three, I cover the general principles of state law rules that address corporate governance. Under state law, the business and affairs of a nonprofit corporation must be managed and all corporate powers are to be exercised by and under the direction of the board of directors. Each director has personal duties to the corporation. Principally, these duties are the duty of care and the duty of loyalty. This nonprofit basics episode addresses director duties and best practices for the typical nonprofit public benefit corporation. In the first section, I summarize the duty of care, including a discussion of the manner in which a careful director meets this standard. The second section discusses the duty of loyalty with a particular emphasis on dealing with actual and potential conflicts of interest. Since this episode was first published, we've added the podcast to our YouTube channel, so listeners might want to look over the playlist on our channel that covers all the basics and also our broad coverage of the formation mechanics in 32 states, so far. Take a look at the show notes for updated links.

Show Notes:

Attorney General's Guide for Charities; Best Practices for nonprofits that operate or fundraise in California; see Chapter 7, Directors & Officers of Public Benefit Corporations

What Every Prospective Nonprofit Board Member Needs to Know by Lauren A. Galbraith, family wealth partner, Farella Braun + Martel

EO Radio Show #84: Nonprofit Book Review: ABA Guidebook for Directors of Nonprofit Corporations

Farella YouTube podcast channel

If you have suggestions for topics you would like us to discuss, please email us at eoradioshow@fbm.com. Additional episodes can be found at EORadioShowByFarella.com. DISCLAIMER: This podcast is for general informational purposes only. It is not intended to be, nor should it be interpreted as, legal advice or opinion.
Founder Amy-Willard Cross discusses the mission and operations of Gender Fair, the first consumer rating system for gender equality. Gender Fair aims to measure and promote gender equality within consumer-facing companies by utilizing data and the UN Women Empowerment Principles. Amy highlights the importance of transparency and data-driven insights to create social change, emphasizing that gender equality in corporate practices benefits not just women but overall fairness in the workplace. Gender Fair evaluates companies across five categories: women in leadership, employee policies, diversity reporting, supplier diversity, and philanthropy for women. Amy also shares how Gender Fair has incorporated technology to increase its impact, including an app and browser extension that allow consumers to easily access company ratings on gender equality. These tools enable users to make informed purchasing decisions based on a company's gender equality practices. The app features functionalities like barcode scanning and logo recognition to provide real-time information about products. Amy emphasizes the significance of making gender equality data accessible and actionable for consumers, believing that collective consumer power can drive corporate accountability and fairness. Throughout the conversation, Amy discusses the challenges and successes of building Gender Fair, the importance of leveraging economic power for social change, and the role of technology in facilitating gender fairness. She also touches on the broader impact of Gender Fair's work in promoting fair business practices and the potential for future expansions, such as a B2B database for procurement. Gender Fair (https://www.genderfair.com/) Follow Gender Fair on LinkedIn (https://www.linkedin.com/company/begenderfair/), Facebook (https://www.facebook.com/GenderFair/), or Instagram (https://www.instagram.com/genderfair). 
Follow Amy-Willard Cross on LinkedIn (https://www.linkedin.com/in/amy-willard-cross-genderfair/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Transcript: CHAD: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Chad Pytel, and with me today is Amy-Willard Cross, the Founder of Gender Fair, the first consumer rating system for gender equality. Amy, thank you so much for joining me. AMY-WILLARD: Well, I'm very happy to be talking to robots, giant and small. CHAD: [laughs] We'll try not to smash into each other too much on this show. I think we probably have a lot to learn from each other rather than conflicting. AMY-WILLARD: I think so. CHAD: Let's just get started by digging in a little bit to what Gender Fair actually is in terms of what we mean when we say a consumer rating system for gender equality. AMY-WILLARD: It's about data. So, I was originally a journalist. I've written for a living my whole life: books, magazines, articles [laughs], you know, radio shows. I wanted to do something to promote equality in the world. And I realized that data is one way that you can have commercial value. Data has value that isn't, like, just blah, blah, blogging, and also, data can create social change. So, I decided to do something like, you know, we know fair trade has created great change, as has, you know, Marine Stewardship Council certification. And also, I was inspired by something that the Human Rights Campaign, the LGBTQ organization, does, which is called the Corporate Equality Index. So, our goal is to measure how companies do on gender and then share that with the public. And I didn't just make this up. We use a set of principles called the UN Women Empowerment Principles, which look at eight different sort of areas of an organization. 
And so, we created metrics that are based on these UN Women Empowerment Principles and also based on what is findable in the public record. We rate consumer-facing public companies, you know, like Unilever, Procter & Gamble, the shampoos that you use, the cars that you buy, the airplanes you ride on. And we look at five major categories, such as, like, women in leadership. We look at employee policies like parental leave, and flex time, part-time, summer Fridays. I'll be curious to know what you do at Giant Robot. I bet you have good ones. And then, we also look at diversity reporting: whether a company is upfront with its attempt to bring more diversity into the workforce, and also supplier diversity. I don't know, are you familiar with supplier diversity, Chad? CHAD: I am because we often are a supplier, so... AMY-WILLARD: You are. So, when they ask you if you're diverse...but one way companies, especially the big companies that we rate on this public database, can make a big impact is by trying to buy from women and minority-owned businesses, right? Procurement spending is huge. That's a metric that people may not know as well, but it's one that I would encourage every business to undertake because it's not that expensive. And you could just intentionally try to move capital into communities that are not typically the most rewarded. The last category that we measure is philanthropy for women, and that's important. People say, "Well, why do you measure philanthropy?" One, because the amount of philanthropy that goes to women and girls is 1.5% of all donations, and it used to be 1.8. So, pets get more money than women. I don't know how that makes you feel, Chad, but it doesn't make me feel very happy. I mean, I suppose if you're Monster Beverage and you don't have any women clientele, one, it's okay if you don't score well on your gender metrics; just meet the basic fairness. But maybe Monster Beverage doesn't have to donate to the community of women. 
But if you're making billions of dollars a year selling a shampoo, I would sort of think it's fair to ask that there's some capital that goes back the other way towards the community of women. So, that's the measurement. So, we could do it...and we do it for small companies like yours, too. I imagine your company would do well from the little bit I've talked to people on your staff. It sounds like you have a lot of women in leadership. And I don't know your policies yet, but I'm sure you...I bet in Massachusetts I know you have parental leave anyway in the state, but you're a more progressive state. But I think this is something that all of your listeners can benefit from is putting a gender lens on their operations because a gender lens is a fairness lens. And it includes usually, you know, this includes people who are not just all the same men, White men. So, it helps all businesses sort of operate in a more fair way to put a gender lens on their operations. And it's not hard to do. CHAD: So, one of the things that jumped out at me, in addition to just the Gender Fair mission, as I was learning about Gender Fair, is that you have an app and a browser extension. And so, that's part of why you're on the show, not only do we care about the impact you're having. AMY-WILLARD: That's right. Yeah [laughs]. CHAD: But you're a tech company. Did you always know as you got started that you were going to be making an app and a browser extension? AMY-WILLARD: Well, yes, that was the beginning because you have data. You have to make it used. You have to make it available, right? Personally, I like to see it on packages. But yes, we've had two iterations of the app, and I'm sure it could always get better and better. The current one has a barcode scanner and, also, it can look at a logo and tell you, "Oh, this soda pop is not gender fair. Try this soda pop, which is gender fair." And it can make you a shopping list and stuff like that. 
But, you know, tech is only good if people use it, so I hope they do. I mean, the idea is making it more accessible to people, right? I would like to have it as a filter, some easy tech. We've talked to big retailers before about having a filter put on online shopping sites, right? So, if I can choose fair-trade coffee, why can't I choose gender-fair shampoo? I like it when people can use technology to create more fairness, right? If this is a great benefit to us if technology can take this journalism we do and make it accessible and available and in your hand for someone, you can do it in the store, for Pete's sake. You could just go on the store shelf, and that's pretty liberating, isn't it? When you think of it. It should be easy to know how the companies from which you buy are doing on values that you care about. So, I never really thought of it as a tech. I wish it was better tech, but, you know, I'd need millions and millions of dollars to do that. CHAD: [laughs] Had you ever built in any of your prior companies, or had been directly responsible for the creation of an app? AMY-WILLARD: No, but I did actually once when I worked at the major women's magazine in Canada, I did hire the person who created the first online sort of magazine in Canada, and she made money, so I felt good about that. I plucked her from...she was working as sort of tech support at the major...what do you call those? Internet providers in Canada. But no, I had not, and so I relied on experts. I had a friend who was on the board of Southby, and he helped me find a tech team. I went through a few of them and, you know, it's hard to find. Like, where do you go and find people who will build something for you when you're a novice, right? As a journalist, I don't really know anything about building technology, and I certainly wasn't about to start at my age. It was definitely a voyage of discovery and learning, and I don't think I really learned much coding myself. CHAD: That's okay. 
AMY-WILLARD: That's okay [laughs]. CHAD: But was there something that sort of surprised you that you didn't anticipate in the process of creating a digital app? AMY-WILLARD: Oh gosh. Well, you know, of course, it's difficult, and there's lots of iterations, and there's lots of bugs. And in every business, mistakes are part of what people...in the construction industry, they'll tell you, "Mistakes are just going to happen every day. You just have to figure out how to fix each one." But, no, it's a difficult road. So yeah, I wish I could have coded it myself. I wish I could have done it myself, but I could not. But yeah, it's good learning. And, of course, you know, I think anyone who's going to start building a company with technology...if it were me now 10 years ago, I would have actually done some coding classes so I could just even communicate better to people who were building for me. But I did learn something, but not really enough. But it's a very interesting partnership, that's for sure. CHAD: And there is a lot of online classes now... AMY-WILLARD: Right [laughs]. CHAD: If someone is out there thinking, oh, you know, maybe that's good advice. And there's a lot of opportunities for sort of an on-ramp, and you don't need to become an expert. AMY-WILLARD: No. CHAD: But, like you said, even just knowing the vocabulary can be helpful. AMY-WILLARD: I think that would have been useful. Yeah, definitely useful. But I definitely, like, you learn a little bit as a text-based person. You learn the rigor of just sort of, like, you have to think in ones and zeros. It either is or isn't. That helps. I learned that a little bit in working with tech devs. The last version we did actually white labeled off of someone who had created a technology to do with...it was to do with building communities online. And their project failed, but it had enough backbone that we were able to efficiently build what we needed to on top of what they built. 
CHAD: Oh, that's really...was it someone you knew already, or how did you get connected? AMY-WILLARD: Yes, they knew one of our partners in New York. We tried it first as a community project. It didn't really work. And then, we realized it could actually hold our data at the same time. So, my first iteration of the app was different. But yeah, anyway, we've built it a couple of times, and I could build it even more times... CHAD: [laughs] AMY-WILLARD: And make it even better and better. CHAD: So, on the sort of company side, you've worked with companies like Procter & Gamble, MasterCard, Microsoft. Do you find it difficult to convince companies to participate? AMY-WILLARD: What we do is data journalism. We don't contact the companies. We have researchers. We have journalists go and look through the SEC data and CSR reports and collect the data points on which we measure them. So, no one has to cooperate with us to get the data. It's journalism. It's not opt-in surveys, which is a very common...when I first started, no one was measuring women, and now there's lots of different measurements. And they're often pay-to-play surveys, so they're not really very valuable. Ours is objective and fully transparent journalism. But then afterwards, our business model, how we typically used to pay for this, is that companies that did well on our index were then invited to be quote, "certified." And this was a business model that was sort of suggested to us at the Clinton Global Initiative, to which I belonged in 2016. And they loved what we were doing, using the free market to drive gender equality. Because, you know, our whole point is that women and people who care about women and equality, we have a lot of power as consumers, or as taxpayers, or as tuition payers, or as donors to nonprofits. And whenever you give money to an organization or a company, you have the right to sort of ask questions about the fairness of that organization. Well, that's our whole ethic, really. 
I answered that question and came around to a different idea, but yes, no. So, the companies do participate to be certified, and some of them are interested and some of them are not, and that's fine. We do projects with them sort of like when we...we've talked about MasterCard, and we did a big conference with them in New York. This is pre-pandemic. And then, we did a big, global exhibit with P&G, and Eli Lilly, and Microsoft at TED Global, which was very fun. It was all about fairness. And it was great to talk to technologists such as yourself. And we made a booth about fairness in general, not just about women. And we had a fairness game, and it was very interesting to just discuss with people. I think people like to think about fairness, right? I don't know if you have children, but little children get very interested in the idea of what's fair very early on. Yeah, so some companies participate...now we have companies...we do some work in B2B procurement which is something that your listeners might be interested in thinking about is that just, like, supplier diversity. If I were purchasing your services, your company services, I would ask about the gender metrics of your organization. I already learned they're quite good. So, big companies buying from other companies can put a gender lens on their B2B procurement. And so, that's a project we're doing with Salesforce, Logitech, Zoetis, Andela, which is another tech provider, and Quinnox, which is a similar sort of tech labor force, I believe. And so, we're going to be releasing a database about B2B suppliers. Actually, I should make sure that you get on it. That's a good idea. CHAD: Yes. AMY-WILLARD: That's a good idea because then it's going to be embedded in procurement platforms because this is a huge amount of money. It's even probably more...it could be more money than consumer spending, right? B2B spending. 
So, I'm excited about working with more companies on that to help promulgate this data and this idea because it's an easy way to drive fairness in a culture. When the government isn't requiring fairness, at least large companies can. And in some countries, actually, the government requires its vendors to do well on gender. Like, Italy now has a certification for gender, the government does, and companies that do well are privileged in RFPs and also get a tax deduction. CHAD: I don't want to say something incorrect, but I think the UK has, like, a rule around equity in pay... AMY-WILLARD: Yeah, absolutely. You're absolutely correct. CHAD: And yet they don't have equity in pay, the data shows. AMY-WILLARD: That's right. And we don't have that in the United States. It's voluntary in the U.S. We measure that, actually, too. That's seven points on our hundred-point scale: whether they, one, publish the results of their pay study. In the U.S., though, we do it in a way that isn't as rigorous as the way they do it in the UK. In the UK...you're great to remember that, Chad, in the UK, I mean, I wish my government did that. In the UK, companies report on the overall salaries paid to men and the overall salaries paid to women. So, that means if, you know, all the million-dollar jobs are held by men, it shows very clearly, and all the five-dollar jobs are held by women, it shows very clearly there's an imbalance. And in the United States, we just say, "Oh, well, is the male VP paid the same as a female VP?" That's sort of easier to do, right? CHAD: When we've talked with some larger companies about different products we're creating or those kinds of things, sometimes what I hear is they're looking for big wins, comprehensive things. And so, I was wondering whether you ever get pushback or feedback that's like, "Well, not that your issue is not important, but it's just focused on one aspect of what our goals are for this year." AMY-WILLARD: Right. 
Yeah, that's always a hard thing because when I think about fairness to half of the population, it's a hard thing for me to think that's not hugely important. CHAD: Yes. AMY-WILLARD: I have a really hard time, but yes, of course, we get that a lot. And, you know, quite frankly, when we did this B2B project with Logitech and Zoetis, they would ask their vendors, like, the major consulting companies and big companies, to take a SaaS assessment that we do. We have a SaaS product that private companies can take, or just instead of doing our journalism, they can just get their own assessment. And they were very, very reluctant to do this. That was just, you know, half an hour. It was a thousand-dollar assessment. And it took many months to convince these companies to do it. And that was their big customers. So, yes, it is very hard to have...what's the word? Coherence on what one company wants versus what a big company wants, and it's hard to know what they want. And it's, yeah, that's a difficult road for sure. And it changes [laughs]. CHAD: Part of the reason why I asked is because from a product perspective, from a business perspective, at thoughtbot, we're big fans of, like, what can be called, like, niching down or being super clear about who you are, and what you believe, and what you offer. And if you try to be everything to everybody, it's usually not a very good tactic in the market. AMY-WILLARD: That's right. That's right. CHAD: So, the fact that you focus on one particular thing like you said, it's very important, and it's 50% of the population. But I imagine that focus is really healthy for you from a clarity of purpose perspective. AMY-WILLARD: That's right. But at the same time, now there's lots of...when I started in 2016, there weren't a lot of things in this space, and now there's many, many, many, many, many, many, so corporations that want to sort of connect to the community of women or do better for women. There's many different options. 
So, there's many flavors of this ice cream. Even though we're niche, the niche is very crowded, I would say, actually, and people are very confused. I mean, I think I remember hearing from Heineken that they're assaulted daily by things to, you know, ways to support women in different organizations and events. And they said they took our call because we were different. But yeah, there's many competitors. But, I mean, that's the main thing. In any business, in any endeavor in life, one has to show one's value to the people who may participate, and that's a challenge everywhere, isn't it? CHAD: Yeah. AMY-WILLARD: But the niching down thing is...and interesting we hear a lot these days is that women are done. We've moved on from that. Now we care about racial equality, and we say, "That's a yes, and… We can't move on." CHAD: Well, the data doesn't show that we've moved on. AMY-WILLARD: The data doesn't show that at all, and we're going way backwards, as you well know. So, I mean, actually, I don't know if you know, there's something called the named executive officers in public companies. Are you familiar with that? The top five paid people. CHAD: Yeah. AMY-WILLARD: They have to be registered with the government. Well, that number really hasn't changed in six years. That's where the big capital is, and the stock options, and the bonuses, and the big salaries. So, to me, that's very important that I would like, you know, rights and capital to be more...well, I want rights to be solid and capital to be flowing. And so, that's what we hope to do in our work. MID-ROLL AD: Now that you have funding, it's time to design, build, and ship the most impactful MVP that wows customers now and can scale in the future. thoughtbot Liftoff brings you the most reliable cross-functional team of product experts to mitigate risk and set you up for long-term success. 
As your trusted, experienced technical partner, we'll help launch your new product and guide you into a future-forward business that takes advantage of today's new technologies and agile best practices. Make the right decisions for tomorrow today. Get in touch at thoughtbot.com/liftoff. CHAD: So, going back to the founding of Gender Fair, when did you know that this was something you needed to do? AMY-WILLARD: I wanted to serve, you know, you want to be useful in life. And I wanted to do work in this field that I care so much about. As I said, I think I told you I started doing journalism before, and I realized anyone could take the journalism, and they could, you know, Upworthy would publish things we would create and then not pay us for it. And I thought that's crazy. But it's interesting talking to my husband. My husband's, like, a very privileged White guy. And I remember he said something to me very interesting. He said, "You either have power, or you take it." And he said, "Women have all this power." So, he helped me understand this. Like, you know, I think sometimes as women or communities that are underserved, you start thinking very oppositionally about what you don't have. But at the same time, you can realize that you do have this power. So, what we're trying to do with Gender Fair is remind people they have this economic power, and they can use it everywhere, you know, in addition to our consumer database. I told you that we're doing a B2B database this year. And we also...I think next week I'm going to release a database of 20,000 nonprofits looking at their gender ratings. That was done as a volunteer project by Rose-Hulman Institute of Technology if you know them. So, yeah, this is an ethic that you can take everywhere in your life is you have this power, even as a consumer. Chad, even in your little town, you can ask your coffee shop if they pay fair wages. Like, this is just a way of looking at the world that I hope to encourage people to do. 
CHAD: Along the journey of getting started, I assume you ran into many roadblocks. AMY-WILLARD: Mm-hmm. CHAD: Did you ever think maybe this is too hard? AMY-WILLARD: Oh yes. Well, not in building. In building, you're very optimistic, you know, it's just like when you're writing your first book. You think it's going to be a bestseller. Like, you build something, and you think the whole world is going to use it right away, and you're going to...I did have a great...when I first launched, I had a wonderful, I had, you know, press in Fortune. I had Chelsea Clinton. I had big people writing about us. Melinda Gates has written about us many, many times. The fact that...well, I've always wanted to build, like, a consumer revolution of women, and I'm going to keep at it. But it's very daunting. It's very daunting when you're trying to move a boulder such as, you know, big institutions and companies that don't really want to change, and they're not motivated to do it. So, yes, those are my roadblocks. It's not creating the massive amount of change that I wanted to do. And I'm not going to give up, but, yes, it is very daunting, and it's very daunting to see how little people care. Some people don't care about it, but some people in power don't care about it. But I think if you asked, you know, regular women, they would say, "We would like fair pay. We would like equal opportunity. We would like paid parental leave." They would want all these things, and hopefully, together, we can fight for them. CHAD: Well, and, like you said, the premise of what you're doing is you're focused on the power that you do have, which is the dollars that you spend with these companies. I think that's such a smart angle on this because especially for...it seems like the core in terms of the consumer-facing companies. That's so inherent in what this is. AMY-WILLARD: That's right. 
CHAD: Yeah, the angle of empowering consumers, and giving them the information, and leveraging the power that consumers have with these companies seems really smart to me... AMY-WILLARD: That's right. If it works -- CHAD: As opposed to individually going to the companies and saying, you know -- AMY-WILLARD: "Please make it." Yeah. And some people would refute your use of the word empower because that implies that people don't have power. So, when I give speeches...I have a pair of beautiful gemstone red pumps, and I say it's the ruby slippers. We had this power all along. We just were not exercising it. But this power will only work, Chad, if it's done in the aggregate. So, our challenge is to reach the aggregate of American women. I have to, you know, I have to go reach 50 million women this year. That's my goal. Reach 50 million women with this message that we have the power in the aggregate to make change. And that's the only way this will work. If it's just one by one, it really doesn't. When I first launched, I found when I showed the app to people on the lower end of the economic scale, like, you know, people in the cash register; they understood this more than middle-class women. They understood the fact that if all women come together and, you know, buy from this company or don't buy from this company based on how they treat women, they understood that as a collective power. Whereas middle-class women who don't have as many struggles didn't really groove to that idea as quickly, which I thought was very...to me, it was very interesting, you know, individuals feel more powerful on the higher end of the social scale. They may or may not -- CHAD: That is interesting. AMY-WILLARD: Yeah. So, yeah, that's my goal. We'll see if I can do it. That's going to be my life's work, I think, Chad. CHAD: How do you reach 50 million people? AMY-WILLARD: I don't know. That's what I'm going to think about. You know, we're talking to different people about campaigns. 
We actually stopped the consumer work during the pandemic because it just, you know, everything changed. And so, now, this year, we're going back. I don't know; I mean, I guess if Ryan Reynolds tweeted about me, you know, that would help. If [laughs] anyone listening has any ideas how to reach 50 million women...no, maybe 3 million is what I need to create social change. CHAD: I imagine that it doesn't just come down to spending money on advertising. One, you might not have that money. AMY-WILLARD: No. And that would be, you know, that also would be not in the ethics of what Gender Fair is, for example, right? That means I would be paying money to Facebook and basically Facebook, I guess, and Google. If you look at the major spends of nonprofits, they're advertising with these big tech giants. And so, we have...actually, we have some partnerships with large women's organizations, and I think that's the way we hope to spread that. And if I had money for advertising, I would want to spend it with other women's organizations, or women-owned media, or women influencers. There's another idea I talk about in my work I call the female domestic product, and so talking about how much money women earn or capital we control. And the more we can grow that female domestic product, the more we can achieve equality actually. I always say, in America, you get as much equality as you can pay for sadly. CHAD: I was just about to say, "Sadly." AMY-WILLARD: Sadly, yeah. It's true. We still don't have the Equal Rights Amendment. A hundred years. CHAD: Well, 50% of the population would say, "Why do we need an Equal Rights Amendment [laughs]?" AMY-WILLARD: All men are created equal, but yeah, it's quite astonishing. I don't know. Do you have a daughter, too, or just a son? CHAD: I have a son, and my younger one is non-binary. AMY-WILLARD: Well, I'm sorry to be so binary. Excuse me. CHAD: It's okay. AMY-WILLARD: Well, interesting. And that's great, too, isn't it? 
Because we see how fluid gender is and their rights are just as important as a woman's rights. And these are, you know, women and non-binary people are often excluded from things. And so, we are all working together just to create fairness. I'm sure that the same thing happens in your family, too. CHAD: Yeah. I think fairness is one of those things. Sometimes equality is not necessarily the same as fairness. AMY-WILLARD: Yes. CHAD: But I think, like you said at the top of the show, fairness is something that we seemingly learn very early on. But one of the ways that it comes across is "It is unfair to me," especially in little kids, at least with my kids [laughs]. AMY-WILLARD: Of course, yes. CHAD: That was the thing that they learned first and caused them the most pain. And it was very difficult for them to see that something was unfair for somebody else. So, I remember saying to my kids when they were little, "Fair doesn't mean you get your way." AMY-WILLARD: That's right. Not fair. CHAD: Right [laughs]. AMY-WILLARD: It's true. But then, you know, it's funny. When I talk about equal pay, I often say to people, "When I used to cut cakes for my children, I cut equal slices, and I didn't put them under the table," like, you know what I mean [laughter]? So, why are we so cagey about the slices of economic pie we give to one another? I mean, there's no reason why pay has to be secret, right? If it's fair. You could easily talk to people. Well, you know, Chad gets paid more money because he's the CEO, and he does the podcast, and he has to talk to the bank, you know what I mean? So, you could easily explain that to people. And I don't know why we have to keep salaries a secret from one another. It seems very irrational to me and not really a part of fairness. CHAD: Yeah. Yep. That's something...so, all of our salary bands at thoughtbot are public on the internet. AMY-WILLARD: Cool. On the internet. Oh, I'm very impressed. CHAD: Yeah. 
So, you can go to thoughtbot and use our compensation calculator. You enter in your location, what role you have. AMY-WILLARD: Oh. So, you do it for other people. Oh, that's cool. That's a great service. And that was just some sort of tech that was sort of pro bono tech that you all built for the world. CHAD: Yeah, we created it for ourselves. AMY-WILLARD: And then you shared it. CHAD: Mm-hmm. AMY-WILLARD: Then you open-sourced it. Great. Well, I bet you have a lot of happy employees. CHAD: I like to think so [laughs]. I do think that there is an inherent understanding of fairness. And when people ask how we do things at thoughtbot or how we should do things, I say, "How do you want it to be?" I think that guides a lot of how we do things and why a lot of stuff we do is just common sense. And it's not until ulterior motives or maintaining power comes into play where the people in power don't want to give it up. Because, like you said, people don't understand that by giving someone else a bigger piece, they think that that means their piece is smaller. AMY-WILLARD: Right. Or they just think they deserve it. I was reading last night about succession planning and CEOs. And apparently, a lot of them just stay...oh, sorry, in big public companies, not in their own companies, they stay on way too long. And all these consultants are saying it's the four Ps, you know, position, privilege, pay, and then...I forget the other one. But one of them was jets. They don't want to give up their jets. So yeah, I think when you have things, it seems fair, and sharing them seems...giving up some of what you have seems unfair. But I do think humans can see fairness. But sometimes, when you have a lot, it's hard to see it. You're able to justify why it may be not unfair to people who don't have as much as you do. But anyway, I can't change human nature, but most people do understand fairness. I think you're right about that. 
CHAD: Well, one thing...I noticed...so, you're a Public Benefit Corporation. AMY-WILLARD: Yes. CHAD: Did you set out to be a Public Benefit Corporation from day one? AMY-WILLARD: Yes, you know, originally, when it came to how was I going to pay for this, the first part I paid myself with my own money. I hired MBAs. I hired researchers. I built the tech. And then, I wasn't sure how I was going to pay for it going forward. But I knew I didn't want to become a nonprofit because, in my mind, there are so many things that...there are so many problems that women have that need to be solved by nonprofit organizations, Planned Parenthood first among them. Like, I don't want to take money away from women's organizations that help women fleeing abusive homes. So, I wanted to see if I could pay for this in the private sphere, which we've been able to do, and not have to seek donations because, really, I felt very strongly about not taking money out of that. That's part of the FDP, the part of the female domestic product, but the part that's contributed by people philanthropically. And there isn't a lot of philanthropic dollars going to women, as I mentioned before. So, yes, I knew definitely I wanted to be a Public Benefit Corporation. And there's no tax benefits to that, you know, I don't know if you are yet, but... CHAD: No, it's something that we've looked at, but it's very attractive to me. AMY-WILLARD: Right. And there's also the private version of it being a B Corp, which is also very useful. It's an onerous process. Public Benefit Corporation isn't quite as onerous, I don't believe. I mean, we're in Delaware and New York, but it just says that you're, I mean, we exist for the public good. I'm not existing to make millions of dollars. I'm existing to create social change. And some organizations don't want...are leery of working with us because we're not a nonprofit, so that's to assuage them. Well, it's not really about...we're not about enriching shareholders. 
It's just a different way to pay for it. But yeah, I would encourage all companies to look into being a Public Benefit Corporation or do a B Corp assessment or a Gender Fair assessment. It helps them, you know, operate in a world that is increasingly more values concerned. Maybe 20 or 30 years ago, it wasn't so on the top of mind of many people. We were coming out of, you know, warring '80s capitalism. But nowadays, the younger people, especially, are very focused on issues of fairness and equality. So, I think those tools making business better that way are very useful. CHAD: Well, I would encourage, you know, everyone listening to go check out the app, if you're at a company, to look at doing the assessment. Where can people do those things? AMY-WILLARD: Ah, well, yeah, I would encourage them to do all those things. You're right, Chad. I would encourage you to download the app and check some of your favorite brands. It's very simple. Do the paid subscription. And then, if you're a company, you can do an online assessment. You just go Gender Fair assessment, and you'll find it. If you're a business and would like to participate in our B2B database, you can also do the assessment, or there's a coalition for Gender Fair procurement, where you can get information. We had the prime minister of Australia speak at our launch. It was quite excellent. We'll be launching our nonprofit. Actually, I think it's already online. It's called genderfair-nonprofits.org, if you want to see how your favorite nonprofits do. But, basically, we're here to help any business or organization do better on gender. And you can email me amy.cross@genderfair.com. And I would love to help anyone in their journey for fairness of any kind. Yeah, many ways to participate. Just go to genderfair.com or genderfairprocurement.com. CHAD: Awesome. Amy, thank you so much for sharing with us. I really appreciate it. And thank you for all the good that you're doing in the world with Gender Fair. 
AMY-WILLARD: Well, I appreciate the way you're running your company in a very new, interesting, and apparently ethical way. Privately, I could look at your website and your career page and figure out how you're doing. But it sounds, to me, when I've talked to people, that you're doing very well. And I honor your curiosity about learning from others. CHAD: Awesome. Well, listeners, you can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. You can find me on Mastodon @cpytel@thoughtbot.social. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks so much for listening, and see you next time. AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.
Every day, the urgency to address climate change grows more pressing. Yet, amidst the daunting challenges, there's a beacon of hope: a movement driven by countless ideas, companies, and individuals dedicated to effecting positive change. Today, we're sitting down with Kip Pastor, an award-winning filmmaker and the Founder & CEO of Pique Action, a Public Benefit Corporation at the forefront of this movement. Kip shares his journey from filmmaker to climate advocate, emphasizing the power of storytelling in shaping perceptions and driving action. With a track record of producing viral content and impactful documentaries, Kip understands the pivotal role of media in educating, inspiring, and mobilizing communities. Through Pique Action, he's harnessing this power to change the climate change conversation, focusing on solutions, positive stories, and tangible impact. Join us as Kip discusses Pique Action's mission, its commitment to carbon neutrality, and its innovative approach to connecting the dots between workers, investors, and policymakers in the fight against climate change. From social media outreach to public awareness campaigns, Pique Action is reshaping the narrative, one story at a time. Prepare to be inspired and empowered to join the movement for a sustainable future! Follow us on Instagram: @someonelikeyoupodcast
Jenny Xia Spradling is the co-CEO of FreeWill, an online software company revolutionizing estate planning and charitable giving. With a background in math competitions, Jenny brings a unique perspective to the intersection of social impact and economic success. Under her leadership, FreeWill has helped nearly one million Americans complete their estate planning and committed almost $10 billion to charity. Jenny's commitment to values like courage, joy, kindness, and focus has shaped FreeWill's inclusive culture and mission-driven approach. You'll hear Lindsay and Jessica discuss:
Early establishment of values is crucial: Defining values early on helps align founders, guide decision-making, and set the tone for company culture, ensuring long-term cohesion.
Public Benefit Corporation designation: Choosing a legal structure that aligns profit with mission can provide clarity, transparency, and protection for purpose-driven companies in decision-making and operations.
Values drive decision-making: Mission, vision, and values act as a litmus test for prioritizing opportunities, ensuring alignment with the company's core beliefs and goals.
Human connection and trust: Building a culture centered on people fosters loyalty, engagement, and a positive customer experience, leading to long-term relationships and brand advocacy.
Inclusivity through hiring practices: Prioritizing diversity, humility, and low-ego individuals fosters an inclusive environment, promotes collaboration, and prevents biases in decision-making.
Balancing values: Recognizing the dual nature of values, understanding their edges, and embracing the trade-offs helps navigate complex decisions and maintain integrity in actions.
Math and decision-making: While numbers provide structure and analysis, embracing ambiguity and intuition alongside quantitative data allows for more holistic and nuanced decision-making.
Culture is sticky: Establishing a strong culture early on, emphasizing inclusivity, humility, and feedback, creates a foundation for long-term success and employee retention.
Joy and gratitude: Incorporating moments of celebration, gratitude, and joy in the workplace fosters morale, engagement, and a positive work environment, enhancing overall productivity and satisfaction.
Transparency and clarity: Explicitly defining values, mission, and vision helps guide actions, align stakeholders, and provide a clear framework for decision-making, ensuring consistency and focus.
Resources: Jenny Xia Spradling | LinkedIn | FreeWill
Eric Ries has invested in over 100 early-stage startups. He is best known as the author of The Lean Startup, a must-read for entrepreneurs worldwide. He also founded the Long-Term Stock Exchange (LTSE), a new stock exchange designed to support companies with long-term goals. He recently launched a new podcast discussing ways to re-think corporate governance to be mission-first. In Part 1 of our interview, he shared insights from angel investing. In Part 2, Eric shares his new ethos for startups rooted in long-term thinking, putting a company's mission at the center of everything and aligning all stakeholders. This mission-first approach challenges the traditional capitalist model, and data shows it leads to better company performance. Eric writes checks of $10K or less as an angel at the earliest stages. He is interested in mission-driven founders, education, fintech, AI, and more. Highlights:
Eric angel invests for reasons beyond financial outcomes. He focuses on giving back to people in his network, learning about startup approaches and various industries, and doubling down in areas he is passionate about. Any time he has strayed from his investing criteria, it hasn't worked out.
Advising then investing: Eric prefers to work with a startup as a friend or advisor before investing. He keeps his check size to $10K to support his goal of high-velocity learning. Smaller checks mean he can write more checks, which means more learning.
Investing as a spiritual journey: Eric practices introspection to support continuous learning and to avoid overgeneralizing when things don't work out. When he invests, he applies Lean Startup thinking by asking, “Is this outcome falsifiable?” Invest -> Measure -> Learn. 
We guess this makes him a "Lean Investor."
Eric's second act after Lean Startup is supporting mission-first startups: He advocates for a new ethos that he believes will lead to better performance in the long term.
Eric shares tools for mission-first founders, including the Public Benefit Corporation, the LTSPV, employee voting trusts, and more.
(00:00) - Introduction to First Funders
(00:54) - Meeting Eric Ries: a journey down memory lane
(02:45) - The Impact of Lean Startup
(07:15) - Eric's Angel investing journey starts with being an advisor and a mindset of giving back
(09:55) - What is Eric's investing criteria
(14:04) - Eric prefers to advise startups first before investing
(17:47) - Eric's second act: from Lean Startup to nurturing mission-driven founders who will also realize massive profits
(25:14) - Writing small $10K checks into a high volume of early-stage startups enables high-velocity learning
(27:24) -
(28:28) - Eric decouples investing from outcomes to stay focused on giving back and learning
(30:44) - How to be a useful startup advisor: stand for something that creates competitive advantage for startups
(34:31) - A challenging investment: lessons from high-stakes and high-stress moments and can the startup journey be a force for healing trauma?
(43:04) - The Long-Term Stock Exchange vision: a new ethos and governance approach for the startup community
(45:01) - How mission-driven founders and investors need to be brave to challenge the capitalist status quo
(47:33) - How tech startups can leverage the Public Benefit Corp and how the B-Corp certification won't work for software companies
(51:24) - LTSPV: An SPV Angels can leverage to align their check with long-term thinking
(54:08) - The spiritual journey of investing: what did you really learn vs what do you think happened? 
(57:07) - Speed Round and Final Thoughts
(59:24) - Takeaways
Connect with Us:
Follow the First Funders Podcast
Newsletter with behind-the-scenes access and key takeaways
Twitter/X: @shaherose | @avirani
Email us with feedback and suggestions on topics and guests
Disclaimer: This is for information purposes only. This is not investment advice.
Jim talks with Alex Fink about his company Otherweb, which uses AI to filter out fake news and create a more reliable news ecosystem. They discuss how Alex came to care about this problem, the decline of news media, how advertising wrecked the internet, the idea of an info agent, Otherweb's curation engine, information filtering systems, unhooking the internet from advertising, the fight between AdBlock and Facebook, the decision to disinclude paywalled websites, economic tradeoffs of paywalling, AI in movie production, money-on-money return, initial results of the printing press, watermarking images, fair witnesses, how porn has driven internet innovation, catering to the seven deadly sins, social media addiction, binding the future of the company, public benefit corporations, the stewardship capital model, the crowdfunding process, and much more.
Otherweb
The Other Web (Podcast)
Alex Fink is a Tech Executive, Silicon Valley Expat, and the Founder and CEO of the Otherweb, a Public Benefit Corporation that uses AI to help people read news and commentary, listen to podcasts, and search the web without paywalls, clickbait, ads, autoplaying videos, affiliate links, or any other junk. The Otherweb is available as an app (iOS and Android), a website, a newsletter, or a standalone browser extension.
In this insightful conversation, Dr Tim Pletcher, a prominent figure in health information technology, shares his remarkable journey and vision for transforming healthcare through data interoperability and AI. From his early experiences that shaped his passion for healthcare to his groundbreaking work in building regional health information exchanges and fostering data sharing, Dr Pletcher offers a wealth of knowledge and practical insights. He delves into the challenges and opportunities of leveraging data analytics and AI in healthcare, the role of incentives and collaboration, and the potential impact on patient care and cost reduction. With a unique perspective spanning engineering, informatics, and healthcare administration, Dr Pletcher provides a comprehensive outlook on the future of healthcare in the digital age. 00:08- About Dr Tim Pletcher Dr Tim Pletcher has an amazing background and is definitely a prominent figure in health information technology. He's been in digital transformation and data analytics for a long time. He is doing incredible work in shared services, and particularly in health. --- Support this podcast: https://podcasters.spotify.com/pod/show/tbcy/support
Episode 337 of The VentureFizz Podcast features Gloria Hwang, Founder & CEO of Thousand. Gloria's career has centered around mission-oriented initiatives and companies. Starting at Habitat for Humanity… then to TOMS, which is the gold standard in terms of businesses with a social mission based on their One for One model. She joined the company in its early years and played several key roles while there. As the Founder of Thousand, a Public Benefit Corporation, Gloria named the company after an initial goal of saving 1,000 lives (which they have passed) by making a bike helmet that consumers will actually enjoy wearing. The inspiration for the company came from a tragic story of a coworker who passed away from a bike accident in NYC. When you think of it, it is a category that hasn't really evolved in terms of style and functionality. What started out as a Kickstarter project… then fast forward to today, Thousand has brought a new level of style and innovation to a category that has been stagnant forever. And, it is working. The company continues to grow aggressively and has been expanding into other categories like the launch of its Thousand Jr. collection. In this episode of our podcast, we cover:
* A deep discussion around building a company that is mission and community focused.
* Gloria's background story and her mentality as a builder that started as a child, which was influenced by her father who started the robotics program for NASA.
* Her experience at TOMS, including playing a key role in building out their non-profit and giving programs and what it was like working closely with Blake Mycoskie.
* The full lifecycle story of Thousand, including how she got started, from designing the product to manufacturing to building a brand.
* How they decided to expand their product line based on feedback from existing customers versus trying to build out new markets.
* Starting the company with a bootstrapped mentality and how that helped the company over the long run. 
* And so much more.
On this episode of the Character Outs podcast, I chat with Cody who is not only intelligent and insightful, but also a really cool, down to earth dude. He brings a fascinating perspective to how trauma impacts our mind and our body and how we can get to a place of becoming a 'human being' and not just a 'human doing.' Cody is a seasoned entrepreneur at the forefront of AI and Mental Health. Armed with a degree in Cognitive Behavioral Neuroscience and trained in IFS Psychotherapy, he is the Founder & CEO of Mind, Brain, Body Lab. This groundbreaking Mental Health AI company specializes in treating Complex PTSD using AI/ML algorithms to analyze biometric and behavioral data. Committed to bridging the gap between AI and human experience, Cody also heads MyCompanion, a Public Benefit Corporation providing a free, 24/7 conversational AI companion for healing Complex PTSD. Cody disseminates daily Neuroscience and Psychology tools to improve mental and emotional health post-trauma. His mission is to help one billion people to find happiness and purpose in their lives. Follow on Instagram: https://www.instagram.com/mindbrainbodylab/ Check out the Mind, Brain, Body Lab website here: https://www.mindbrainbodylab.com/ In case no one has told you, it's not you, never has been, you are not alone, and you don't have to fix it. https://characteroutspodcast.com/
Ethan Frisch and Ori Zohar are Co-Founders of Burlap & Barrel, the spice company deeply beloved by the chef community and home cooks, in search of spices that taste like they're meant to taste. On this episode of ITS, Ethan and Ori tell Ali how they not only quintupled in Covid, but how they made sure to stay in the hearts and carts of consumers after the pandemic.Photo courtesy of Burlap and Barrel.Heritage Radio Network is a listener supported nonprofit podcast network. Support In The Sauce by becoming a member!In The Sauce is Powered by Simplecast.
Maggie, Linus, Geoffrey, and the LS crew are reuniting for our second annual AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair — for early birds who join before keynote announcements!
It's become fashionable for many AI startups to project themselves as “the next Google” - while the search engine is so 2000s, both Perplexity and Exa referred to themselves as a “research engine” or “answer engine” in our NeurIPS pod. However, these searches tend to be relatively shallow, and it is challenging to zoom up and down the ladders of abstraction to garner insights. For serious researchers, this level of simple one-off search will not cut it. We've commented in our Jan 2024 Recap that Flow Engineering (simply, multi-turn processes over many-shot single prompts) seems to offer far more performance, control, and reliability for a given cost budget. Our experiments with Devin and our understanding of what the new Elicit Notebooks offer give a glimpse into the potential for very deep, open-ended, thoughtful human-AI collaboration at scale.
It starts with prompts
When ChatGPT exploded in popularity in November 2022, everyone was turned into a prompt engineer. While generative models were good at "vibe based" outcomes (tell me a joke, write a poem, etc.) with basic prompts, they struggled with more complex questions, especially in symbolic fields like math, logic, etc. Two of the most important "tricks" that people picked up on were:
* The Chain of Thought prompting strategy proposed by Wei et al. in “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”. 
Rather than doing traditional few-shot prompting with just questions and answers, adding the thinking process that led to the answer resulted in much better outcomes.
* Adding "Let's think step by step" to the prompt as a way to boost zero-shot reasoning, popularized by Kojima et al. in the Large Language Models are Zero-Shot Reasoners paper from NeurIPS 2022. This bumped accuracy from 17% to 79% compared to plain zero-shot prompting.
Nowadays, prompts include everything from promises of monetary rewards to… whatever the Nous folks are doing to turn a model into a world simulator. At the end of the day, the goal of prompt engineering is increasing accuracy, structure, and repeatability in the generation of a model.
From prompts to agents
As prompt engineering got more and more popular, agents (see “The Anatomy of Autonomy”) took over Twitter with cool demos, and AutoGPT became the fastest-growing repo in GitHub history. The thing about AutoGPT that fascinated people was the ability to simply put in an objective without worrying about explaining HOW to achieve it, or having to write very sophisticated prompts. The problem with open-ended agents like AutoGPT is that 1) it's hard to replicate the same workflow over and over again, and 2) there isn't a way to hard-code specific steps that the agent should take without actually coding them yourself, which isn't what most people want from a product.
From agents to products
Prompt engineering and open-ended agents were great in the experimentation phase, but this year more and more of these workflows are starting to become polished products. Today's guests are Andreas Stuhlmüller and Jungwon Byun of Elicit (previously Ought), an AI research assistant that they think of as “the best place to understand what is known”. Ought was a non-profit, but last September, Elicit spun off into a PBC with a $9m seed round. 
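The two prompt-engineering tricks described earlier (few-shot chain-of-thought and the zero-shot "Let's think step by step" trigger) can be sketched as plain string templates. This is a hand-written illustration with no model call; the worked exemplar is our own, not taken verbatim from the papers:

```python
# Minimal sketch of the two prompting "tricks": few-shot chain-of-thought
# vs. zero-shot "Let's think step by step". Only the prompt text is built
# here -- the exemplar problem below is illustrative, not from the papers.

def few_shot_cot_prompt(question: str) -> str:
    """Few-shot CoT: the exemplar shows its reasoning, not just the answer."""
    exemplar = (
        "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    return exemplar + f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: append the trigger phrase instead of any exemplar."""
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot_prompt("If a train leaves at 3pm and arrives at 5pm, how long is the trip?"))
```

Either string would then be sent to the model of your choice; the point is simply that the "trick" lives entirely in the prompt text, which is why everyone could become a prompt engineer overnight.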
It is hard to quantify how much a workflow can be improved, but Elicit boasts some impressive numbers for research assistants: just four months after launch, Elicit crossed $1M ARR, which shows how much interest there is in AI products that just work. One of the main takeaways we had from the episode is how teams should focus on supervising the process, not the output. Their philosophy at Elicit isn't to train general models, but to train models that are extremely good at focused processes. This allows them to have pre-created steps that the user can add to their workflow (like classifying certain features that are specific to their research field) without having to write a prompt for it. And for Hamel Husain's happiness, they always show you the underlying prompt. Elicit recently announced notebooks as a new interface to interact with their products (fun fact: they tried to implement this 4 times before they landed on the right UX! We discuss this ~33:00 in the podcast). The reasons why they picked notebooks as a UX all tie back to process:
* They are systematic; once you have an instruction/prompt that works on a paper, you can run hundreds of papers through the same workflow by creating a column. Notebooks can also be edited and exported at any point during the flow.
* They are transparent - Many papers include an opaque literature review as perfunctory context before getting to their novel contribution. But PDFs are “dead” and it is difficult to follow the thought process and exact research flow of the authors. Sharing “living” Elicit Notebooks opens up this process.
* They are unbounded - Research is an endless stream of rabbit holes. So it must be easy to dive deeper and follow up with extra steps, without losing the ability to surface for air. We had a lot of fun recording this, and hope you have as much fun listening!
AI UX in SF
Long-time Latent Spacenauts might remember our first AI UX meetup with Linus Lee, Geoffrey Litt, and Maggie Appleton last year. 
Well, Maggie has since joined Elicit, and they are all returning at the end of this month! Sign up here: https://lu.ma/aiux

And submit demos here! https://forms.gle/iSwiesgBkn8oo4SS8

We expect the 200 seats to "sell out" fast. Attendees with demos will be prioritized.

Show Notes

* Elicit
* Ought (their previous non-profit)
* "Pivoting" with GPT-4
* Elicit notebooks launch
* Charlie
* Andreas' Blog

Timestamps

* [00:00:00] Introductions
* [00:07:45] How Jungwon and Andreas Joined Forces to Create Elicit
* [00:10:26] Why Products > Research
* [00:15:49] The Evolution of Elicit's Product
* [00:19:44] Automating Literature Review Workflow
* [00:22:48] How GPT-3 to GPT-4 Changed Things
* [00:25:37] Managing LLM Pricing and Performance
* [00:31:07] Open vs. Closed: Elicit's Approach to Model Selection
* [00:31:56] Moving to Notebooks
* [00:39:11] Elicit's Budget for Model Queries and Evaluations
* [00:41:44] Impact of Long Context Windows
* [00:47:19] Underrated Features and Surprising Applications
* [00:51:35] Driving Systematic and Efficient Research
* [00:53:00] Elicit's Team Growth and Transition to a Public Benefit Corporation
* [00:55:22] Building AI for Good

Full Interview on YouTube

As always, a plug for our YouTube version for the 80% of communication that is nonverbal:

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we are back in the studio with Andreas and Jungwon from Elicit. Welcome.

Jungwon [00:00:20]: Thanks guys.

Andreas [00:00:21]: It's great to be here.

Swyx [00:00:22]: Yeah. So I'll introduce you separately, but also, you know, we'd love to learn a little bit more about you personally. So Andreas, it looks like you started Elicit first, Jungwon joined later.

Andreas [00:00:32]: That's right.
For all intents and purposes, the Elicit and also the Ought that existed before then were very different from what I started. So I think it's like fair to say that you co-founded it.

Swyx [00:00:43]: Got it. And Jungwon, you're a co-founder and COO of Elicit now.

Jungwon [00:00:46]: Yeah, that's right.

Swyx [00:00:47]: So there's a little bit of a history to this. I'm not super aware of like the sort of journey. I was aware of Ought and Elicit as sort of a nonprofit type situation. And recently you turned into like a B Corp, Public Benefit Corporation. So yeah, maybe if you want, you could take us through that journey of finding the problem. You know, obviously you're working together now. So like, how do you get together to decide to leave your startup career to join him?

Andreas [00:01:10]: Yeah, it's truly a very long journey. I guess truly, it kind of started in Germany when I was born. So even as a kid, I was always interested in AI, like I kind of went to the library. There were books about how to write programs in QBasic and like some of them talked about how to implement chatbots.

Jungwon [00:01:27]: To be clear, he grew up in like a tiny village on the outskirts of Munich called Dinkelscherben, where it's like a very, very idyllic German village.

Andreas [00:01:36]: Yeah, important to the story. So basically, the main thing is I've kind of always been thinking about AI my entire life and been thinking about, well, at some point, this is going to be a huge deal. It's going to be transformative. How can I work on it? And was thinking about it from when I was a teenager, after high school did a year where I started a startup with the intention to become rich. And then once I'm rich, I can affect the trajectory of AI. Did not become rich, decided to go back to college and study cognitive science there, which was like the closest thing I could find at the time to AI.
In the last year of college, moved to the US to do a PhD at MIT, working on broadly kind of new programming languages for AI because it kind of seemed like the existing languages were not great at expressing world models and learning world models, doing Bayesian inference. Was always thinking about, well, ultimately, the goal is to actually build tools that help people reason more clearly, ask and answer better questions and make better decisions. But for a long time, it seemed like the technology to put reasoning in machines just wasn't there. Initially, at the end of my postdoc at Stanford, I was thinking about, well, what to do? I think the standard path is you become an academic and do research. But it's really hard to actually build interesting tools as an academic. You can't really hire great engineers. Everything is kind of on a paper-to-paper timeline. And so I was like, well, maybe I should start a startup, pursued that for a little bit. But it seemed like it was too early because you could have tried to do an AI startup, but probably would not have been this kind of AI startup we're seeing now. So then decided to just start a nonprofit research lab that's going to do research for a while until we better figure out how to do thinking in machines. And that was Ought. And then over time, it became clear how to actually build actual tools for reasoning. And only over time, we developed a better way to... I'll let you fill in some of the details here.

Jungwon [00:03:26]: Yeah. So I guess my story maybe starts around 2015. I kind of wanted to be a founder for a long time, and I wanted to work on an idea that stood the test of time for me, like an idea that stuck with me for a long time. And starting in 2015, actually, originally, I became interested in AI-based tools from the perspective of mental health. So there were a bunch of people around me who were really struggling.
One really close friend in particular was really struggling with mental health and didn't have any support, and it didn't feel like there was anything before kind of like getting hospitalized that could just help her. And so luckily, she came and stayed with me for a while, and we were just able to talk through some things. But it seemed like lots of people might not have that resource, and something maybe AI-enabled could be much more scalable. I didn't feel ready to start a company then, that's 2015. And I also didn't feel like the technology was ready. So then I went into FinTech and kind of learned how to do the tech thing. And then in 2019, I felt like it was time for me to just jump in and build something on my own I really wanted to create. And at the time, I looked around at tech and felt like not super inspired by the options. I didn't want to have a tech career ladder, or I didn't want to climb the career ladder. There were two kind of interesting technologies at the time, there was AI and there was crypto. And I was like, well, the AI people seem like a little bit more nice, maybe like slightly more trustworthy, both super exciting, but threw my bet in on the AI side. And then I got connected to Andreas. And actually, the way he was thinking about pursuing the research agenda at Ought was really compatible with what I had envisioned for an ideal AI product, something that helps kind of take down really complex thinking, overwhelming thoughts and breaks it down into small pieces. And then this kind of mission that we need AI to help us figure out what we ought to do was really inspiring, right? Yeah, because I think it was clear that we were building the most powerful optimizer of our time. But as a society, we hadn't figured out how to direct that optimization potential. And if you kind of direct tremendous amounts of optimization potential at the wrong thing, that's really disastrous.
So the goal of Ought was to make sure that if we build the most transformative technology of our lifetime, it can be used for something really impactful, like good reasoning, like not just generating ads. My background was in marketing, but like, so I was like, I want to do more than generate ads with this. But also if these AI systems get to be super intelligent enough that they are doing this really complex reasoning, that we can trust them, that they are aligned with us and we have ways of evaluating that they're doing the right thing. So that's what Ought did. We did a lot of experiments, you know, like I just said, before foundation models really like took off. A lot of the issues we were seeing were more in reinforcement learning, but we saw a future where AI would be able to do more kind of logical reasoning, not just kind of extrapolate from numerical trends. We actually kind of set up experiments with people where kind of people stood in as super intelligent systems and we effectively gave them context windows. So they would have to like read a bunch of text and one person would get less text and one person would get all the texts and the person with less text would have to evaluate the work of the person who could read much more. So like in a world we were basically simulating, like in 2018, 2019, a world where an AI system could read significantly more than you and you as the person who couldn't read that much had to evaluate the work of the AI system. Yeah. So there's a lot of the work we did. And from that, we kind of iterated on the idea of breaking complex tasks down into smaller tasks, like complex tasks, like open-ended reasoning, logical reasoning into smaller tasks so that it's easier to train AI systems on them. And also so that it's easier to evaluate the work of the AI system when it's done. And then also kind of, you know, really pioneered this idea, the importance of supervising the process of AI systems, not just the outcomes.
So a big part of how Elicit is built is we're very intentional about not just throwing a ton of data into a model and training it and then saying, cool, here's like scientific output. Like that's not at all what we do. Our approach is very much like, what are the steps that an expert human does or what is like an ideal process, as granularly as possible, let's break that down and then train AI systems to perform each of those steps very robustly. When you train like that from the start, after the fact, it's much easier to evaluate, it's much easier to troubleshoot at each point. Like where did something break down? So yeah, we were working on those experiments for a while. And then at the start of 2021, decided to build a product.

Swyx [00:07:45]: Do you mind if I, because I think you're about to go into more modern thought and Elicit. And I just wanted to, because I think a lot of people are in where you were like sort of 2018, 19, where you chose a partner to work with. Yeah. Right. And you didn't know him. Yeah. Yeah. You were just kind of cold introduced. A lot of people are cold introduced. Yeah. Never work with them. I assume you had a lot, a lot of other options, right? Like how do you advise people to make those choices?

Jungwon [00:08:10]: We were not totally cold introduced. So one of our closest friends introduced us. And then Andreas had written a lot on the Ought website, a lot of blog posts, a lot of publications. And I just read it and I was like, wow, this sounds like my writing. And even other people, some of my closest friends I asked for advice from, they were like, oh, this sounds like your writing. But I think I also had some kind of like things I was looking for. I wanted someone with a complementary skill set. I wanted someone who was very values-aligned. And yeah, that was all a good fit.

Andreas [00:08:38]: We also did a pretty lengthy mutual evaluation process where we had a Google doc where we had all kinds of questions for each other.
And I think it ended up being around 50 pages or so of like various like questions and back and forth.

Swyx [00:08:52]: Was it the YC list? There's some lists going around for co-founder questions.

Andreas [00:08:55]: No, we just made our own questions. But I guess it's probably related in that you ask yourself, what are the values you care about? How would you approach various decisions and things like that?

Jungwon [00:09:04]: I shared like all of my past performance reviews. Yeah. Yeah.

Swyx [00:09:08]: And he never had any. No.

Andreas [00:09:10]: Yeah.

Swyx [00:09:11]: Sorry, I just had to, a lot of people are going through that phase and you kind of skipped over it. I was like, no, no, no, no. There's like an interesting story.

Jungwon [00:09:20]: Yeah.

Alessio [00:09:21]: Yeah. Before we jump into what Elicit is today, the history is a bit counterintuitive. So you start with figuring out, oh, if we had a super powerful model, how would we align it? But then you were actually like, well, let's just build the product so that people can actually leverage it. And I think there are a lot of folks today that are now back to where you were maybe five years ago that are like, oh, what if this happens rather than focusing on actually building something useful with it? What clicked for you to move into Elicit, and then we can cover that story too.

Andreas [00:09:49]: I think in many ways, the approach is still the same because the way we are building Elicit is not let's train a foundation model to do more stuff. It's like, let's build a scaffolding such that we can deploy powerful models to good ends. I think it's different now in that we actually have like some of the models to plug in. But if in 2017, we had had the models, we could have run the same experiments we did run with humans back then, just with models. And so in many ways, our philosophy is always, let's think ahead to the future of what models are going to exist in one, two years or longer.
And how can we make it so that they can actually be deployed in kind of transparent, controllable ways?

Jungwon [00:10:26]: I think motivationally, we both are kind of product people at heart. The research was really important and it didn't make sense to build a product at that time. But at the end of the day, the thing that always motivated us is imagining a world where high quality reasoning is really abundant and AI is a technology that's going to get us there. And there's a way to guide that technology with research, but we can have a more direct effect through product because with research, you publish the research and someone else has to implement that into the product and the product felt like a more direct path. And we wanted to concretely have an impact on people's lives. Yeah, I think the kind of personally, the motivation was we want to build for people.

Swyx [00:11:03]: Yep. And then just to recap as well, like the models you were using back then were like, I don't know, would they like BERT type stuff or T5 or I don't know what timeframe we're talking about here.

Andreas [00:11:14]: I guess to be clear, at the very beginning, we had humans do the work. And then I think the first models that kind of made sense were GPT-2 and T-NLG and like, yeah, early generative models. We do also use like T5-based models even now, but we started with GPT-2.

Swyx [00:11:30]: Yeah, cool. I'm just kind of curious about like, how do you start so early? You know, like now it's obvious where to start, but back then it wasn't.

Jungwon [00:11:37]: Yeah, I used to nag Andreas a lot. I was like, why are you talking to this? I don't know. I felt like GPT-2 like clearly can't do anything. And I was like, Andreas, you're wasting your time, like playing with this toy. But yeah, he was right.

Alessio [00:11:50]: So what's the history of what Elicit actually does as a product? You recently announced that after four months, you got to a million in revenue.
Obviously, a lot of people use it, get a lot of value, but it was initially kind of like structured data extraction from papers. Then you had kind of like concept grouping. And today, it's maybe like a more full stack research enabler, kind of like paper understander platform. What's the definitive definition of what Elicit is? And how did you get here?

Jungwon [00:12:15]: Yeah, we say Elicit is an AI research assistant. I think it will continue to evolve. That's part of why we're so excited about building in research, because there's just so much space. I think the current phase we're in right now, we talk about it as really trying to make Elicit the best place to understand what is known. So it's all a lot about like literature summarization. There's a ton of information that the world already knows. It's really hard to navigate, hard to make it relevant. So a lot of it is around document discovery and processing and analysis. I really kind of want to import some of the incredible productivity improvements we've seen in software engineering and data science into research. So it's like, how can we make researchers like data scientists of text? That's why we're launching this new set of features called Notebooks. It's very much inspired by computational notebooks, like Jupyter Notebooks, you know, Deepnote or Colab, because they're so powerful and so flexible. And ultimately, when people are trying to get to an answer or understand insight, they're kind of like manipulating evidence and information. Today, that's all packaged in PDFs, which are super brittle. So with language models, we can decompose these PDFs into their underlying claims and evidence and insights, and then let researchers mash them up together, remix them and analyze them together. So yeah, I would say quite simply, overall, Elicit is an AI research assistant.
Right now we're focused on text-based workflows, but long term, really want to kind of go further and further into reasoning and decision making.

Alessio [00:13:35]: And when you say AI research assistant, this is kind of meta research. So researchers use Elicit as a research assistant. It's not a generic you-can-research-anything type of tool, or it could be, but like, what are people using it for today?

Andreas [00:13:49]: Yeah. So specifically in science, a lot of people use human research assistants to do things. You tell your grad student, hey, here are a couple of papers. Can you look at all of these, see which of these have kind of sufficiently large populations and actually study the disease that I'm interested in, and then write out like, what are the experiments they did? What are the interventions they did? What are the outcomes? And kind of organize that for me. And the first phase of understanding what is known really focuses on automating that workflow because a lot of that work is pretty rote work. I think it's not the kind of thing that we need humans to do. Language models can do it. And then if language models can do it, you can obviously scale it up much more than a grad student or undergrad research assistant would be able to do.

Jungwon [00:14:31]: Yeah. The use cases are pretty broad. So we do have a very large percent of our users who are just using it personally or for a mix of personal and professional things. People who care a lot about health or biohacking or parents who have children with a kind of rare disease and want to understand the literature directly. So there is an individual kind of consumer use case. We're most focused on the power users. So that's where we're really excited to build. So Elicit was very much inspired by this workflow in the literature called systematic reviews or meta-analysis, which is basically the human state of the art for summarizing scientific literature.
And it typically involves like five people working together for over a year. And they kind of first start by trying to find the maximally comprehensive set of papers possible. So it's like 10,000 papers. And they kind of systematically narrow that down to like hundreds or 50, extract key details from every single paper. Usually have two people doing it, like a third person reviewing it. So it's like an incredibly laborious, time consuming process, but you see it in every single domain. So in science, in machine learning, in policy. Because it's so structured and designed to be reproducible, it's really amenable to automation. So that's kind of the workflow that we want to automate first. And then you make that accessible for any question and make these really robust living summaries of science. So yeah, that's one of the workflows that we're starting with.

Alessio [00:15:49]: Our previous guest, Mike Conover, he's building a new company called Brightwave, which is an AI research assistant for financial research. How do you see the future of these tools? Does everything converge to like a God researcher assistant, or is every domain going to have its own thing?

Andreas [00:16:03]: I think that's a good and mostly open question. I do think there are some differences across domains. For example, some research is more quantitative data analysis, and other research is more high level cross domain thinking. And we definitely want to contribute to the broad generalist reasoning type space. Like if researchers are making discoveries often, it's like, hey, this thing in biology is actually analogous to like these equations in economics or something. And that's just fundamentally a thing where you need to reason across domains. At least within research, I think there will be like one best platform more or less for this type of generalist research.
I think there may still be like some particular tools like for genomics, like particular types of modules of genes and proteins and whatnot. But for a lot of the kind of high level reasoning that humans do, I think that is more of a winner-take-all thing.

Swyx [00:16:52]: I wanted to ask a little bit deeper about, I guess, the workflow that you mentioned. I like that phrase. I see that in your UI now, but that's as it is today. And I think you were about to tell us about how it was in 2021 and how it may be progressed. How has this workflow evolved over time?

Jungwon [00:17:07]: Yeah. So the very first version of Elicit actually wasn't even a research assistant. It was a forecasting assistant. So we set out and we were thinking about, you know, what are some of the most impactful types of reasoning that if we could scale up, AI would really transform the world. We actually started with literature review, but we're like, oh, so many people are going to build literature review tools. So let's not start there. So then we focused on geopolitical forecasting. So I don't know if you're familiar with like Manifold or Manifold Markets. That kind of stuff. Before Manifold. Yeah. Yeah. Not predicting relationships. We're predicting like, is China going to invade Taiwan?

Swyx [00:17:38]: Markets for everything.

Andreas [00:17:39]: Yeah. That's a relationship.

Swyx [00:17:41]: Yeah.

Jungwon [00:17:42]: Yeah. It's true. And then we worked on that for a while. And then after GPT-3 came out, I think by that time we realized that originally we were trying to help people convert their beliefs into probability distributions. And so take fuzzy beliefs, but like model them more concretely. And then after a few months of iterating on that, just realized, oh, the thing that's blocking people from making interesting predictions about important events in the world is less kind of on the probabilistic side and much more on the research side.
And so that kind of combined with the very generalist capabilities of GPT-3 prompted us to make a more general research assistant. Then we spent a few months iterating on what even is a research assistant. So we would embed with different researchers. We built data labeling workflows in the beginning, kind of right off the bat. We built ways to find experts in a field and like ways to ask good research questions. So we just kind of iterated through a lot of workflows and no one else was really building at this time. And it was like very quick to just do some prompt engineering and see like what is a task that is at the intersection of what's technologically capable and like important for researchers. And we had like a very nondescript landing page. It said nothing. But somehow people were signing up and we had a sign-up form that was like, why are you here? And everyone was like, I need help with literature review. And we're like, oh, literature review. That sounds so hard. I don't even know what that means. We're like, we don't want to work on it. But then eventually we were like, okay, everyone is saying literature review. It's overwhelmingly people want to-

Swyx [00:19:02]: And all domains, not like medicine or physics or just all domains. Yeah.

Jungwon [00:19:06]: And we also kind of personally knew literature review was hard. And if you look at the graphs for academic literature being published every single month, you guys know this in machine learning, it's like up and to the right, like superhuman amounts of papers. So we're like, all right, let's just try it. I was really nervous, but Andreas was like, this is kind of like the right problem space to jump into, even if we don't know what we're doing. So my take was like, fine, this feels really scary, but let's just launch a feature every single week and double our user numbers every month. And if we can do that, we'll fail fast and we will find something.
I was worried about like getting lost in the kind of academic white space. So the very first version was actually a weekend prototype that Andreas made. Do you want to explain how that worked?

Andreas [00:19:44]: I mostly remember that it was really bad. The thing I remember is you entered a question and it would give you back a list of claims. So your question could be, I don't know, how does creatine affect cognition? It would give you back some claims that are to some extent based on papers, but they were often irrelevant. The papers were often irrelevant. And so we ended up soon just printing out a bunch of examples of results and putting them up on the wall so that we would kind of feel the constant shame of having such a bad product and would be incentivized to make it better. And I think over time it has gotten a lot better, but I think the initial version was like really very bad. Yeah.

Jungwon [00:20:20]: But it was basically like a natural language summary of an abstract, like kind of a one sentence summary, and which we still have. And then as we learned kind of more about this systematic review workflow, we started expanding the capability so that you could extract a lot more data from the papers and do more with that.

Swyx [00:20:33]: And were you using like embeddings and cosine similarity, that kind of stuff for retrieval, or was it keyword based?

Andreas [00:20:40]: I think the very first version didn't even have its own search engine. I think the very first version probably used the Semantic Scholar API or something similar. And only later when we discovered that API is not very semantic, we then built our own search engine that has helped a lot.

Swyx [00:20:58]: And then we're going to go into like more recent product stuff, but like, you know, I think you seem like the more sort of startup-oriented business person and you seem sort of more ideologically like interested in research, obviously, because of your PhD.
What kind of market sizing were you guys thinking? Right? Like, because you're here saying like, we have to double every month. And I'm like, I don't know how you make that conclusion from this, right? Especially also as a nonprofit at the time.

Jungwon [00:21:22]: I mean, market size wise, I felt like in this space where so much was changing and it was very unclear what of today was actually going to be true tomorrow. We just like really rested a lot on very, very simple fundamental principles, which is like, if you can understand the truth, that is very economically beneficial and valuable. If you like know the truth.

Swyx [00:21:42]: On principle.

Jungwon [00:21:43]: Yeah. That's enough for you. Yeah. Research is the key to many breakthroughs that are very commercially valuable.

Swyx [00:21:47]: Because my version of it is students are poor and they don't pay for anything. Right? But that's obviously not true. As you guys have found out. But you had to have some market insight for me to have believed that, but you skipped that.

Andreas [00:21:58]: Yeah. I remember talking to VCs for our seed round. A lot of VCs were like, you know, researchers, they don't have any money. Why don't you build a legal assistant? I think in some short-sighted way, maybe that's true. But I think in the long run, R&D is such a big space of the economy. I think if you can substantially improve how quickly people find new discoveries or avoid controlled trials that don't go anywhere, I think that's just huge amounts of money. And there are a lot of questions obviously about between here and there. But I think as long as the fundamental principle is there, we were okay with that. And I guess we found some investors who also were. Yeah.

Swyx [00:22:35]: Congrats. I mean, I'm sure we can cover the sort of flip later. I think you're about to start us on like GPT-3 and how that changed things for you. It's funny. I guess every major GPT version, you have some big insight.
Yeah.

Jungwon [00:22:48]: Yeah. I mean, what do you think?

Andreas [00:22:51]: I think it's a little bit less true for us than for others, because we always believed that there will basically be human-level machine work. And so it is definitely true that in practice for your product, as new models come out, your product starts working better, you can add some features that you couldn't add before. But I don't think we really ever had the moment where we were like, oh, wow, that is super unanticipated. We need to do something entirely different now from what was on the roadmap.

Jungwon [00:23:21]: I think GPT-3 was a big change because it kind of said, oh, now is the time that we can use AI to build these tools. And then GPT-4 was maybe a little bit more of an extension of GPT-3. GPT-3 over GPT-2 was like a qualitative level shift. And then GPT-4 was like, okay, great. Now it's like more accurate. We're more accurate on these things. We can answer harder questions. But the shape of the product had already taken place by that time.

Swyx [00:23:44]: I kind of want to ask you about this sort of pivot that you've made. But I guess that was just a way to sell what you were doing, which is you're adding extra features on grouping by concepts. The GPT-4 pivot, quote unquote pivot that you-

Jungwon [00:23:55]: Oh, yeah, yeah, exactly. Right, right, right. Yeah. Yeah. When we launched this workflow, now that GPT-4 was available, basically Elicit was at a place where we have very tabular interfaces. So given a table of papers, you can extract data across all the tables. But you kind of want to take the analysis a step further. Sometimes what you'd care about is not having a list of papers, but a list of arguments, a list of effects, a list of interventions, a list of techniques.
And so that's one of the things we're working on: now that you've extracted this information in a more structured way, can you pivot it or group by whatever information you extracted, to have more insight-first information still supported by the academic literature?

Swyx [00:24:33]: Yeah, that was a big revelation when I saw it. Basically, I think I'm very just impressed by how first-principles your ideas around what the workflow is. And I think that's why you're not as reliant on like the LLM improving, because it's actually just about improving the workflow that you would recommend to people. Today we might call it an agent, I don't know, but you're not relying on the LLM to drive it. It's relying on this is the way that Elicit does research. And this is what we think is most effective based on talking to our users.

Jungwon [00:25:01]: The problem space is still huge. Like if it's like this big, we are all still operating at this tiny part of it. So I think about this a lot in the context of moats, people are like, oh, what's your moat? What happens if GPT-5 comes out? It's like, if GPT-5 comes out, there's still like all of this other space that we can go into. So I think being really obsessed with the problem, which is very, very big, has helped us like stay robust and just kind of directly incorporate model improvements and they keep going.

Swyx [00:25:26]: And then I first encountered you guys with Charlie, you can tell us about that project. Basically, yeah. Like how much did cost become a concern as you're working more and more with OpenAI? How do you manage that relationship?

Jungwon [00:25:37]: Let me talk about who Charlie is. And then you can talk about the tech, because Charlie is a special character. So Charlie, when we found him, had just finished his freshman year at the University of Warwick. And I think he had heard about us on some Discord. And then he applied and we were like, wow, who is this freshman?
And then we just saw that he had done so many incredible side projects. We were actually on a team retreat in Barcelona visiting our head of engineering at that time, and everyone was talking about this wunderkind. And then on our take-home project, he had done the best of anyone to that point. So people were just so excited to hire him. We hired him as an intern, and then we were like, Charlie, what if you just dropped out of school? And so we convinced him to take a year off. And he was just incredibly productive. I think the thing you're referring to is, at the start of 2023, Anthropic launched their Constitutional AI paper, and within a few days, I think four days, he had basically implemented that in production. And then we had it in app a week or so after that. And he has since contributed to major improvements, like cutting costs down to a tenth of what they were at really large scale. But yeah, you can talk about the technical stuff. Yeah.

Andreas [00:26:39]: On the Constitutional AI project, this was for abstract summarization, where in Elicit, if you run a query, it'll return papers to you, and then it will summarize each paper with respect to your query for you on the fly. And that's a really important part of Elicit because Elicit does it so much. If you run a few searches, it'll have done it a few hundred times for you. And so we cared a lot about this being fast, cheap, and also very low on hallucination. If Elicit hallucinates something about the abstract, that's really not good. And so what Charlie did in that project was create a constitution that expressed what the attributes of a good summary are: everything in the summary is reflected in the actual abstract, it's very concise, et cetera, et cetera. And then he used RLHF with a model that was trained on the constitution to basically fine-tune a better summarizer on an open-source model. Yeah.
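As an aside, the constitution-as-rubric idea Andreas describes hasn't been published as code; a minimal sketch of the critique step might look like this, with the LLM judge stubbed out as a naive lexical check (the constitution text, helper names, and example abstract are all illustrative, not Elicit's):

```python
# Sketch of a constitutional-AI-style critique pass for abstract summaries.
# The "judge" is a naive lexical stub standing in for a language model.

CONSTITUTION = [
    "Every claim in the summary is reflected in the actual abstract.",
    "The summary is concise.",
]

def claim_supported(summary: str, abstract: str) -> bool:
    # Stub: treat a summary as supported if every word appears in the abstract.
    return all(w.lower().strip(".") in abstract.lower() for w in summary.split())

def critique(summary: str, abstract: str, max_words: int = 50) -> list[str]:
    """Return the list of constitution principles the summary violates."""
    violations = []
    if not claim_supported(summary, abstract):
        violations.append(CONSTITUTION[0])
    if len(summary.split()) > max_words:
        violations.append(CONSTITUTION[1])
    return violations

abstract = "We measured creatine supplementation and found improved memory."
faithful = "creatine improved memory"
hallucinated = "creatine cures insomnia"
```

A real pipeline would feed the violations back into a revision prompt and, as described above, use the critiqued outputs to fine-tune a cheaper open-source summarizer.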
I think that might still be in use.

Jungwon [00:27:34]: Yeah. Yeah, definitely. I think at the time, the models hadn't been trained at all to be faithful to a text, so they were just generating. So when you asked them a question, they tried too hard to answer the question and didn't try hard enough to answer the question given the text, or answer what the text said about the question. So we had to basically teach the models to do that specific task.

Swyx [00:27:54]: How do you monitor the ongoing performance of your models? Not to get too LLM-opsy, but you are one of the larger, more well-known operations doing NLP at scale. I guess effectively you have to monitor these things, and nobody I talk to has a good answer.

Andreas [00:28:10]: I don't think we have a good answer yet. I think the answers are actually a little bit clearer on the basic robustness side, where you can import ideas from normal software engineering and normal DevOps. You're like, well, you need to monitor latencies and response times and uptime and whatnot.

Swyx [00:28:27]: I think when we say performance, it's more about hallucination rate, isn't it?

Andreas [00:28:30]: And then for things like hallucination rate, I think the really important thing is training time. So we care a lot about having our own internal benchmarks for model development that reflect the distribution of user queries, so that we can know ahead of time how well the model is going to perform on different types of tasks. The tasks being summarization, question answering given a paper, and ranking. And for each of those, we want to know what's the distribution of things the model is going to see, so that we can have well-calibrated predictions on how well the model is going to do in production. And yeah, there's some chance that there's distribution shift and the things users actually enter are going to be different.
But I think that's much less important than getting the training right and having very high quality, well-vetted datasets at training time.

Jungwon [00:29:18]: I think we also end up effectively monitoring by trying to evaluate new models as they come out. And so that kind of prompts us to go through our eval suite every couple of months. Every time a new model comes out, we have to see how it's performing relative to production and what we currently have.

Swyx [00:29:32]: Yeah. I mean, since we're on this topic, any new models that have really caught your eye this year?

Jungwon [00:29:37]: Claude came out with a bunch. I think the team's pretty excited about Claude. Yeah.

Andreas [00:29:41]: Specifically, Claude Haiku is a good point on the Pareto frontier. It's neither the cheapest model, nor is it the most accurate, highest quality model, but it's just a really good trade-off between cost and accuracy.

Swyx [00:29:57]: You apparently have to 10-shot it to make it good. I tried using Haiku for summarization, but zero-shot was not great. Then they were like, you know, it's a skill issue, you have to try harder.

Jungwon [00:30:07]: I think GPT-4 unlocked tables for us, processing data from tables, which was huge. GPT-4 Vision.

Andreas [00:30:13]: Yeah.

Swyx [00:30:14]: Yeah. Did you try Fuyu? I guess you can't try Fuyu because it's non-commercial. That's the Adept model.

Jungwon [00:30:19]: Yeah.

Swyx [00:30:20]: We haven't tried that one. But Claude is multimodal as well. I think the interesting insight that we got from talking to David Luan, who is CEO of Adept, is that multimodality has effectively two different flavors. One is where you recognize images from a camera in the outside natural world. And actually the more important multimodality for knowledge work is screenshots and PDFs and charts and graphs.
So we need a new term for that kind of multimodality.

Andreas [00:30:45]: But is the claim that current models are good at one or the other? Yeah.

Swyx [00:30:50]: They're over-indexed, because the history of computer vision is COCO, right? So now we're like, oh, actually, screens are more important: OCR, handwriting. You mentioned a lot of closed model lab stuff, and then you also have this open-source model fine-tuning stuff. What is your workload now between closed and open?

Andreas [00:31:07]: It's a good question. I think- Is it half and half? It's a-

Swyx [00:31:10]: Is that even a relevant question or not? Is this a nonsensical question?

Andreas [00:31:13]: It depends a little bit on how you index, whether you index by compute cost or number of queries. I'd say in terms of number of queries, it's maybe similar. In terms of cost and compute, I think the closed models make up more of the budget, since the main cases where you want to use closed models are cases where they're just smarter, where no existing open-source models are quite smart enough.

Jungwon [00:31:35]: Yeah.

Alessio [00:31:37]: We have a lot of interesting technical questions to get into, but just to wrap up the UX evolution: now you have the notebooks. We talked a lot about how chatbots are not the final frontier, you know? How did you decide to get into notebooks, which is a very iterative, interactive kind of interface? And yeah, maybe learnings from that.

Jungwon [00:31:56]: Yeah. This is actually our fourth time trying to make this work. I think the first time was probably in early 2021. Because we've always been obsessed with this idea of task decomposition and branching, we always wanted a tool that could be kind of unbounded, where you could keep going, could do a lot of branching, where you could apply language model operations or computations on other tasks.
So in 2021, we had this thing called composite tasks, where you could use GPT-3 to brainstorm a bunch of research questions and then take each research question and decompose those further into sub-questions. Again, that task decomposition tree type of thing was always very exciting to us, but it didn't work and it was kind of overwhelming. Then at the end of 2022, we tried again, and at that point we were thinking, okay, we've done a lot with this literature review thing. We also want to start helping with adjacent domains and different workflows. We want to help more with machine learning, say. What does that look like? And as we were thinking about it, we're like, well, there are so many research workflows. How do we not just build three new workflows into Elicit, but make Elicit really generic to lots of workflows? What is a generic, composable system with nice abstractions that can scale to all these workflows? So we iterated on that a bunch and then didn't quite narrow the problem space enough or quite get to what we wanted. And then I think it was at the beginning of 2023 where we realized computational notebooks kind of enable this: they have a lot of flexibility, but robust primitives, such that you can extend the workflow and it's not limited. It's not like you ask a query, you get an answer, you're done. You can just constantly keep building on top of that. And each little step seems like a really good unit of work for the language model. It was also just really helpful to have a bit more preexisting work to emulate. Yeah, that's kind of how we ended up at computational notebooks for Elicit.

Andreas [00:33:44]: Maybe one thing that's worth making explicit is the difference between computational notebooks and chat, because on the surface they seem pretty similar. It's kind of this iterative interaction where you add stuff.
In both cases, you have a back and forth where you enter stuff, then you get some output, then you enter stuff. But the important difference in our minds is that with notebooks, you can define a process. So in data science, you can be like, here's my data analysis process that takes in a CSV, then does some extraction, then generates a figure at the end. And you can prototype it using a small CSV and then run it over a much larger CSV later. Similarly, the vision for notebooks in our case is to not make it this one-off chat interaction, but to allow you to say: first, let me just analyze a few papers and see, do I get to the correct conclusions for those few papers? Can I then later go back and run this over 10,000 papers, now that I've debugged the process using a few papers? And that's an interaction that doesn't fit quite as well into the chat framework, because that's more for quick back-and-forth interaction.

Alessio [00:34:49]: Do you think of notebooks as kind of structured, editable chain of thought, basically step by step? Is that kind of where you see this going? And then are people going to reuse notebooks as templates? In traditional notebooks, it's like cookbooks, right? You share a cookbook, you can start from there. Is this similar in Elicit?

Andreas [00:35:06]: Yeah, that's exactly right. So that's our hope, that people will build templates and share them with other people. I think chain of thought is maybe still one level lower on the abstraction hierarchy than how we think of notebooks. We'll probably want to think about more semantic pieces: a building block is more like a paper search or an extraction or a list of concepts. And then the model's detailed reasoning will probably often be one level down.
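As an aside, the prototype-on-a-few, run-on-many pattern Andreas describes can be sketched with a stubbed extraction step (the field names and numbers here are invented for illustration):

```python
# Sketch: define a process once, debug it on a pilot set, then run it
# unchanged over a much larger corpus. The extraction step is a stub
# standing in for a language-model call.

def extract_sample_size(paper: dict) -> int:
    # Stand-in for an LLM extraction like "how many people were studied?"
    return paper["n"]

def process(papers: list[dict]) -> float:
    """The 'notebook process': extract a field from each paper, then aggregate."""
    sizes = [extract_sample_size(p) for p in papers]
    return sum(sizes) / len(sizes)

pilot = [{"n": 10}, {"n": 30}]                 # debug on a handful of papers first
corpus = [{"n": i} for i in range(1, 10001)]   # then scale to 10,000 papers
```

The point is that `process` is the same object in both runs; only its input changes, which is what distinguishes a notebook process from a one-off chat exchange.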
You always want to be able to see it, but you don't always want it to be front and center.

Alessio [00:35:36]: Yeah, what's the difference between a notebook and an agent? Since everybody always asks me, what's an agent? How do you think about where the line is?

Andreas [00:35:44]: Yeah, it's an interesting question. In the notebook world, I would generally think of the human as the agent in the first iteration. So you have the notebook, and the human adds little action steps. And then the next point on this progress gradient is, okay, now you can use language models to predict which action you would take as a human. And at some point you're probably going to be very good at this: okay, in some cases I can, with 99.9% accuracy, predict what you do. And then you might as well just execute it, so why wait for the human? And eventually, as you get better at this, that will just look more and more like agents taking actions, as opposed to you doing the thing. I think templates are a specific case of this, where there are particular sequences of actions that you often want to chunk and have available as primitives, just like in normal programming. And you can view those as action sequences of agents, or you can view them as a more normal programming language abstraction. And I think those are two valid views. Yeah.

Alessio [00:36:40]: How do you see this changing as, like you said, the models get better and you need less and less human interfacing with the model, and you just get the results? How do the UX and the way people perceive it change?

Jungwon [00:36:52]: Yeah, I think this kind of interaction paradigm for evaluation is not really something the internet has encountered yet, because up to now, the internet has all been about getting data and work from people.
So increasingly, I really want evaluation, both from an interface perspective and from a technical and operations perspective, to be a superpower for Elicit, because I think over time, models will do more and more of the work, and people will have to do more and more of the evaluation. So in terms of the interface, some of the things we have today: for every language model generation, there's some citation back, and we try to highlight the ground truth in the paper that is most relevant to whatever Elicit said, and make it super easy so that you can click on it, quickly see it in context, and validate whether the text actually supports the answer that Elicit gave. So we'd probably want to scale up interfaces like that, the ability to spot-check the model's work super quickly. And-

Swyx [00:37:44]: Who would spot check? The user?

Jungwon [00:37:46]: Yeah, to start, it would be the user. One of the other things we do is also flag the model's uncertainty. So we have models report out, how confident are you that this was the sample size of this study? If the model's not sure, we throw a flag, and so the user knows to prioritize checking that. So again, we can scale that up. So when the model's like, well, I searched this on Google, I'm not sure if that was the right thing, there's an uncertainty flag, and the user can go and be like, oh, okay, that was actually the right thing to do or not.

Swyx [00:38:10]: I've tried to do uncertainty readings from models. I don't know if you have this live. You do? Yeah. Because I just didn't find them reliable: they just hallucinated their own uncertainty. I would love to base it on logprobs or something more native within the model rather than generated. But okay, it sounds like they scale properly for you. Yeah.

Jungwon [00:38:30]: We found it to be pretty calibrated.
It varies by model.

Andreas [00:38:32]: I think in some cases, we also use a different model for the uncertainty estimates than for the question answering. So one model would say, here's my chain of thought, here's my answer. And then a different type of model, let's say the first model is Llama and the second model is GPT-3.5, just looks over the results and is like, okay, how confident are you in this? And I think sometimes using a different model can be better than using the same model. Yeah.

Swyx [00:38:58]: On the topic of models evaluating models, obviously you can do that all day long. What's your budget? Because your queries fan out a lot, and then you have models evaluating models. One person typing in a question can lead to a thousand calls.

Andreas [00:39:11]: It depends on the project. If the project is basically a systematic review that human research assistants would otherwise do, then the project is basically a human-equivalent spend, and the spend can get quite large for those projects, I don't know, let's say $100,000. In those cases, you're happier to spend compute than in the shallow search case where someone just enters a question because, I don't know, maybe they heard about creatine, what's it about? You probably don't want to spend a lot of compute on that. This ability to invest more or less compute into getting more or less accurate answers is, I think, one of the core things we care about, and one that I think is currently undervalued in the AI space. Currently you can choose which model you want, and you can sometimes, I don't know, tip it and it'll try harder, or you can try various things to get it to work harder. But you don't have great ways of converting willingness to spend into better answers.
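As an aside, the answerer-plus-verifier pattern Andreas mentions (one model answers, a second, different model scores confidence) can be sketched as follows; both model calls are stubs, and the threshold is an invented example value:

```python
# Sketch of two-model uncertainty flagging: one model answers, a second
# model scores how confident it is in that answer. Low scores raise a
# flag so the user knows to prioritize checking that result.

def answer_model(question: str, text: str) -> str:
    # Stub answerer (in the discussion above, e.g. a Llama model).
    return "120" if "120 participants" in text else "not stated"

def confidence_model(question: str, text: str, answer: str) -> float:
    # Stub verifier (e.g. GPT-3.5 looking over the first model's work).
    return 0.95 if answer != "not stated" else 0.30

def answer_with_flag(question: str, text: str, threshold: float = 0.7) -> dict:
    ans = answer_model(question, text)
    conf = confidence_model(question, text, ans)
    return {"answer": ans, "confidence": conf, "flagged": conf < threshold}
```

Using a second model for the verifier step matches the observation above that a different model can judge more reliably than the one that produced the answer.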
And we really want to build a product that has this sort of unbounded flavor, where if you care about it a lot, you should be able to get really high quality answers, really double-checked in every way.

Alessio [00:40:14]: And you have credits-based pricing. So unlike most products, it's not a fixed monthly fee.

Jungwon [00:40:19]: Right, exactly. So some of the higher costs are tiered. For most casual users, they'll just get the abstract summary, which comes from an open-source model. Then you can add more columns, which have more extractions and these uncertainty features. And then you can also add the same columns in high-accuracy mode, which also parses the table. So we stack the complexity on the calls.

Swyx [00:40:39]: You know, the fun thing you can do with a credit system is data for data: basically, you can give people more credits if they give data back to you. I don't know if you've already done that.

Jungwon [00:40:49]: We've thought about something like this. It's like, if you don't have money but you have time, how do you exchange that?

Swyx [00:40:54]: It's a fair trade.

Jungwon [00:40:55]: I think it's interesting. We haven't quite operationalized it. And then, you know, there's been some kind of adverse selection concern. For example, it would be really valuable to get feedback on our model. So maybe if you were willing to give more robust feedback on our results, we could give you credits or something like that. But then there's this question of, will people take it seriously? And you want the good people.

Swyx [00:41:11]: Exactly. Can you tell who the good people are?

Jungwon [00:41:13]: Not right now. But yeah, maybe at the point where we can, we can offer it up to them.

Swyx [00:41:16]: The perplexity of questions asked, you know, if it's higher perplexity, these are the smarter

Jungwon [00:41:20]: people.
Yeah, maybe.

Andreas [00:41:23]: If you put typos in your queries, you're not going to get off the stage.

Swyx [00:41:28]: Negative social credit. It's very topical right now to think about the threat of long context windows. All these models that we're talking about these days are all a million tokens plus. Is that relevant for you? Can you make use of that? Is that just prohibitively expensive because you're paying for all those tokens, or are you just doing RAG?

Andreas [00:41:44]: It's definitely relevant. When we think about search, as many people do, we think about a staged pipeline of retrieval, where first you use a semantic search database with embeddings to get, in our case, maybe the 400 or so most relevant papers. And then you still need to rank those, and I think at that point it becomes pretty interesting to use larger models. In the past, a lot of ranking was per-item ranking, where you would score each individual item, maybe using increasingly expensive scoring methods, and then rank based on the scores. But I think list-wise re-ranking, where you have a model that can see all the elements, is a lot more powerful, because often you can only really tell how good a thing is in comparison to other things, and what should come first really depends on what other things are available. Maybe you even care about diversity in your results: you don't want to show 10 very similar papers as the first 10 results. So I think long context models are quite interesting there, especially for our case, where we care more about power users who are perhaps a little bit more willing to wait a little longer to get higher quality results, relative to people who just quickly check things out because why not. And I think being able to spend more on longer contexts is quite valuable.

Jungwon [00:42:55]: Yeah.
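As an aside, the per-item versus list-wise distinction Andreas draws can be illustrated with a greedy diversity heuristic standing in for the model that sees the whole list (the papers, scores, and topics are made up):

```python
# Sketch contrasting per-item scoring with list-wise re-ranking.
# The list-wise ranker sees every candidate at once, so it can demote
# near-duplicates and keep the top of the list diverse.

def per_item_rank(papers: list[dict]) -> list[str]:
    # Each paper scored in isolation; duplicates of one topic can dominate.
    return [p["id"] for p in sorted(papers, key=lambda p: -p["score"])]

def listwise_rank(papers: list[dict]) -> list[str]:
    # Greedy stand-in for a model that sees the whole list: surface one
    # paper per topic first, then append the rest by score.
    by_score = sorted(papers, key=lambda p: -p["score"])
    ranked, seen = [], set()
    for p in by_score:
        if p["topic"] not in seen:
            ranked.append(p["id"])
            seen.add(p["topic"])
    ranked += [p["id"] for p in by_score if p["id"] not in ranked]
    return ranked

papers = [
    {"id": "a", "score": 0.9, "topic": "creatine"},
    {"id": "b", "score": 0.8, "topic": "creatine"},
    {"id": "c", "score": 0.7, "topic": "caffeine"},
]
```

Per-item scoring returns the two near-duplicate creatine papers first; the list-wise ranker promotes the caffeine paper for diversity, which is the property a long-context re-ranker can optimize for directly.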
I think one thing the longer context models changed for us is maybe a shift in focus from breaking down tasks to breaking down the evaluation. Before, if we wanted to answer a question from the full text of a paper, we had to figure out how to chunk it, find the relevant chunk, and then answer based on that chunk. And the nice thing was that we then knew which chunk the model used to answer the question. So if you want to help the user track it, you can be like, well, this was the chunk that the model got. And now, if you put the whole text of the paper in, you have to find the chunk more retroactively, basically. So you need a different set of abilities, and obviously a different technology, to figure that out. You still want to point the user to the supporting quotes in the text, but the interaction is a little different.

Swyx [00:43:38]: You scan through and find some ROUGE score floor.

Andreas [00:43:41]: I think there's an interesting space of almost research problems here, because you would ideally make causal claims: if this hadn't been in the text, the model wouldn't have said this thing. And maybe you can do expensive approximations to that, where, I don't know, you just throw out a chunk of the paper, re-answer, and see what happens. But hopefully there are better ways of doing that, where you just get that kind of counterfactual information for free from the model.

Alessio [00:44:06]: Do you think at all about the cost of maintaining RAG versus just putting more tokens in the window? In software development, a lot of times people buy developer productivity tools so that they don't have to worry about it. The context window is kind of the same, right? You have to maintain chunking and RAG retrieval and re-ranking and all of this, versus: I just shove everything into the context, and it costs a little more, but at least I don't have to do all of that.
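As an aside, the expensive approximation Andreas describes, throwing out a chunk and re-answering to test the causal claim, can be sketched like this (the stub model only answers when the key sentence is in context):

```python
# Sketch of counterfactual attribution by ablation: drop one chunk at a
# time, re-answer, and record which chunks the answer depends on.

def answer(chunks: list[str]) -> str:
    # Stub model: can only answer if the key sentence is present.
    if any("120 participants" in c for c in chunks):
        return "120"
    return "not stated"

def causal_chunks(chunks: list[str]) -> list[int]:
    """Indices of chunks whose removal changes the model's answer."""
    base = answer(chunks)
    return [
        i for i in range(len(chunks))
        if answer(chunks[:i] + chunks[i + 1:]) != base
    ]

chunks = [
    "Background on memory and supplementation.",
    "We recruited 120 participants for the trial.",
    "Discussion and limitations.",
]
```

Each ablation costs a full re-answer, so this scales linearly in the number of chunks, which is why the hope expressed above is to get the counterfactual signal more cheaply from the model itself.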
Is that something you thought about?

Jungwon [00:44:31]: I think we still hit up against context limits enough that it's not really a question of whether we still want to keep this RAG around. We do still need it for the scale of the work that we're doing, yeah.

Andreas [00:44:41]: And I think there are different kinds of maintainability. In one sense, I think you're right that the throw-everything-into-the-context-window approach is easier to maintain, because you can just swap out a model. In another sense, if things go wrong, it's harder to debug. If you know, here's the process that we go through to go from 200 million papers to an answer, and there are little steps and you understand, okay, this is the step that finds the relevant paragraph, or whatever it may be, then you'll know which step breaks if the answers are bad. Whereas if it's just, a new model version came out and now it suddenly doesn't find your needle in a haystack anymore, then what can you do? You're kind of at a loss.

Alessio [00:45:21]: Let's talk a bit about needle in a haystack, and maybe the opposite of it, which is hard grounding. I don't know if that's the best name for it, but I was using one of these chat-with-your-documents features: I put in the AMD MI300 specs and the new Blackwell chips from NVIDIA, and I was asking questions like, does the AMD chip support NVLink? And the response was, oh, it doesn't say in the specs. But if you ask GPT-4 without the docs, it would tell you no, because NVLink is an NVIDIA technology.

Swyx [00:45:49]: It just says it in the thing.

Alessio [00:45:53]: How do you think about that? Does using the context sometimes suppress the knowledge that the model has?

Andreas [00:45:57]: It really depends on the task, because I think sometimes that is exactly what you want. Imagine you're a researcher writing the background section of your paper, and you're trying to describe what these other papers say.
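As an aside, Andreas's debuggability point, that a staged pipeline tells you which step broke, can be sketched by having each stage write its output into a trace (all three stages here are trivial stubs):

```python
# Sketch of a staged retrieval pipeline whose intermediate outputs are
# kept in a trace, so a bad final answer can be attributed to the stage
# (retrieval, re-ranking, generation) that produced it.

def retrieve(query: str, corpus: list[str]) -> list[str]:
    # Stage 1 stub: stands in for embedding search over millions of papers.
    return [doc for doc in corpus if query in doc]

def rerank(query: str, docs: list[str]) -> list[str]:
    # Stage 2 stub: stands in for list-wise re-ranking.
    return sorted(docs, key=len)

def generate(query: str, docs: list[str]) -> str:
    # Stage 3 stub: stands in for answer generation over the top document.
    return docs[0] if docs else "no answer"

def pipeline(query: str, corpus: list[str]) -> dict:
    trace = {"retrieved": retrieve(query, corpus)}
    trace["reranked"] = rerank(query, trace["retrieved"])
    trace["answer"] = generate(query, trace["reranked"])
    return trace  # inspect intermediate stages when the answer looks wrong

corpus = ["creatine and memory study", "unrelated paper", "creatine dosing"]
```

If the answer is bad, an empty `trace["retrieved"]` points at retrieval, while a bad ordering in `trace["reranked"]` points at the ranker; the all-in-context alternative offers no such seam to inspect.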
You really don't want extra information to be introduced there. In other cases, you're just trying to figure out the truth, and you're giving the model the documents because you think they will help it figure out what the truth is. There, if the model has a hunch that there might be something that's not in the papers, you do want to surface that. Ideally, though, you still don't want the model to just tell you; probably the ideal thing looks a bit more like agent control, where the model can issue a query that is intended to surface documents that substantiate its hunch. That's maybe a reasonable middle ground between the model just telling you and the model being fully limited to the papers you give it.

Jungwon [00:46:44]: Yeah, I would say they're just kind of different tasks right now. The task that Elicit is mostly focused on is, what do these papers say? But there's another task, which is, just give me the best possible answer. And giving the best possible answer sometimes depends on what these papers say, but it can also depend on other stuff that's not in the papers. So ideally we can do both, and then kind of do this overall task for you more going forward.

Alessio [00:47:08]: We've covered a lot of details, but just to zoom back out a little bit: what are maybe the most underrated features of Elicit, and what is one thing that users have surprised you the most by using?

Jungwon [00:47:19]: I think the most powerful feature of Elicit is the ability to extract, to add columns to this table, which effectively extracts data from all of your papers at once. It's well used, but there are many different extensions of that that I think users are still discovering. So one is that we let you give a description of the column. We let you give instructions for a column. We let you create custom columns. We have 30-plus predefined fields that users can extract, like: what were the methods? What were the main findings?
How many people were studied? And we actually show you basically the prompts that we're using to
Today, we're joined by Gil Kaminski, co-founder of Humelan, a Public Benefit Corporation dedicated to guiding individuals through hearing health and wellness. Humelan offers a person-centered approach, community support, and an AI companion app for hearing health. Inspired to address the challenges faced by those with hearing loss, Humelan's vision is to empower human potential through communication equity. On the podcast, Gil shares the backstory of how Humelan got started as well as the powerful mission, vision, and goals. Visit humelan.com, join online communities, follow on social media, visit the knowledge hub, and attend virtual events to get connected. Thank you for tuning in, and stay engaged with Humelan's mission to empower communication equity! Visit Humelan at https://www.humelan.com/ Connect with Gil Kaminski: https://www.linkedin.com/in/gilkaminski/ You can listen to this episode wherever you stream podcasts and at www.3cdigitalmedianetwork.com/empowear-audiology-podcast
This week's episode, I chat with Chris Zeunstrom, the Founder and CEO of Ruca and Yorba. Ruca is a global design cooperative and founder support network, while Yorba is a reverse CRM that aims to reduce your digital footprint and keep your personal information safe. Through his businesses, Chris focuses on solving common problems and creating innovative products. In our conversation, we talk about building a privacy-first company, the digital minimalist movement, and the future of decentralized identity and storage.

Chris shares his journey as a privacy-focused entrepreneur and his mission to prioritize privacy and decentralization in managing personal data. He also explains the digital minimalist movement and why its teachings reach beyond the industry. Chris touches on Yorba's collaboration with Consumer Reports to implement Permission Slip and creating a Data Rights Protocol ecosystem that automates data deletion for consumers. Chris also emphasizes the benefits of decentralized identity and storage solutions in improving personal privacy and security. Finally, he gives you a sneak peek at what's next in store for Yorba.

Topics Covered:
- How Yorba was designed as a privacy-first consumer CRM platform; the problems that Yorba solves; and key product functionality & privacy features
- Why Chris decided to bring a consumer product to market for privacy rather than a B2B product
- Why Chris incorporated Yorba as a 'Public Benefit Corporation' (PBC) and sought B Corp status
- Exploring 'Digital Minimalism'
- How Yorba is working with Consumer Reports to advance the CR Data Rights Protocol, leveraging 'Permission Slip', an authorized agent for consumers to submit data deletion requests
- The architectural design decisions behind Yorba's personal CRM system
- The benefits of using Matomo Analytics or Fathom Analytics for greater privacy vs. using Google Analytics
- The privacy benefits of deploying 'Decentralized Identity' & 'Decentralized Storage' architectures
- Chris' vision for the next stage of the Internet; and the future of Yorba

Guest Info:
- Follow/Connect with Chris on LinkedIn
- Check out Yorba's website

Resources Mentioned:
- Read: TechCrunch's review of Yorba
- Read: 'Digital Minimalism: Choosing a Focused Life In a Noisy World' by Cal Newport
- Subscribe to the Bullet Journal (AKA Bujo) on Digital Minimalism by Ryder Carroll
- Learn about Consumer Reports' Permission Slip Protocol
- Check out Matomo Analytics and Fathom for privacy-first analytics platforms

Privado.ai: Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
TRU Staffing Partners: Top privacy talent, when you need it, where you need it.
Shifting Privacy Left Media: Where privacy engineers gather, share, & learn.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.
In this episode, Peter and Will dive into satellite technology, what it takes to create a company like Planet, and its effect on ecosystems across the world. 11:56 | AI Revolutionizes Satellite Technology 21:47 | Will's Exponential Journey to Success 39:15 | Groundbreaking Rocket Launch Technology Improvements Will Marshall, Chairman, Co-Founder, and CEO of Planet, transitioned from a scientist at NASA to an entrepreneur, leading the company from its inception in a garage to a public entity with over 800 staff. With a background in physics and extensive experience in space technology, he has been instrumental in steering Planet towards its mission of propelling humanity towards sustainability and security, as outlined in its Public Benefit Corporation charter. Recognized for his contributions to the field, Marshall serves on the board of the Open Lunar Foundation and was honored as a Young Global Leader by the World Economic Forum. Learn more about Planet ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ Use my code PETER25 for 25% off your first month's supply of Seed's DS-01® Daily Synbiotic: seed.com/moonshots Learn about my executive summit, Abundance360 2025: https://www.abundance360.com/summit _____________ Get my new Longevity Practices 2024 book: https://bit.ly/48Hv1j6 I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
My guest is Alex Fink! Alex is a Tech Executive, Silicon Valley Expat, and the Founder and CEO of the Otherweb, a Public Benefit Corporation that uses AI to help people read news and commentary, listen to podcasts, and search the web without paywalls, clickbait, ads, autoplaying videos, affiliate links, or any other junk. Social and Website: LinkedIn: https://www.linkedin.com/in/temuchin43/ Company LinkedIn: https://www.linkedin.com/company/otherweb/ Website: https://otherweb.com/ Follow Digital Niche Agency on Socials for Up-To-Date Marketing Expertise and Insights: Facebook: https://www.facebook.com/digitalniche... LinkedIn: https://www.linkedin.com/company/digi... Instagram: @digitalnicheagency Twitter: https://twitter.com/DNAgency_CA YouTube: https://www.youtube.com/channel/UCDlz…
Do you want to activate your heroic potential and truly make a positive impact on the world? In this latest episode of The Happy Hustle Podcast, I had the pleasure of chatting with none other than Brian Johnson, the founder and CEO of Heroic Public Benefit Corporation. Brian is not just 50% philosopher and 50% CEO, but he's also 101% committed to creating a world where 51% of humanity is flourishing by 2051. Now, that's a vision worth tuning in for! In this episode, we dive deep into his latest book, "Arete," which is all about activating your heroic potential. Trust me, it's one of the best books I've ever read, and I don't say that lightly. I even went a bit fanboy because I was so inspired by his work. We cover topics like building anti-fragile confidence, living with Arete, and creating a successful SaaS business. Brian shares insights on making your prior best your new baseline, the inspiration behind the Heroic app, and the importance of integrity in decision-making as an entrepreneur. But it doesn't stop there: Brian has a social app called Heroic that can help you operationalize ancient wisdom and modern science to transform your life. I highly recommend checking it out at heroic.us. Ready to unlock your heroic potential? Don't miss out on this conversation that might just change the game for you.
In this episode, you will learn: 01:10 | Introduction to Arête and Activating Heroic Potential 04:50 | Building Anti-Fragile Confidence 08:51 | Living with Arête 18:37 | Activating Heroic Potential in Energy, Work, and Love 21:03 | Making Your Prior Best Your New Baseline 23:54 | Creating the Heroic App and Crowdfunding Success 27:34 | Building a Successful SaaS Business 32:12 | Inspiration from The Social Dilemma 36:58 | Integrity and Decision-Making as an Entrepreneur 39:45 | Creating a Publishing Partnership 41:22 | Cultivating Emotional Stamina 43:56 | The Heroic App and Book 48:26 | The Importance of Consciousness in Money 50:15 | The Power of Basic Practices for Spirituality 54:35 | Happy Hustle Hacks and Rapid Fire Round What does Happy Hustlin' mean to you? Brian says Happy Hustlin' is that joyful state, deeper than happy. There's a joy, there's a meaning, there's a purpose, there's a growth to give. Connect with Brian: https://www.instagram.com/heroicbrian https://www.facebook.com/heroicbrian https://www.youtube.com/@HeroicBrian https://www.linkedin.com/in/heroicbrian https://twitter.com/heroicbrian Find Brian on his website: https://www.heroic.us/ Connect with Cary: https://www.instagram.com/cary__jack/ https://www.facebook.com/SirCaryJack https://www.linkedin.com/in/cary-jack-kendzior/ https://twitter.com/thehappyhustle https://www.youtube.com/channel/UCFDNsD59tLxv2JfEuSsNMOQ/featured Get a copy of his new book, The Happy Hustle: 10 Alignments to Avoid Burnout & Achieve Blissful Balance (https://www.thehappyhustle.com/book). Sign up for The Journey: 10 Days To Become a Happy Hustler Online Course (https://thehappyhustle.com/thejourney/). Apply to the Montana Mastermind Epic Camping Adventure (https://thehappyhustle.com/mastermind/). "It's time to Happy Hustle, a blissfully balanced life you love, full of passion, purpose, and positive impact!" Episode sponsor: BIOptimizers Magnesium Breakthrough (https://magbreakthrough.com/vip?gl=65132c943f5d60f00f8b4567&coupon=hustle). This stuff is a game-changer!
Magnesium Breakthrough packs all 7 forms of magnesium, designed to support stress management, promote muscle relaxation, regulate the nervous system, control stress hormones, boost brain function, increase energy, and enhance sleep. I take 2 capsules before bedtime, and it's been a game-changer for me. The best part is, BIOptimizers offers a risk-free 365-day money-back guarantee. No results, no problem – they'll refund you, no questions asked. It's a win-win! Head over to magnesiumbreakthrough.com/hustle and use code "hustle" for an exclusive 10% discount on any order. Plus, for a limited time, you'll score some special gifts with your purchase. BELAY Solutions (https://resources.belaysolutions.com/happyhustle?utm_campaign=The%20Happy%20Hustle&utm_source=email&utm_medium=Email%20Newsletter%20-%20D2E&utm_content=HH%20Native%20Ad%201): BELAY is a flexible staffing solution that helps busy leaders find the right help. Their bench of exceptional U.S.-based Virtual Assistants can handle the tedious tasks you no longer have time to accomplish. My experience with BELAY has been a game-changer for my business. They have a talented team of VAs, Bookkeepers, and Social Media Managers that I'm sure can help you and your business. To learn more about how to delegate, download a free copy of BELAY's ebook, Delegate to Elevate, to get back to what you do best with BELAY! (https://resources.belaysolutions.com/happyhustle?utm_campaign=The%20Happy%20Hustle&utm_source=email&utm_medium=Email%20Newsletter%20-%20D2E&utm_content=HH%20Native%20Ad%201)
The Word in Black website states a mission: "To be the most trusted news and information source for, about, and by Black people." Word in Black was founded on June 7, 2021, shortly after the entire world witnessed the tragic death of George Floyd, when 10 of the nation's most prestigious Black legacy newspaper publishers joined together to launch a collaborative online news presence, serving their local readers while combining their resources and content into a single branded platform. Word in Black was initially incubated inside the Local Media Foundation (LMF), a 501(c)(3) organization affiliated with the Local Media Association. As of January 1, 2024, as part of the original plan, the foundation sold the assets to Word in Black's newly formed public benefit company and will continue to provide support as a shareholder in the new entity. Today, Word in Black boasts a newsroom with 10 full-time journalists and freelancers covering topics including health, education, finance, climate justice, religion and more, all outlined in their recently published Impact Report. The 10 founding newspapers are: · AFRO News · The Atlanta Voice · Dallas Weekly · Houston Defender · Michigan Chronicle · New York Amsterdam News · The Sacramento Observer · The Seattle Medium · The St. Louis American · The Washington Informer In this episode of "E&P Reports," we explore Word in Black, the three-year-old online news collaboration of 10 of the most prestigious Black newspapers in America, which recently announced its transition to public benefit company status. Appearing along with Nancy Lane, co-CEO of the Local Media Association, whose foundation helped incubate the project, are founding members Dr. Frances "Toni" Draper, CEO and publisher of AFRO News; Elinor R. Tatum, publisher and editor-in-chief of the New York Amsterdam News; and Patrick Washington, CEO/co-publisher of the Dallas Weekly.
https://www.linkedin.com/in/temuchin43 About Alex Fink: Alex Fink is a Silicon Valley Expat and the Founder of the Otherweb, a Public Benefit Corporation working to combat the harms of excessive media consumption. He is here today sharing how people of all ages can sleep better, reduce stress, and improve brain health and mental wellness through mindful media consumption. What We Discuss In this Episode: Excessive media consumption has been linked to anxiety, stress, and sleep issues; Alex explains why he thinks this is such a problem today. Alex explains why he thinks Silicon Valley is part of the problem. Alex shares how we can reduce stress and improve sleep through mindful media consumption, and exactly what "mindful media consumption" is. What choices can we make in our media consumption to lower stress and sleep better (but still stay informed)? How people can reduce stress and improve mental health by balancing their "Digital Diet." Alex shares PRACTICAL ADVICE about how people can improve their mental health and well-being by balancing their digital diet. He also has VALUABLE INFO & ADVICE about why the information ecosystem (aka the internet) has become a polarized breeding ground for controversy and misinformation, and how people can mine it for truth instead. Key Takeaways: · The prevalence of smartphone usage and the differences between Apple and Android. · Mindful media consumption for better sleep, reduced stress, and improved mental wellness. · How addiction to digital media and exposure to stimulating content before bed can lead to sleep issues and affect overall well-being. · The negative impact of digital media on our attention and mental health. Resources from Alex Fink: Are you "Fake News" savvy? 5 Ways to Spot "Fake News" Articles Online.
Download Otherweb for FREE: https://otherweb.com Connect With Alex Fink: Website: https://otherweb.com Apple: https://apps.apple.com/app/otherweb/id6443798894 Google Play: https://play.google.com/store/apps/details?id=com.otherweb LinkedIn: https://www.linkedin.com/in/temuchin43 Connect with Lynne: If you're looking for a community of like-minded women on a journey - just like you are - to improved health and wellness, overall balance, and increased confidence, check out Lynne's private community in The Energized Healthy Women's Club. It's a supportive and collaborative community where the women in this group share tips and solutions for a healthy and holistic lifestyle. (Discussions include things like weight management, eliminating belly bloat, balancing hormones, wrangling sugar gremlins, overcoming fatigue, recipes, strategies, perimenopause & menopause, and much more ... so women can feel energized, healthy, and lighter, with a new sense of purpose.) Website: https://holistic-healthandwellness.com Facebook: https://www.facebook.com/holistichealthandwellnessllc The Energized Healthy Women's Club: https://www.facebook.com/groups/energized.healthy.women Instagram: https://www.instagram.com/lynnewadsworth LinkedIn: https://www.linkedin.com/in/lynnewadsworth Free Resources from Lynne Wadsworth: How to Thrive in Menopause: MENOPAUSE Messing Up your life? Maybe you're seeing the number on the scale creep higher and higher and you're noticing your usual efforts to lose weight aren't working. Then there's the hot blazes, night sweats, and sleeping fitfully, not to mention that you're fighting tears one moment, raging the next, and then the shameful guilt sets in because you've just blasted your partner – for nothing…again! Learn how to successfully and holistically navigate perimenopause and full-blown menopause, and even reconcile all the hormonal changes and challenges that go along with it.
You'll be feeling energized, healthier, and more in control so you can take on your day confidently and live life joyfully – even in menopause. I've got this FREE solution tool for you. Download my guide here: https://holistic-healthandwellness.com/thrive-through-menopause/ 5 Simple Steps to Gain Energy, Feel Great & Uplevel Your Health: Are you ready to create a Healthier Lifestyle? Would you like to feel lighter, more energized, and even add joy to your life? If it's time to find more balance of mind~body~soul, then I've got the perfect FREE resource to help. In this guide, you'll find my most impactful strategies and I've made applying them in your life as simple as 1-2-3 (plus a couple more) to help you create a healthier, holistic lifestyle. Uplevel your holistic health and wellness and download the 5 Simple Steps to Health here: https://holistic-healthandwellness.com/5-simple-steps-to-a-healthier-you/ Did You Enjoy The Podcast? If you enjoyed this episode please let us know! 5-star reviews for the Living Life Naturally podcast on Apple Podcasts, Spotify, or Pandora are greatly appreciated. This helps us reach more women struggling to live through midlife and beyond. Thank you. Together, we make a difference!
The ADA releases its new 2024 guidance for the management of diabetes; a new blood collection device obtains lab-quality samples from the finger; a novel treatment for postpartum depression becomes available; a study for a male birth control pill begins; and a Public Benefit Corporation seeks approval for an MDMA-assisted therapy.
Keep going with your gains. Do them consistently, especially when you don't feel like it, and significant, fundamental, permanent changes can be installed. Explore the hero's journey with author and entrepreneur Brian Johnson (@BrianJohnson). Brian is the founder and CEO of Heroic, a Public Benefit Corporation dedicated to propelling humanity towards a flourishing future. "The most latent desire I've always had is how can I live a great life in service to something bigger than myself and help others do that?" - Brian Johnson Key Takeaways: Anti-fragile Confidence: Building unbreakable confidence by keeping commitments to oneself, leveraging challenges to grow stronger rather than breaking. If you lack integrity in your word, you cannot trust yourself, and that erodes your confidence. To have trust, you must do what you say in any relationship, whether with yourself or others. The Hero's Journey: The three phases of the heroic journey are the call to adventure, the facing of challenges, and the return to the ordinary world. Each stage evolves the hero's consciousness. This journey isn't a one-time event but a continuous theme in life: answering the call repeatedly to develop and live a life from the heart in service. 8 Virtues: To live your best life and align with your heroic journey, there are 8 virtues to live by: wisdom, discipline, love, courage, gratitude, hope, curiosity, and zest. These 8 virtues come from ancient philosophy and modern science. The Ultimate Game: The ultimate game of life is to choose wisely and to be your best self in service to something bigger than yourself. To do this, you need to forge anti-fragile confidence and recognize that the hero's journey is supposed to be complicated. Sponsors and Promotions: ZBiotics: ZBiotics Pre-Alcohol Probiotic Drink is the world's first genetically engineered probiotic.
Go to zbiotics.com/DIVINE to get 15% off your first order, and use code DIVINE. MasterClass: Find practical takeaways you can apply to your life and at work with MasterClass. Give One Annual Membership and Get One Free at MasterClass.com/DIVINE. Links for Brian Johnson: Website Instagram LinkedIn
Join host Ned Buskirk in conversation with Katrina Spade, founder & CEO of Recompose, a Public Benefit Corporation based in Seattle and the world's first human composting company, as they talk about her work with Recompose, the history that led to it, and the option of returning our bodies to the earth via composting. Katrina Spade's IG: https://www.instagram.com/katrinaspade/ TED talk: https://www.ted.com/talks/katrina_spade_when_i_die_recompose_me Recompose's website: https://recompose.life/ IG: https://www.instagram.com/recomposelife/ FB: https://www.facebook.com/recomposelife/ Newsletter: https://recompose.life/#signup Produced by Nick Jaina. Soundscaping by Nick Jaina. "YG2D Podcast Theme Song" by Nick Jaina. THIS PODCAST IS MADE POSSIBLE WITH SUPPORT FROM LISTENERS LIKE YOU. Become a podcast patron now at https://www.patreon.com/YG2D.
In this episode of Hope Natural Health, Dr. Erin speaks with Alex Fink about nutrition labels. Alex Fink is a Tech Executive, Silicon Valley Expat, and the Founder and CEO of the Otherweb, a Public Benefit Corporation that (among other things) generates a "nutrition label" for media content so people can be more informed about the content they consume online. The Otherweb is also available as an app, a website, a newsletter, or a standalone browser extension. After a long career in Silicon Valley as a tech executive in a variety of startups, Alex decided that instead of contributing to the "Social Dilemma" (a problem created by Silicon Valley with the advent of social media, in which clickbait is incentivized), he would rather build the solution using Silicon Valley's own methods. He moved to Austin, rolled up his sleeves, and a few years later the Otherweb was born. During this episode you will learn about: Why fake news is so prevalent; Whether a computer really can tell the difference between truth and fiction; How you could use a "nutrition label" for news content. Website: https://otherweb.com/ Social media: Twitter: https://twitter.com/valurank Link to Testing: https://hopenaturalhealth.wellproz.com/ Link to Period Planner: https://www.amazon.com/dp/B0BBYBRT5Q?ref_=pe_3052080_397514860 For more on Dr. Erin and Hope Natural Health: Take the Period Quiz: https://form.jotform.com/230368188751059 Check out my Hormone Reset Program: https://hopenaturalhealth.practicebetter.io/#/619ef36b398033103c7b6bf9/bookings?p=633b5cca8019b9e8d6c3518d&step=package Dr. Erin on Instagram: https://www.instagram.com/dr.erinellis/ Dr. Erin's Website: https://hopenaturalhealth.com/ Hope Natural Health on YouTube: https://www.youtube.com/channel/UChHYVmNEu5tKu91EATHhEiA Follow Hope Natural Health on FB: https://www.facebook.com/hopenaturalhealth
Of the many hopeful developments in psychedelic research in recent years, perhaps the most important is that FDA approval of MDMA-assisted therapy for treating post-traumatic stress disorder appears likely within the next year. That prospect is due in no small part to our Raise the Line guest, Dr. Michael Mithoefer, who has spearheaded clinical trials of MDMA-assisted therapy for more than twenty years and is a senior leader at the Multidisciplinary Association for Psychedelic Studies Public Benefit Corporation, which has led this groundbreaking research. Although he notes that FDA approval isn't guaranteed, Dr. Mithoefer is contemplating the practicalities of implementing this multi-stage therapy regimen, and he has cause for concern. "I think now the question is going to be, if it's approved, how does it fit into this medical system we have, which I think is quite dysfunctional, especially with mental health. To me, the challenge is going to be not to try to distort the treatment to fit the system," he tells host Shiv Gaglani. In this enlightening conversation, Shiv and Dr. Mithoefer discuss the need for specialized therapist training, the importance of making the therapy available regardless of ability to pay, and other potential therapeutic uses for MDMA. This is a great opportunity to hear from an important voice about the current and future state of psychedelics as a treatment modality. Mentioned in this episode: https://mapsbcorp.com/
Join us as we chat with Shanaz Hemmati, the COO of ZenBusiness, who gives us a fascinating insight into the workings of a public benefit corporation that generates more than $100M in ARR. Listen to her as she speaks about the various initiatives and programs that ZenBusiness undertakes to give back to the community and society. From a grant program to support their customers to her role evolution from CIO to COO, Shanaz's experiences shed light on the strategies they use to navigate their responsibilities within a small team. In this chat, we also explore the potential of generative AI to increase speed, accuracy, scalability, complexity, and accessibility of structured language, and how ZenBusiness leverages this to expand capacity. From discussing the importance of data protection to understanding the potential threats to businesses, Shanaz provides a comprehensive overview of the operations side of the company, including their customer support in the contact center. Shanaz also talks about the company's organizational structure, with CTO, HR, customer success, legal, program management, procurement, and corporate citizenship all reporting to her. Learn about the process of hiring great people, and how her passion and love for what she does is the key to success. She also shares her experiences with M&A, and how she and her team at HomeAway used the strategy of acquiring existing vacation rental services to make the company global. From tracking the macroeconomic environment to discussing the advantages of remote working, Shanaz provides a wealth of information for anyone interested in understanding the workings of a successful business.
Welcome back, dear listeners, to another enlightening episode of "Healthy Mind, Healthy Life." I'm your host, Avik, and today we have a very special guest joining us. But before we dive into our discussion, let's take a moment to reflect on the impact of media consumption on our mental well-being. Today, we have the privilege of hosting a true pioneer in this field. Please welcome our guest, Alex Fink, a Silicon Valley expat and the founder of The Otherweb, a Public Benefit Corporation dedicated to combating the negative impact of excessive media consumption. Excessive media consumption has become an integral part of our daily lives, but studies have shown that it can have detrimental effects on our mental health. Anxiety, stress, and other mental health issues often arise as we find ourselves increasingly tethered to our screens, consuming vast amounts of information without mindfulness. So, grab a cup of tea, find a comfortable spot, and get ready to be inspired by the incredible Alex Fink. Let's dive into the world of "Healthy Mind, Healthy Life." Before we begin, make sure to subscribe to our podcast to receive updates on future episodes, and if you enjoy what you hear today, consider leaving us a review. Your feedback means the world to us! During this captivating conversation, we cover a wide range of topics, including: Can you share with us the key findings from studies that have linked excessive media consumption to anxiety, stress, and other mental health issues? What motivated you to found The Otherweb and dedicate your work to combating the negative impacts of media consumption? How would you define mindful media consumption, and why is it crucial for nurturing mental wellness in the digital age? What choices can we make in our media consumption to lower stress (but still stay informed)? Can you provide some practical strategies and insights for creating a more mindful media environment that supports mental well-being?
In today's fast-paced digital world, how can individuals strike a balance between staying informed and avoiding information overload or fatigue? How do you envision the future of media consumption and its potential impact on mental wellness? Are there any emerging trends or technologies that we should be aware of? Stay tuned for our future episodes, where we'll continue to explore the connection between a healthy mind and a healthy life, featuring experts, inspiring stories, and practical tips to support your well-being journey. Get full access to Healthy Mind, Healthy Life at healthymindbyavik.substack.com/subscribe