POPULARITY
This week on Upstream, we're releasing an episode of The Riff (originally aired on March 25, 2025) where Erik Torenberg and Byrne Hobart discuss Don Jr.'s leveraging of the Trump brand, Dustin Moskovitz's departure to address AI risks, economic impacts of various policies, and geopolitical considerations with a focus on AI and China's tech ambitions. —
This week, Byrne Hobart and Erik Torenberg explore Donald Trump Jr.'s financial leverage of the Trump brand, Dustin Moskovitz's shift toward AI existential risks, Elon Musk's strategic decisions with Twitter, potential U.S. recessions, political party shifts, China's AI policy, and the broader impact of these developments on Wall Street and Main Street. ---
This Sunday Facebook turned 20 years old. It was on February 4, 2004, that five Harvard students decided to create a social network where students at their university could connect with one another, aiming to improve on the social networks that already existed. The power struggles among Mark Zuckerberg, Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes were masterfully portrayed by David Fincher in the film The Social Network. It was the company's commercial expansion that blew up the relationship between Mark Zuckerberg and Eduardo Saverin. It was through the courts that Saverin got his way, obtaining a 7% stake in Facebook and the right to be credited as a co-founder of the social network. Once these ownership disputes were settled, Facebook quickly became people's favorite way to chat and stay connected, rapidly displacing other social networks such as MySpace and Friendster. Here in Spain its reign was briefly threatened by Tuenti, but when Telefónica acquired it, it quickly turned it into a mobile phone carrier. In 2008 Facebook became the most-used social network, overtaking MySpace, with more than a million users in Spain. The history of Meta is also a history of big acquisitions. The most recent major one was the online GIF database Giphy, but the deals that truly changed the company's landscape and made it the undisputed queen of social media were the purchases of Instagram in 2012 for $1 billion and WhatsApp in 2014 for $19 billion. In fact, in 2021 the company renamed itself Meta to reflect its broader, multi-product character. These moves have also brought its creator, Mark Zuckerberg, several legal problems. In 2018 the Cambridge Analytica scandal broke: during the 2010s the consultancy harvested data from millions of people through Facebook. Zuckerberg apologized but has always denied that this data influenced the 2016 elections. Where does Facebook stand right now? Among the changes announced by Meta, Facebook will introduce X-style community notes in 2025, saying goodbye to its content-verification program. Facebook will also prioritize the quality of interactions over simple clicks or impressions.
Asana is a project and task management tool that makes it easy to plan, track, and collaborate on work across teams. Founded in 2008 by Dustin Moskovitz and Justin Rosenstein, Asana has established itself as a leading platform for organizing and managing work. Key features: Task management: lets you create tasks with due dates, assign them to team members, and set priority levels. Project views: offers multiple views, including list, Kanban board, timeline, and calendar, adapting to users' preferences. Automation: makes it easy to automate repetitive processes and tasks using rules and triggers, streamlining workflows. Integrations: integrates with more than 100 applications, including Slack, Dropbox, and Google Drive, centralizing information and improving productivity. Artificial intelligence: includes AI-based features that suggest approaches, automate routine tasks, and speed up decision-making. Advantages: Free plan: offers a free tier for teams of up to 15 users, with no limits on the number of tasks and projects, and with access to multiple views and mobile apps. Intuitive interface: a simple, easy-to-use design with contextual menus and an intuitive dashboard. Effective collaboration: enables direct communication within projects, file sharing, and progress tracking, improving teamwork. Disadvantages: Learning curve: some features can be hard for new users to find or configure. Task assignment: a task cannot be assigned to more than one user, which can be limiting in collaborative projects. Time tracking: time-tracking features are only available on higher-tier plans, which can be a limitation for some teams. Plans and pricing: Asana offers several plans to suit different teams' needs: Free: for teams of up to 15 users, includes basic task and project management features. Premium: from €10.99 per user per month, adds advanced features such as timelines and custom fields. Business: from €24.99 per user per month, includes tools such as portfolios and workload management. Enterprise: offers advanced security features and personalized support; pricing varies depending on the organization's needs. In short, Asana is a versatile tool that improves organization and efficiency in project and task management, adapting to teams of different sizes and needs. Download links: Google Play Store, Apple App Store
pWotD Episode 2821: Facebook Welcome to Popular Wiki of the Day, spotlighting Wikipedia's most visited pages, giving you a peek into what the world is curious about today. With 967,374 views on Tuesday, 21 January 2025 our article of the day is Facebook. Facebook is a social media and social networking service owned by the American technology conglomerate Meta. Created in 2004 by Mark Zuckerberg with four other Harvard College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name derives from the face book directories often given to American university students. Membership was initially limited to Harvard students, gradually expanding to other North American universities. Since 2006, Facebook allows everyone to register from 13 years old, except in the case of a handful of nations, where the age requirement is 14 years. As of December 2023, Facebook claimed almost 3.07 billion monthly active users worldwide. As of November 2024, Facebook ranked as the third-most-visited website in the world, with 23% of its traffic coming from the United States. It was the most downloaded mobile app of the 2010s. Facebook can be accessed from devices with Internet connectivity, such as personal computers, tablets and smartphones. After registering, users can create a profile revealing personal information about themselves. They can post text, photos and multimedia which are shared with any other users who have agreed to be their friend or, with different privacy settings, publicly. Users can also communicate directly with each other with Messenger, edit messages (within 15 minutes after sending), join common-interest groups, and receive notifications on the activities of their Facebook friends and the pages they follow. The subject of numerous controversies and lawsuits, Facebook has often been criticized over issues such as user privacy (as with the Cambridge Analytica data scandal), political manipulation (as with the 2016 U.S. elections) and mass surveillance. The company has also been subject to criticism over its psychological effects such as addiction and low self-esteem, and over content such as fake news, conspiracy theories, copyright infringement, and hate speech. Commentators have accused Facebook of willingly facilitating the spread of such content, as well as exaggerating its number of users to appeal to advertisers. This recording reflects the Wikipedia text as of 01:22 UTC on Wednesday, 22 January 2025. For the full current version of the article, see Facebook on Wikipedia. This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License. Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes. Follow us on Mastodon at @wikioftheday@masto.ai. Also check out Curmudgeon's Corner, a current events podcast. Until next time, I'm long-form Gregory.
David Callahan is a prolific creator and thinker within Democratic politics. He helped start the progressive think tank Demos in the late 90s, founded the media outlet Inside Philanthropy as a Consumer Reports of sorts into the world of charitable giving, and more recently created Blue Tent - an advisory group to help progressive donors get the most bang for their buck. In this conversation, David talks about his early days in politics focused on foreign policy, his next stint as a think-tanker trying to pull the Democratic Party left, and why he's more recently been focused on the world of political giving. David is one of the most informed people on the planet on all facets of the political donor world and provides a tour de force on both the current state of play and future trends to better understand how our politics are funded. IN THIS EPISODE: Growing up in New York as the child of academics...An early experience that showed David he was not cut out to be an activist...A formative year spent at the liberal magazine, The American Prospect...David talks about getting his PhD and his recommendations for those considering academia...David helps found the progressive think tank Demos and talks about the role of think tanks in American politics...What led David to start Inside Philanthropy, a media outlet dedicated to understanding political fundraising...The disturbing trend in political giving that led David to start Blue Tent, a resource for progressive donors...How David and Blue Tent determine where donors will get the most bang for their buck...Why David is an advocate of giving to organizations instead of candidates...David on the phenomenon of "rage giving"...Are donors pulling Democratic candidates to the left? Has Democratic giving fallen off this cycle? David's concern about too many advocacy groups and donor fragmentation on the left compared to more unanimity on the right...David de-mystifies the world of big "donor advisors"...David on the Soros factor on the left...The rough balance of spending from the right vs. spending from the left...The types of operatives who succeed in the donor advising space...The political novel David wrote in the late 90s that eerily predicted elements of both the 9/11 attacks and the rise of a Donald Trump-like politician... AND AOC, Stacey Abrams, Miriam Adelson, The American Enterprise Institute, The American Liberties Project, The American Prospect Magazine, Arabella Advisors, Joe Biden, bioethics, Michael Bloomberg, bureaucratic machinations, the Cato Institute, the Center for Voter Information, Bill Clinton, The Committee on States, credential firepower, the DLC, The Democracy Alliance, Michael Dukakis, The Economic Policy Institute, effective altruism, Federalist Society, Marcus Flowers, Focus for Democracy, Frederick Forsyth, Forward Montana, GiveWell, giving circles, Al Gore, Lindsey Graham, Stanley Greenberg, Jaime Harrison, Hastings-on-Hudson, the Heritage Foundation, Hezbollah, Indian Point Power Plant, Indivisible, the Koch Brothers, LUCHA, Mitch McConnell, Amy McGrath, Michigan United, Mind the Gap, Dustin Moskovitz, Movement Voter Project, neoliberal mindsets, The New America Foundation, Paul Nitze, NYPIRG, Beto O'Rourke, Open Markets, RCTs, Ronald Reagan, The Roosevelt Institute, Run for Something, saber-rattling, Sandinistas, Adam Schiff, Star Wars, the States Project, Swing Left, Marjorie Taylor Greene, transactional donors, Way to Win, Working America & more!
Welcome to The Eric Ries Show. I sat down with Dustin Moskovitz, founder of not one but two iconic companies: Facebook and the collaborative work platform Asana. Needless to say, he's engaged in the most intense form of entrepreneurship there is. A huge part of what he's chosen to do with the hard-earned knowledge it gave him is dedicate himself and Asana to investing in employees' mental health, communication skills, and more. All of this matters to Dustin on a human level, but he also explains why putting people first is the only way to get the kind of results most founders can only dream of. We talked about how to get into that flow state, why preserving culture is crucial, his leadership style and how he decides when to be hands-on versus when to delegate, and how Asana reflects what he's learned about supporting people at all levels. Dustin sums up the work Asana does this way: “Our individual practices are meant to restore coherence for the individual, our team practices are meant to restore coherence for the team, and Asana, the system, is meant to try and do it for the entire organization.” I'm delighted to share our conversation, which also covers: • How he uses AI and views its future • Why he founded a collaboration platform • How he applied the lessons of Facebook to building Asana • Why taking care of your mental health as a founder is crucial for the company as a whole • His thoughts on the evolution of Facebook • The importance of alignment with investors • His philanthropic work • And so much more — Brought to you by: Mercury – The art of simplified finances. Learn more. DigitalOcean – The cloud loved by developers and founders alike. Sign up. Neo4j – The graph database and analytics leader. Learn more. — Where to find Dustin Moskovitz: • LinkedIn: https://www.linkedin.com/in/dmoskov/ • Threads: https://www.threads.net/@moskov • Asana: https://asana.com/leadership#moskovitz Where to find Eric: • Newsletter: https://ericries.carrd.co/ • Podcast: https://ericriesshow.com/ • X: https://twitter.com/ericries • LinkedIn: https://www.linkedin.com/in/eries/ • YouTube: https://www.youtube.com/@theericriesshow — In This Episode We Cover: (00:00) Welcome to the Eric Ries Show (00:31) Meet our guest Dustin Moskovitz (04:02) How Dustin is using AI for creative projects (05:31) Dustin talks about the social media and SaaS era and his Facebook days (06:52) How Facebook has evolved from its original intention (10:27) The founding of Asana (14:35) Building entrepreneurial confidence (19:22) Making – and fixing – design errors at Asana (20:32) The importance of committing to “soft” values. 
(25:27) Short-term profit over people and terrible advice from VCs (28:44) Crypto as a caricature of extractive behavior (30:47) The positive impacts of doing things with purpose (34:24) How Asana is ensuring its purpose and mission are permanently enshrined in the company (41:35) Battling entropy and meeting culture (44:31) Being employee-centric, the flow state, and Asana's strategy (47:51) The organizational equivalent of repressing emotions (52:57) Dustin as a Cassandra (56:51) Dustin talks about his philanthropic work and philosophy: Open Philanthropy, Good Ventures (1:02:05) Dustin's thoughts on AI and its future (1:07:20) Ethics, calculated risk, and thinking long-term — Referenced: Asana: https://asana.com/ Conscious Leadership Group: https://conscious.is/ Ben Horowitz on managing your own psychology: https://a16z.com/whats-the-most-difficult-ceo-skill-managing-your-own-psychology/ The Infinite Game, by Simon Sinek; Dr. John Sarno; The 15 Commitments of Conscious Leadership; Awareness: Conversations with the Masters, by Anthony de Mello; Brené Brown: Dare to Lead, The Call to Courage (Netflix trailer); Open Philanthropy; Good Ventures; GiveWell — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email jordan@penname.co. Eric may be an investor in the companies discussed.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Claude 3.5 Sonnet, published by Zach Stein-Perlman on June 20, 2024 on LessWrong. we'll be releasing Claude 3.5 Haiku and Claude 3.5 Opus later this year. They made a mini model card. Notably: The UK AISI also conducted pre-deployment testing of a near-final model, and shared their results with the US AI Safety Institute . . . . Additionally, METR did an initial exploration of the model's autonomy-relevant capabilities. It seems that UK AISI only got maximally shallow access, since Anthropic would have said if not, and in particular it mentions "internal research techniques to acquire non-refusal model responses" as internal. This is better than nothing, but it would be unsurprising if an evaluator is unable to elicit dangerous capabilities but users - with much more time and with access to future elicitation techniques - ultimately are. Recall that DeepMind, in contrast, gave "external testing groups . . . . the ability to turn down or turn off safety filters." Anthropic CEO Dario Amodei gave Dustin Moskovitz the impression that Anthropic committed "to not meaningfully advance the frontier with a launch." (Plus Gwern, and others got this impression from Anthropic too.) Perhaps Anthropic does not consider itself bound by this, which might be reasonable - it's quite disappointing that Anthropic hasn't clarified its commitments, particularly after the confusion on this topic around the Claude 3 launch. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Shop Talk uncovers some odd hiring manager tactics to see if you are right for the job. But first, Caught My Eye announces the end of the road for the Subaru Legacy sedan. Also, the NYC portal to Dublin is closed after an OnlyFans model exposes "her potatoes" to the Irish folk across the pond. Dustin Moskovitz, co-founder of Facebook, is our Business Birthday. We're all business. Except when we're not. Apple Podcasts: apple.co/1WwDBrC Spotify: spoti.fi/2pC19B1 iHeart Radio: bit.ly/4aza5LW Tunein: bit.ly/1SE3NMb YouTube Music: bit.ly/43T8Y81 Pandora: pdora.co/2pEfctj YouTube: bit.ly/1spAF5a Also follow Tim and John on: Facebook: www.facebook.com/focusgroupradio Twitter: www.twitter.com/focusgroupradio Instagram: www.instagram.com/focusgroupradio
My book Reframe Your Brain, available now on Amazon https://tinyurl.com/3bwr9fm8 Find my "extra" content on Locals: https://ScottAdams.Locals.com Content: Politics, President Biden Lies, General Flynn Documentary, Dustin Moskovitz, Elon Musk, Biden's AI Advisory Board, President Trump, NY Campaigning, Paul Graham, Institutional Drift, Kristi Noem's Dog, Biden Capital Gains Tax, College Protest Funding, TikTok Ban, Tucker Carlson Election Fraud, SCOTUS Presidential Immunity, Scott Adams ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure. --- Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support
Adapt Deez, a brand new season of GateCrashers, is dedicated to appreciating media adaptations in all their many forms! From the classic book-to-movie adaptations to the many associated iterations in between, episodes of Adapt Deez will focus on a specific property and its (officially licensed) adaptations. Not simply a recounting of the differences and similarities between each adaptation, Adapt Deez aims to highlight the ways in which each iteration shines and how its individual media-specific properties—such as film scores, casting, and packaging—elevate the material and affect the way each work is received. In today's episode, Amanda, Amir, and Jon discuss the Academy Award-winning movie The Social Network. The film—which received eight nominations at the 83rd Academy Awards, including for Best Picture, Best Director, and Best Actor for leading man Jesse Eisenberg, and won for Best Adapted Screenplay, Best Original Score, and Best Film Editing—was released in 2010 by Sony Pictures and directed by David Fincher. The screenplay was written by Aaron Sorkin and adapted from The Accidental Billionaires: The Founding of Facebook, a Tale of Sex, Money, Genius, and Betrayal, a work of narrative nonfiction by Ben Mezrich that was published in 2009 by Doubleday. The Social Network tells the story of the founding of social media service Facebook in 2004 by Harvard College students Mark Zuckerberg, Eduardo Saverin, Dustin Moskovitz, Chris Hughes, and Andrew McCollum. Focusing primarily on the relationship—and falling-out—between Zuckerberg, played by Eisenberg, and Saverin, portrayed by Andrew Garfield in what would become his international breakthrough role, The Social Network spans several years from Facebook's inception to the depositions between Zuckerberg and Saverin, and Zuckerberg and Cameron and Tyler Winklevoss (Armie Hammer/Josh Pence), twins and fellow Harvard students. If you think that sounds dry, just wait until you witness Amanda, Amir, and Jon's dramatic reenactments of iconic scenes—we guarantee you'll be just as riveted by this biographical drama as we were more than a decade ago.
Guest: Anne Raimondi, COO and Head of Business at Asana. Asana COO Anne Raimondi feels pressure to perform in her job "every day, all the time." But that pressure doesn't come from her fellow executives; she imposes it on herself, trying to think carefully about how much each of her decisions will impact her team. "I have a lot of privilege and choice," Anne says, "of how I spend my time, the resources available to me, and am I doing enough? ... Am I doing the most with the opportunities I have, and making as positive an impact as I can?" In this episode, Anne and Joubin discuss returning to the office, Scott McNealy, the dotcom bust, Myers-Briggs, Star Trek: The Next Generation, empowering leaders, Blue Nile, Robert Chatwani, tech leaders with children, Bain Capital, time management, being "in the moment," Dave Goldberg, Dustin Moskovitz, staying curious, and being prescriptive. Chapters: (01:05) - Hybrid remote policies (05:34) - Employees' emotional journey (09:39) - Thoughtful answers and Betazoids (13:17) - Anne's immigrant parents (14:50) - Regrettable feedback (17:46) - Leaders who cast a shadow (19:36) - Company-hopping (24:14) - Startups and stability (28:42) - Pressure to perform (31:08) - Insecurity and parenthood (37:12) - Allocating your time (39:43) - Co-founding One Jackson (45:36) - Amanda Kleha (47:01) - Great founders (52:18) - "It is not glamorous" (54:03) - From board to operating at Asana (57:10) - Feedback for founders (01:00:25) - Recurring meetings (01:03:07) - Who Asana is hiring. Links: Connect with Anne: LinkedIn. Connect with Joubin: Twitter, LinkedIn, Email: grit@kleinerperkins.com. Learn more about Kleiner Perkins. This episode was edited by Eric Johnson from LightningPod.fm
Bill Cohan joins Peter Hamby to break down Donald Trump's financial pickle: Is a declaration of personal bankruptcy truly on the horizon if he can't fork over the $454 million bond payment by Monday? Then Teddy Schleifer and Ben Landy consider how Dustin Moskovitz plans to money bomb D.C. ahead of the '24 election. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
pWotD Episode 2499: Facebook Welcome to popular Wiki of the Day where we read the summary of a popular Wikipedia page every day. With 411,328 views on Tuesday, 5 March 2024 our article of the day is Facebook. Facebook is a social media and social networking service owned by American technology conglomerate Meta Platforms. Created in 2004 by Mark Zuckerberg with four other Harvard College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name derives from the face book directories often given to American university students. Membership was initially limited to Harvard students, gradually expanding to other North American universities. Since 2006, Facebook allows everyone to register from 13 years old (or older), except in the case of a handful of nations, where the age limit is 14 years. As of December 2022, Facebook claimed 3 billion monthly active users. As of October 2023, Facebook ranked as the 3rd most visited website in the world, with 22.56% of its traffic coming from the United States. It was the most downloaded mobile app of the 2010s. Facebook can be accessed from devices with Internet connectivity, such as personal computers, tablets and smartphones. After registering, users can create a profile revealing information about themselves. They can post text, photos and multimedia which are shared with any other users who have agreed to be their friend or, with different privacy settings, publicly. Users can also communicate directly with each other with Messenger, join common-interest groups, and receive notifications on the activities of their Facebook friends and the pages they follow. The subject of numerous controversies, Facebook has often been criticized over issues such as user privacy (as with the Cambridge Analytica data scandal), political manipulation (as with the 2016 U.S. elections) and mass surveillance. Facebook has also been subject to criticism over psychological effects such as addiction and low self-esteem, and various controversies over content such as fake news, conspiracy theories, copyright infringement, and hate speech. Commentators have accused Facebook of willingly facilitating the spread of such content, as well as exaggerating its number of users to appeal to advertisers. This recording reflects the Wikipedia text as of 02:08 UTC on Wednesday, 6 March 2024. For the full current version of the article, see Facebook on Wikipedia. This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License. Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes. Follow us on Mastodon at @wikioftheday@masto.ai. Also check out Curmudgeon's Corner, a current events podcast. Until next time, I'm Kimberly Standard.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Works in Progress: The Long Journey to Doing Good Better by Dustin Moskovitz [Linkpost], published by Nathan Young on February 14, 2024 on The Effective Altruism Forum. @Dustin Moskovitz has written a piece on his reflections on doing good, EA, FTX and other stuff. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Today, hosts Michael Sacca and Mike Belsito dive into the intricacies of a product that has sparked both admiration and frustration among its users: Asana. As they unpack Asana's evolution, they explore its inception by co-founders Justin Rosenstein and Dustin Moskovitz, with Dustin's departure from Facebook marking the genesis of a vision for modernizing project management. From its beta days to the 2011 public launch, Asana's rapid growth and intuitive interface captured the hearts of users, boasting a retention rate of over 75% within a year. But it wasn't just about project management; Asana's flexibility saw users employing it for myriad purposes, reflecting the product's adaptability and appeal. Despite stiff competition, Asana's strategic decisions, including offering the product free for teams of up to 30 people, solidified its position in the market. Fast forward to the present day, where the spotlight shines on Asana's latest innovation: Automation. With insights from Anna Marie Clifton, the mastermind behind this transformative feature, listeners gain a behind-the-scenes look at the product development process. Anna's departure from Coinbase underscores her commitment to reshaping the future of work through Asana. Through strategic alignment and narrative development, Anna and her team set out to tackle the age-old problem of "work about work." Focusing on streamlining workflows and addressing pain points such as cumbersome handoffs, Asana's Automation feature promises to revolutionize how teams collaborate and communicate. Join Sacca and Belsito as they navigate the Asana Automation journey, uncovering the driving forces behind this game-changing innovation and its potential to redefine productivity in the modern workplace. Tune in for an insightful discussion that explores the intersection of product development, user experience, and the ever-evolving landscape of work. Learn more about your ad choices. Visit megaphone.fm/adchoices
Oliver Jay is a sales and expansion specialist. Oliver was Chief Revenue Officer at Asana and led the company's global expansion. He grew the team from 20 to 450 people and increased international income to 40% of Asana's total revenue. Prior to this, Oliver built the first business sales team at Dropbox, and led the company's expansion into the Asia-Pacific region while tripling ARR. Oliver is now an advisor and leadership coach focused on assisting founders and executives in scaling their businesses. — In today's episode, we discuss: Common mistakes PLG companies make The “PLG trap” and how to avoid it The playbook for transitioning into enterprise How and when to build an enterprise sales team How PLG companies can break $10 billion market cap Why it's difficult to emulate Atlassian, Slack or Salesforce — Referenced: Airtable: https://www.airtable.com/ Asana: https://asana.com/ Atlassian: https://www.atlassian.com/ Bitbucket: https://bitbucket.org/product/ Confluent: https://www.confluent.io/ Daniel Shapero: https://www.linkedin.com/in/dshapero/ Datadog: https://www.datadoghq.com/ Dennis Woodside: https://www.linkedin.com/in/dennis-woodside-341302/ Dropbox: https://www.dropbox.com/ Dustin Moskovitz: https://www.linkedin.com/in/dmoskov/ Jay Simons: https://www.linkedin.com/in/jaysimons/ Jira: https://www.atlassian.com/software/jira Justin Rosenstein: https://www.linkedin.com/in/justinrosenstein/ Kim Scott: https://www.linkedin.com/in/kimm4/ Salesforce: https://www.salesforce.com/ Slack: https://slack.com/ The PLG Trap: https://www.linkedin.com/pulse/plg-trap-oliver-jay/ The seed, land, and expand framework: https://www.endgame.io/blog/seed-land-expand-framework Zendesk: https://www.zendesk.com/ — Where to find Oliver Jay: LinkedIn: https://www.linkedin.com/in/oliverjayleadership/ Website: https://www.oliverjayleadership.com/ — Where to find Brett Berson: Twitter/X: https://twitter.com/brettberson LinkedIn: https://www.linkedin.com/in/brett-berson-9986094 — Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter: https://twitter.com/firstround YouTube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast — Timestamps: (00:00) Introduction (02:23) Differences between PLG and enterprise companies (05:56) Avoiding the “PLG trap” (07:39) Transitioning to enterprise feels like building two companies (10:57) Thinking about user value versus company value (13:58) The relationship between OKRs and executive champions (14:59) Dropbox had almost no company value (15:33) The strategy PLG companies should avoid (18:30) Why Dropbox is worth $10b, not $50b (19:41) The story of Asana's expansion (21:16) Asana's unique customer success team (23:27) How product strategy relates to finding champions (25:03) How Asana structured its GTM org (27:11) What Oliver would have done differently with Asana's GTM (29:45) Getting executive-level buy-in (31:49) Asana's concept of “selling clarity” (33:18) An inside look at Asana's transition into enterprise (37:59) The champion tree framework (40:43) Structuring Asana's early enterprise sales team (44:27) The impact of company size on GTM (47:20) Common sales mistake (48:29) The seed, land, and expand framework (51:43) Oliver's advice to founders (54:13) Why building horizontally may be a mistake (55:32) Common challenges faced by PLG companies (58:30) How PLG companies can break the $10b market cap (60:17) Why emulating Atlassian's 
playbook is difficult (63:21) People who had an outsized impact on Oliver
Apple is going to make it easier to send messages between iPhones and Android, and appears to have finally given in to pressure from Google, Microsoft, and the European Union. Apple resisted the software update with RCS, which stands for Rich Communication Services, for a very long time. Apple has always been intensely focused on 'its' iMessage system and saw no need to share its tricks with the Androids. But it now looks like it will happen after all, and that is good news for users on both sides: thanks to RCS technology, more SMS features can be shared. Apple users can, for example, text Android users over Wi-Fi rather than only over mobile networks. They can also send larger video and photo files, manage group chats more easily, and see whether messages have been received and read. The news does not come entirely out of the blue: regulators had been putting pressure on Apple for a while, as had Google and Samsung. Apple also has to comply with the requirements of the European Union's Digital Markets Act, which demands that services from large companies be more interoperable with other platforms. Apple says the RCS software update will arrive as early as 2024. Google said in a statement that it is 'very happy to see Apple take the first step by joining in embracing RCS.' Also in the Tech Update: Elon Musk is under heavy fire now that it has become known that his social media platform X hosts many antisemitic posts. Earlier it emerged that IBM is pulling its advertising from X because ads from major American brands, including IBM, appeared on X next to posts glorifying the ideas of Adolf Hitler and the Third Reich. Facebook co-founder Dustin Moskovitz is even calling on Elon Musk to step down. De Schaal van Hebben: a run on the crompouce after it went viral on TikTok: social media is becoming ever more important in launching and hyping a product. What do Bas, Nina, and Ruth think of the cross between a croissant and a tompouce, and of the variations: the moorsant, do-pouce, and olie-pouce? See omnystudio.com/listener for privacy information.
This episode is brought to you by 5-Bullet Friday, my very own email newsletter. Welcome to another episode of The Tim Ferriss Show, where it is my job to deconstruct world-class performers to tease out the routines, habits, et cetera that you can apply to your own life. This is a special inbetweenisode, which serves as a recap of the episodes from last month. It features a short clip from each conversation in one place so you can easily jump around to get a feel for the episode and guest. Based on your feedback, this format has been tweaked and improved since the first recap episode. For instance, @hypersundays on Twitter suggested that the bios for each guest can slow the momentum, so we moved all the bios to the end. See it as a teaser. Something to whet your appetite. If you like what you hear, you can of course find the full episodes at tim.blog/podcast. Please enjoy! *This episode is brought to you by 5-Bullet Friday, my very own email newsletter that every Friday features five bullet points highlighting cool things I've found that week, including apps, books, documentaries, gadgets, albums, articles, TV shows, new hacks or tricks, and—of course—all sorts of weird stuff I've dug up from around the world. It's free, it's always going to be free, and you can subscribe now at tim.blog/friday. *Timestamps: Dustin Moskovitz: 00:03:08; Daniil and David Liberman: 00:10:41; Justin Gary: 00:15:27:08; Dr. Shirley Sahrmann: 00:20:04. Full episode titles: Dustin Moskovitz, Co-Founder of Asana and Facebook — Energy Management, Coaching for Endurance, No Meeting Wednesdays, Understanding the Real Risks of AI, Embracing Frictionless Work with AI, The Value of Holding Stories Loosely, and More (#686); The Brothers Who Live One Life — The Incredible Adventures of David and Daniil Liberman (#689); Justin Gary — Taking the Path Less Traveled, The Phenomenon of "Magic: The Gathering," How Analytical People Can Become "Creative" People, Finding the Third Right Answer, and How to Escape Your Need for Control (#687); Dr. Shirley Sahrmann — A Legendary PT Does a Deep Dive on Tim's Low-Back Issues, Teaches How to Unlearn Painful Patterns, Talks About Movement as Medicine (or Poison), and More (#685). *For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast. For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors. Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday. For transcripts of episodes, go to tim.blog/transcripts. Discover Tim's books: tim.blog/books. Follow Tim: Twitter: twitter.com/tferriss Instagram: instagram.com/timferriss YouTube: youtube.com/timferriss Facebook: facebook.com/timferriss LinkedIn: linkedin.com/in/timferriss. Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil de Grasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, Margaret Atwood, Mark Zuckerberg, Peter Thiel, Dr. Gabor Maté, Anne Lamott, Sarah Silverman, Dr. Andrew Huberman, and many more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Antonio Garcia Martinez is back from his travels, and joins Dan Romero and Erik Torenberg to dive into his current theory of ‘The End of History'. They also discuss the cancellation of intellectual figures like Richard Hanania, contrast the political situation in Israel with political polarization in the states, and discuss liberalism vs. religion. Towards the end, they talk about social media, crypto, Elon vs Zuck, and Sam Bankman-Fried. We're proudly sponsored by Vanta. Get $1000 off Vanta with https://www.vanta.com/zen RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics Moment of Zen is part of the Turpentine podcast network. Learn more: www.turpentine.co TIMESTAMPS: (00:00) Episode Preview (02:13) Elon Musk vs. Dustin Moskovitz (05:00) Decarceration in SF (07:53) What problems should we focus on for the biggest impact? (12:05) Waiting for Antonio and reflecting on his conversion (15:30) Moment of Zen: Who's listening and why? (18:00) Behind the scenes at Turpentine network (19:43) Updates: Spindl and Antonio's travels to France and Israel (21:27) Sponsors: Vanta | NetSuite (26:04) Mapping the left and the right in Israel vs in the US (34:15) End of history (37:10) Is Richard Hanania canceled? (39:50) The new Right (41:35) Liberalism LARPing (43:20) LindyMan (44:50) The movies (46:40) Why hasn't there been new religions? (52:20) Christian vs Jewish narrative (56:08) Is Modernity a death cult? (57:10) How do we fix the birth rate? (01:04:40) Updates: Farcaster and Dan's take on how we usher in a new era for social media (01:08:00) Crypto infrastructure companies (01:13:00) The underrated impact of Apple's app store (01:15:10) What our interest in Elon Musk vs. Mark Zuckerberg at the Coliseum reveals about us (01:18:10) The story about SBF (01:20:00) Shkreli season (check out In The Arena) LINKS: Bruno Maçães, History Has Begun: https://www.amazon.com.au/History-Has-Begun-Bruno-Macaes/dp/1787383016 Patrick Deneen, Why Liberalism Failed: https://www.amazon.com/Why-Liberalism-Failed-Politics-Culture/dp/0300223447 Dara Horn, People Love Dead Jews: https://www.amazon.com/People-Love-Dead-Jews-Reports/dp/B09CFYVY3F/ X: @antoniogm (Antonio) @dwr (Dan) @eriktorenberg (Erik) @MOZ_Podcast SPONSORS: Vanta | NetSuite Are you building a business? If you're looking for SOC 2, ISO 27001, GDPR or HIPAA compliance, head to Vanta. Achieving compliance can actually unlock major growth for your company and build customer loyalty. Vanta automates up to 90% of Compliance work, getting you audit-ready in weeks instead of months and saving 85% of associated costs. Moment of Zen listeners get $1000 off at www.vanta.com/zen NetSuite has 25 years of providing financial software for all your business needs. 
More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform, head to NetSuite: http://netsuite.com/ZEN and download your own customized KPI checklist.
Brought to you by ROKA Eyewear high-quality sunglasses and glasses, Wealthfront high-yield savings account, and Shopify global commerce platform providing tools to start, grow, market, and manage a retail business. Dustin Moskovitz (@moskov) is co-founder and CEO at Asana, a leading work-management platform for teams. Asana's mission is to help humanity thrive by enabling all teams to work together effortlessly. Prior to Asana, he co-founded Facebook and was a key leader within the technical staff, first in the position of CTO and then later as VP of Engineering. Dustin attended Harvard University as an economics major for two years before moving to Palo Alto, California, to work full time at Facebook. Please enjoy! *This episode is brought to you by ROKA Eyewear! ROKA makes the world's most versatile eyewear—packing all the same features used by Olympic gold medalists and world champions into stylish everyday sunglasses and glasses. I'm incredibly impressed with ROKA. The quality is outstanding, and a lot of my friends who are elite athletes wear them. I've been using their Rory blue-light glasses after sunset, and I feel the improvement in my sleep quality. With more than 19,000 five-star reviews, ROKA has created a solution that active people love. Plus, they hand-build their glasses, sunglasses, and reading glasses all in the USA. Check out my favorite frames and get 20% off your first order at Roka.com and use code TIM20. *This episode is also brought to you by Wealthfront! Wealthfront is an app that helps you save and invest your money. Right now, you can earn 4.8% APY—that's the Annual Percentage Yield—with the Wealthfront Cash Account. That's more than eleven times more interest than if you left your money in a savings account at the average bank, according to FDIC.gov. It takes just a few minutes to sign up, and then you'll immediately start earning 4.8% interest on your savings. And when you open an account today, you'll get an extra fifty-dollar bonus with a deposit of five hundred dollars or more. Visit Wealthfront.com/Tim to get started. *This episode is also brought to you by Shopify! Shopify is one of my favorite platforms and one of my favorite companies. Shopify is designed for anyone to sell anywhere, giving entrepreneurs the resources once reserved for big business. In no time flat, you can have a great-looking online store that brings your ideas to life, and you can have the tools to manage your day-to-day and drive sales. No coding or design experience required. Go to shopify.com/Tim to sign up for a one-dollar-per-month trial period. It's a great deal for a great service, so I encourage you to check it out. Take your business to the next level today by visiting shopify.com/Tim. *For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast. For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors. Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday. For transcripts of episodes, go to tim.blog/transcripts. Discover Tim's books: tim.blog/books. Follow Tim: Twitter: twitter.com/tferriss Instagram: instagram.com/timferriss YouTube: youtube.com/timferriss Facebook: facebook.com/timferriss LinkedIn: linkedin.com/in/timferriss. Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil de Grasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, Margaret Atwood, Mark Zuckerberg, Peter Thiel, Dr. Gabor Maté, Anne Lamott, Sarah Silverman, Dr. Andrew Huberman, and many more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Preventing Cancer with a Vaccine w/ Stephen Johnston of Calviri - BRT S04 EP17 (179) 4-23-2023 Things We Learned This Week Calviri is working on a Vaccine to PREVENT Cancer, currently largest animal clinical trial Inflammation - starting points of bad cells Cancer - bad cells replicate Could Prevent other diseases & extend longevity of people's lives - ex.- help w/ dementia Tumors make bad proteins Calviri vaccine works on RNA (proteins) kills tumor, & arms immune system Guest: Stephen Johnston, Founding CEO, Calviri Inc. LinkedIn: HERE https://calviri.com/ Bio: Chief Executive Officer & Chairman of the Board Stephen Albert Johnston is the inventor of Calviri's central technologies. In addition to Calviri, he has been a founder of Eliance, Inc. (Macrogenics), Synbody Biotechnology and HealthTell, Inc. He is Director of the Arizona State University Biodesign Institute's Center for Innovations in Medicine and Professor in the School of Life Sciences. He has published almost 200 peer-reviewed papers and holds 45 patents. Prior to his appointment at ASU he was Professor and Director of the Center for Biomedical Inventions at UT-Southwestern Medical Center and Professor of Biology and Biomedical Engineering at Duke University. He is a member of the National Academy of Inventors. Dr. Johnston received his B.S. and Ph.D. degrees from the University of Wisconsin. Calviri Inc. We are determined to offer humanity a better life, free from cancer. While our goal is hugely ambitious, we are intensely driven to rid the planet of worry from cancer. Calviri's mission is to provide affordable products worldwide that will end deaths from cancer. We are a fully integrated healthcare company developing a broad spectrum of vaccines and companion diagnostics that prevent and treat cancer for those either at risk or diagnosed. We focus on using frameshift neoantigens derived from errors in RNA processing to provide pioneering products against cancer. The company is a spin-out of the Biodesign Institute, Arizona State University, located in Phoenix, AZ. We have the largest dog vaccine trial in the world underway at three premier veterinary universities. The five-year trial will assess the performance of a preventative cancer vaccine. Notes: Seg. 1 Calviri is working to develop a vaccine to prevent cancer, not cure, but prevent. You would take the vaccine like any other vaccine. Currently running the largest animal clinical trial, with 800 dogs. They are in year 3 of a 5-year trial. Dr. Johnston has been working on this for 15 years thus far. There will be 2 phases, with Phase 1 being animal testing to create a dog vaccine. This is a $3 – 5 billion industry. Next year, Calviri wants to launch with FDA approval. Phase 2 is human clinical trials. These will be therapeutic trials, working on treating cancer. The best part of Calviri's research and vaccine is that it could prevent other diseases as well, in turn extending people's longevity and perhaps helping with dementia. Inflammation is the starting point. Defined: a localized physical condition in which part of the body becomes reddened, swollen, hot, and often painful, especially as a reaction to injury or infection. In a normal inflammatory response, immune cells produce chemicals that can kill a pathogen. These chemicals, known as reactive oxygen species, can also damage the DNA of normal cells, which increases the risk of mutations that could lead to cancer. Bad cells, or zombie cells, are put out. 
Bad cells don't die; they cause inflammation in other cells. Cancer is when bad cells replicate uncontrollably. Sometimes this orderly process breaks down, and abnormal or damaged cells grow and multiply when they shouldn't. These cells may form tumors, which are lumps of tissue. Tumors can be cancerous or not cancerous (benign). It can also cause bone dysfunction. A tumor is a normal cell, not part of a community like an infection. The body is made up of tissues and cells (a community or tightly packed group of cells). All cells experience changes with aging. They become larger and are less able to divide and multiply. Among other changes, there is an increase in pigments and fatty substances inside the cell (lipids). Many cells lose their ability to function, or they begin to function abnormally. A person's immune system fixes the community of cells & wipes out 'rogue' tumors. Seg. 2 Stephen Johnston's background: trained in science, and also an inventor with a degree in biochemistry. Was working at UT-Southwestern Medical Center in Dallas, TX. Started working at ASU about 15 years ago to develop what would become the Calviri research. ASU supporting the Calviri patent. Started Calviri Inc. circa 2018. The patent & I.P. were spun out to Calviri, with an exclusive license to Calviri. ASU owns no equity, which is unusual. Gets % of profits – did not invest. Venture Capital or VC wants equity in any deal they invest in. They also do not want any universities having ownership as it complicates a deal. Calviri Board has 6 members, with a few members also providing funding; this is an uncommon setup. Calviri stock is all common. One of the board members is the former CEO of Humana. $2 Bill – Open Philanthropy is a philanthropic funder. Our mission is to help others as much as we can with the resources available to us. Our main funders are Cari Tuna and Dustin Moskovitz, a co-founder of Facebook and Asana. This organization is providing funding for health companies. High risk – 2019 - $6 5 mil Calviri 4/2023 Worldwide MKT – affordable Seg. 3 Clinical trial phase, do hundreds of trials. Common treatment for a person who has cancer: create a personalized cancer vaccine with an estimated cost of $100k. DNA sequencing is done, and that treatment is for that person only. Calviri's mission is to create an off-the-shelf & affordable product. Stepping stone in medicine development, goes from therapeutic to prevention. Moderna, one of the big pharma companies that made the Covid vaccine, mRNA vaccine. Survey blood w/ stage 1 or 3 cancer, search for commonalities. Intel chip system being used in biochemistry machines. Intel says its processors are behind efforts to find new breakthroughs in life sciences research and healthcare in a number of countries. It took 10 years of work to test blood. DNA transcription produces a single-stranded RNA molecule that is complementary to one strand of DNA. In the first step, the information in DNA is transferred to a messenger RNA (mRNA) molecule by way of a process called transcription. Processing mistakes at the RNA level. Reduced to pieces to become proteins – the tumor makes bad protein, which looks like an infection. Take pieces of bad protein & make a vaccine, then inject the vaccine & kill the tumor. Teach your body, arm the immune system before foreign particles are created. Pre-emptively give the vaccine before there are any cancer cells. Seg. 4 The common theme of Calviri is simplicity. There are simple ways to do x. Everyone said it's complicated to treat cancer. There are 200 types of cancer. 
Calviri is working to have a big impact, plus make their vaccine simple & affordable. They believe the logical solution is vaccines. Their trials with dogs have shown it can prevent cancer. Trial – Find the common traits. As the expression goes: the light is better under the lamppost. Common research is done on DNA to learn about cancer. A person with cancer has their DNA mapped out, and the treatment created is for them only. The harder method is to look at RNA. Found parts in the tumor that are foreign. 1 preventive vaccine working with 'common' traits in RNA; there are 40 component pieces of proteins. There are hundreds (100s) of uncommon traits. The immune system has sensor cells. Immune system response: it will identify foreign cells & destroy them. Auto 'self' immune disease - attacks all cells. Good cells – working too well, and attacking non-bad cells. When we age, it is common for the immune system to break down. This makes people more prone to cancer or other diseases. Anti-aging is a topic getting attention and funding. Jeff Bezos has invested in Altos Labs. Altos is pursuing biological reprogramming technology, a way to rejuvenate cells in the lab that some scientists think could be extended to revitalize entire animal bodies, ultimately prolonging human life. Thanks to Joan Kerber-Walker of Az Bio for the intro to Stan. AZ Bio & Life Sciences Innovation w/ Joan Koerber-Walker - BRT S04 EP10 (172) 3-5-2023 FULL Show w/ Joan of AZ Bio: Click HERE AZ Tech Council Shows: HERE *Includes Best of AZ Tech Council show from 2/12/2023 Tech Topic: HERE Best of Tech: HERE 'Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT Thanks for Listening. Please Subscribe to the BRT Podcast. Business Roundtable with Matt Battaglia The show where Entrepreneurs, High Level Executives, Business Owners, and Investors come to share insight and ideas about the future of business. BRT 2.0 looks at the new trends in business, and how classic industries are evolving. Common Topics Discussed: Business, Entrepreneurship, Investing, Stocks, Cannabis, Tech, Blockchain / Crypto, Real Estate, Legal, Sales, Charity, and more… BRT Podcast Home Page: https://brt-show.libsyn.com/ 'Best Of' BRT Podcast: Click Here BRT Podcast on Google: Click Here BRT Podcast on Spotify: Click Here More Info: https://www.economicknight.com/podcast-brt-home/ KFNX Info: https://1100kfnx.com/weekend-featured-shows/ Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI ruin mostly rests on strong claims about alignment and deployment, not about society, published by Rob Bensinger on April 24, 2023 on LessWrong. Dustin Moskovitz writes on Twitter: My intuition is that MIRI's argument is almost more about sociology than computer science/security (though there is a relationship). People won't react until it is too late, they won't give up positive rewards to mitigate risk, they won't coordinate, the govt is feckless, etc. And that's a big part of why it seems overconfident to people, bc sociology is not predictable, or at least isn't believed to be. And Stefan Schubert writes: I think it's good @robbensinger wrote a list of reasons he expects AGI ruin. It's well-written. But it's notable and symptomatic that 9/10 reasons relate to the nature of AI systems and only 1/10 (discussed in less detail) to the societal response. Whatever one thinks the societal response will be, it seems like a key determinant of whether there'll be AGI ruin. Imo the debate on whether AGI will lead to ruin systematically underemphasises this factor, focusing on technical issues. It's useful to distinguish between warnings and all-things-considered predictions in this regard. When issuing warnings, it makes sense to focus on the technology itself. Warnings aim to elicit a societal response, not predict it. But when you actually try to predict what'll happen all-things-considered, you need to take the societal response into account in a big way As such I think Rob's list is better as a list of reasons we ought to take AGI risk seriously, than as a list of reasons it'll lead to ruin My reply is: It's true that in my "top ten reasons I expect AGI ruin" list, only one of the sections is about the social response to AGI risk, and it's a short section. But the section links to some more detailed discussions (and quotes from them in a long footnote): Four mindset disagreements behind existential risk disagreements in ML The inordinately slow spread of good AGI conversations in ML Inadequate Equilibria Also, discussing the adequacy of society's response before I've discussed AGI itself at length doesn't really work, I think, because I need to argue for what kind of response is warranted before I can start arguing that humanity is putting insufficient effort into the problem. If you think the alignment problem itself is easy, then I can cite all the evidence in the world regarding "very few people are working on alignment" and it won't matter. If you think a slowdown is unnecessary or counterproductive, then I can point out that governments haven't placed a ceiling on large training runs and you'll just go "So? Why should they?" Society's response can only be inadequate given some model of what's required for adequacy. That's a lot of why I factor out that discussion into other posts. More importantly, contra Dustin, I don't see myself as having strong priors or complicated models regarding the social situation. 
Eliezer Yudkowsky similarly says he doesn't have strong predictions about what governments or communities will do in this or that situation (beyond anti-predictions like "they probably won't do specific thing X that's wildly different from anything they've done before"): [Ngo][12:26] The other thing is that, for pedagogical purposes, I think it'd be useful for you to express some of your beliefs about how governments will respond to AI I think I have a rough guess about what those beliefs are, but even if I'm right, not everyone who reads this transcript will be [Yudkowsky][12:28] Why would I be expected to know that? I could talk about weak defaults and iterate through an unending list of possibilities. Thinking that Eliezer thinks he knows that to any degree of specificity feels like I'm being weakmanned! [Ngo][12:28] I'm not claiming you have any specifi...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Criticism Thread: What things should OpenPhil improve on?, published by anonymousEA20 on February 4, 2023 on The Effective Altruism Forum. This post was prompted by three other posts on the EA forum. A recent post raises the alarm about abuse of power and relationships in the EA community. An earlier post suggested that the EA community welcomes shallow critiques but is less receptive to deep critiques. Dustin Moskovitz has recently mentioned that the EA forum functions as a conflict of interest appeals board for Open Philanthropy. Yet on the EA forum, there doesn't seem to be many specific criticisms of Open Philanthropy. For the sake of epistemics, I wanted to create this post and invite individuals to voice any issues they may have with Open Philanthropy or propose potential solutions. I'll start by discussing the funding dynamics within the field of technical alignment (alignment theory, applied alignment), with a particular focus on Open Philanthropy. In the past two years, the technical alignment organisations which have received substantial funding include: Anthropic (The president of Anthropic is the wife of Open Philanthropy's CEO. The CEO of Anthropic is the brother-in-law of Open Philanthropy's CEO.) ARC (The CEO is married to an Open Philanthropy grantmaker, according to facebook.) CHAI SERI MATS (A director/main leader has had a relationship with an Open Philanthropy grantmaker.) Redwood Research (A director/main leader is engaged to an Open Philanthropy grantmaker, according to facebook. Open Philanthropy's main technical alignment funders are also working out of their office.) All of these organisations are situated in the San Francisco Bay Area. Although many people are thinking about the alignment problem, there is much less funding for technical alignment researchers for other locations (e.g., the east coast of the US, the UK, or other parts of Europe). This collectively indicates that, all else being equal, having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary. As a concerned EA, this seems incredibly alarming and in need of significant reform. Residency in the San Francisco Bay Area is also a must. A skeptical perspective would be that Open Philanthropy allocates its resources to those with the most political access. Since it's hard to solve the alignment problem, the only people grantmakers end up trusting to do so are those who are very close to them. This is a problem with Open Philanthropy's design and processes, and points to the biases of the technical alignment grantmakers and decision makers. This seems almost inevitable given (1) community norms around conflicts of interest and (2) Open Philanthropy's strong centralization of power. This is not to say that any specific individual is to blame. Instead processes, structure, and norms are more useful to direct reforms towards. Right now, even if a highly respected alignment researcher thinks what you do is extremely valuable, the decision ultimately can be blocked by an Open Philanthropy grantmaker, which could cause people to leave alignment altogether. One common suggestion involves making the grantmaking process more democratic or less centralised. For example, the “regranting” approach has been successful for other grantmakers. 
This involves selecting a large pool of grantmakers or regrantors who have the autonomy to make their own decisions. With more grantmakers, there is less potential for Goodharting by individuals and reduces the likelihood of funding only those who are known best by a few Open Philanthropy staff. Additionally, Open Philanthropy can still choose regrantors who are more aligned with EA values or have previously demonstrated good judgement. A smaller thing that could help...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA community does not own its donors' money, published by Nick Whitaker on January 18, 2023 on The Effective Altruism Forum. A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive sounding concepts (democratic, transparent, accountable) without well thought through mechanisms. I will try to expand my thoughts on these at a later time. Today I focus on one element that seems at best confused and at worst highly destructive: large-scale, democratic control over EA funds. This has been mentioned in a few proposals: It originated (to my knowledge) in Carla Zoe Cremer's Structural Reforms proposal: Within 5 years: EA funding decisions are made collectively First set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms (Note this is classified as a 'List A' proposal - per Cremer: "ideas I'm pretty sure about and thus believe we should now hire someone full time to work out different implementation options and implement one of them") It was also reiterated in the recent mega-proposal, Doing EA Better: Within 5 years, EA funding decisions should be made collectively Furthermore (from the same post): Donors should commit a large proportion of their wealth to EA bodies or trusts controlled by EA bodies to provide EA with financial stability and as a costly signal of their support for EA ideas And: The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years (See also the Deciding better together section from the same post) How would this happen? One could try to personally convince Dustin Moskovitz that he should turn OpenPhil funds over to an EA Community panel, that it would help OpenPhil distribute its funds better. I suspect this would fail, and proponents would feel very frustrated. But, as with other discourse, these proposals assume that because a foundation called Open Philanthropy is interested in the "EA Community" that the "EA Community" has/deserves/should be entitled to a say in how the foundation spends their money. Yet the fact that someone is interested in listening to the advice of some members of a group on some issues does not mean they have to completely surrender to the broader group on all questions. They may be interested in community input for their funding, via regranting for example, or invest in the Community, but does not imply they would want the bulk of their donations governed by the EA community. (Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks? These details seem incredibly important when this is the set of people given major control of funding, in lieu of current expert funders) So at a basic level, the assumption that EA has some innate claim to the money of its donors is basically incorrect. (I understand that the claim is also normative). But for now, the money possessed by Moskovitz and Tuna, OP, and GoodVentures is not the property of the EA community. So what, then, to do? 
Can you demand ten billion dollars? Say you can't convince Moskovitz and OpenPhil leadership to turn over their funds to community deliberation. You could try to create a cartel of EA organizations to refuse OpenPhil donations. This seems likely to fail - it would involve asking tens, perhaps hundreds, of people to risk their livelihoods. It would also be an incredibly poor way of managing the relationship between the community and its most generous funder--and...
Holden Karnofsky is the co-CEO of Open Philanthropy and co-founder of GiveWell. He is also the author of one of the most interesting blogs on the internet, Cold Takes.

We discuss:
* Are we living in the most important century?
* Does he regret OpenPhil's 30 million dollar grant to OpenAI in 2016?
* How does he think about AI, progress, digital people, & ethics?

Highly recommend!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Timestamps
(0:00:00) - Intro
(0:00:58) - The Most Important Century
(0:06:44) - The Weirdness of Our Time
(0:21:20) - The Industrial Revolution
(0:35:40) - AI Success Scenario
(0:52:36) - Competition, Innovation, & AGI Bottlenecks
(1:00:14) - Lock-in & Weak Points
(1:06:04) - Predicting the Future
(1:20:40) - Choosing Which Problem To Solve
(1:26:56) - $30M OpenAI Investment
(1:30:22) - Future Proof Ethics
(1:37:28) - Integrity vs Utilitarianism
(1:40:46) - Bayesian Mindset & Governance
(1:46:56) - Career Advice

Transcript

Dwarkesh Patel All right, today I have the pleasure of speaking with Holden Karnofsky who is the co-CEO of Open Philanthropy. In my opinion, Holden is one of the most interesting intellectuals alive. Holden, welcome to the Lunar Society. Holden Karnofsky Thanks for having me. The Most Important Century Dwarkesh Patel Let's start off by talking about The Most Important Century thesis. Do you want to explain what this is for the audience? Holden Karnofsky My story is that I originally co-founded an organization called GiveWell that helps people decide where to give as effectively as possible. While I'm no longer as active as I once was there, I'm on its board. It's a website called GiveWell.org that makes good recommendations about where to give to charity to help a lot of people. As we were working at GiveWell, we met Cari Tuna and Dustin Moskovitz. Dustin is the co-founder of Facebook and Asana and we started a project that became Open Philanthropy to try to help them give away their large fortune and help as many people as possible. So I've spent my career looking for ways to do as much good as possible with a dollar, an hour, or basically whatever resources you have (especially with money). I've developed this professional specialization in looking for ideas that are underappreciated, underrated, and tremendously important because a lot of the time that's where I think you can find what you might call an “outsized return on investment.” There are opportunities to spend money and get an enormous impact because you're doing something very important that's being ignored by others. So it's through that kind of professional specialization that I've actively looked for interesting ideas that are not getting enough attention. Then I encountered the Effective Altruist Community, which is a community of people basically built around the idea of doing as much good as you can. It's through that community that I encountered the idea of the most important century. It's not my idea at all, I reached this conclusion with the help and input of a lot of people. The basic idea is that if we developed the right kind of AI systems this century (and that looks reasonably likely), this could make this century the most important of all time for humanity. So now let's talk about the basic mechanics of why that might be or how you might think about that. One thing is that if you look back at all of economic history (the rate at which the world economy has grown), you see acceleration.
You see that it's growing a lot faster today than it ever was. One theory of why that might be or one way of thinking about it through the lens of basic economic growth theory is that in normal circumstances, you can imagine a feedback loop where you have people coming up with ideas, and then the ideas lead to greater productivity and more resources. When you have more resources, you can also have more people, and then those people have more ideas. So you get this feedback loop that goes people, ideas, resources, people, ideas, resources. If you're starting a couple of hundred years ago and you run a feedback loop like that, standard economic theory says you'll get accelerating growth. You'll get a rate of economic growth that goes faster and faster. Basically, if you take the story of our economy to date and you plot it on a chart and do the kind of simplest thing you can to project it forward, you project that our economy will reach an infinite growth rate this century. The reason that I currently don't think that's a great thing to expect by default is that one of the steps of that feedback loop broke a couple hundred years ago. So it goes more people, more ideas, more resources, more people, more ideas, more resources. But, a couple hundred years ago, people stopped having more children when they had more resources. They got richer instead of more populous. This is all discussed on the Most Important Century page on my blog, Cold Takes. What happens right now is that when we have more ideas and we have more resources, we don't end up with more people as a result. We don't have that same accelerating feedback loop. If you had AI systems that could do all the things humans do to advance science and technology (meaning the AI systems could fill in that “more ideas” part of the loop), then you could get that feedback loop back. You could get sort of this unbounded, heavily accelerating, explosive growth in science and technology. That's the basic dynamic at the heart of it and a way of putting it that's trying to use familiar concepts from economic growth theory. Another way of putting it might just be, “Gosh, if we had AI systems that could do everything humans do to advance science and technology, that would be insane.” What if we were to take the things that humans do to create new technologies that have transformed the planet so radically and we were able to completely automate them so that every computer we have is potentially another mind working on advancing technology? So either way, when you think about it, you could imagine the world changing incredibly quickly and incredibly dramatically. I argue in the Most Important Century series that it looks reasonably likely, in my opinion, more than 50-50, that this century will see AI systems that can do all of the key tasks that humans do to advance science and technology. If that happens, we'll see explosive progress in science and technology. The world will quickly become extremely different from how it is today. You might think of it as if there were thousands of years of changes packed into a much shorter time period. If that happens, then I argue that you could end up in a deeply unfamiliar future. I give one example of what that might look like using this hypothetical technology idea called digital people. That would be sort of people that live in virtual environments that are kind of simulated, but also realistic and exactly like us.
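A minimal numerical sketch of the people-ideas-resources loop described above (the functional forms, parameter values, and function names here are illustrative assumptions of this write-up, not figures from the conversation or from any particular growth model): when extra resources translate into extra people, the growth rate itself keeps rising; hold population fixed, as it roughly has been since the demographic transition, and the same toy model settles into ordinary, non-accelerating growth.

```python
# Toy version of the people -> ideas -> resources -> people feedback loop.
# All functional forms and parameter values are illustrative assumptions, not
# figures taken from the conversation or from any specific growth model.

def output_series(years, resources_feed_population):
    people, ideas = 1.0, 1.0
    outputs = []
    for _ in range(years):
        ideas += 0.0005 * people * ideas       # people generate ideas, building on the existing stock
        if resources_feed_population:
            people = (ideas * people) ** 0.5   # pre-industrial regime: more resources -> more people
        outputs.append(ideas * people)         # output (resources) rises with both ideas and people
    return outputs

def annual_growth_rate(outputs, year):
    return outputs[year] / outputs[year - 1] - 1.0

loop_closed = output_series(1800, resources_feed_population=True)
loop_broken = output_series(1800, resources_feed_population=False)

# With the loop closed, the growth rate itself keeps rising (accelerating growth);
# with population decoupled from resources, growth settles at a constant rate.
for label, series in [("loop closed", loop_closed), ("loop broken", loop_broken)]:
    print(label,
          f"year 100 growth: {annual_growth_rate(series, 100):.3%}",
          f"year 1750 growth: {annual_growth_rate(series, 1750):.3%}")
```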
When you picture that kind of advanced world, I think there is a decent reason to think that if we did get that rate of scientific and technological advancement, we could basically hit the limits of science and technology. We could basically find most of what there is to find and end up with a civilization that expands well beyond this planet, has a lot of control over the environment, is very stable for very long periods of time, and looks sort of post-human in a lot of relevant ways. If you think that, then this is basically our last chance to shape how this happens. The most important century hypothesis in a nutshell is that if we develop AI that can do all the things humans do to advance science and technology, we could very quickly reach a very futuristic world that's very different from today's. It could be a very stable, very large world, and this is our last chance to shape it. The Weirdness of Our Time Dwarkesh Patel Gotcha. I and many other people are going to find that very wild. Could you walk us through the process by which you went from working in global development to thinking this way? In 2014, for example, you had an interview or a conversation and this is a quote from there. “I have looked at the situation in Africa, have an understanding of the situation in Africa, and see a path of doing a lot of good in Africa. I don't know how to look into the far future situation, don't understand the far future situation, and don't see a path to doing good on that front I feel good about.” Maybe you can walk me through how you got from there to where you are today. Holden Karnofsky Firstly, I want to connect this back to how this relates to the work I was doing at GiveWell, and why this all falls under one theme. If we are on the cusp for this century of creating these advanced AI systems, then we could be looking at a future that's either very good or very bad. I think there are decent arguments that if we move forward without caution and we develop sloppily designed AI systems, they could end up with goals of their own. We would end up with a universe that contains very little that humans value or a galaxy that does or a world where very powerful technologies are used by ill-meaning governments to create a world that isn't very good. We could also end up with a world where we manage to eliminate a lot of forms of material scarcity and have a planet that's much better off than today's. A lot of what I ask is how can we help the most people possible per dollar spent? If you ask how we can help the most people possible per dollar spent, then funding some work to help shape that transition and make sure that we don't move forward too incautiously, and that we do increase the odds that we do get like a good future world instead of a bad future one, is helping a huge number of people per dollar spent. That's the motivation. You're quoting an argument I was having where we posted a transcript back in 2014–– a time that was part of my journey of getting here. I was talking to people who were saying, “Holden, you want to help a lot of people with your resources. You should be focused on this massive event that could be coming this century that very few people are paying attention to, and there might be a chance to make this go well or poorly for humanity.” So I was saying, “Gosh, like that sure is interesting.” And I did think it was interesting. That's why I was spending the time and having those conversations.
But I said, “When I look at global poverty and global health, I see what I can do. I see the evidence. I see the actions I can take. I'm not seeing that with this stuff.” So what changed? I would say a good chunk of what changed is maybe like the most boring answer possible. I just kept at it. I was sitting there in 2014 saying, “Gosh, this is really interesting, but it's all a bit overwhelming. It's all a bit crazy. I don't know how I would even think about this. I don't know how I would come up with a risk from AI that I actually believed was a risk and could do something about today.” Now, I've just been thinking about this for a much longer time period. I do believe that most things you could say about the far future are very unreliable and not worth taking action on, but I think there are a few things one might say about what a transition to very powerful AI systems could look like. There are some things I'm willing to say would be bad if AI systems were poorly designed, had goals of their own, and ended up kind of running the world instead of humans. That seems bad.I am more familiar today than I was then with the research and the work people can do to make that less likely and the actions people can take to make that less likely–– so that's probably more than half the answer. But another thing that would be close to half the answer is that I think there have been big changes in the world of AI since then. 2014 was the beginning of what's sometimes called the “deep learning revolution”. Since then, we've basically seen these very computationally intensive (but fundamentally simple) AI systems achieve a lot of progress on lots of different unrelated tasks. It's not crazy to imagine that the current way people are developing AI systems, cutting-edge AI systems, could take us all the way to the kind of extremely powerful AI systems that automate roughly everything humans do to advance science and technology. It's not so wild to imagine that we could just keep on going with these systems, make them bigger, put more work into them, but basically stay on the same path and you could get there. If you imagine doing that, it becomes a little bit less daunting to imagine the risks that might come up and the things we could do about them. So I don't think it's necessarily the leading possibility, but it's enough to start thinking concretely about the problem. Dwarkesh Patel Another quote from the interview that I found appealing was “Does the upper crust of humanity have a track record of being able to figure out the kinds of things MIRI claims to have figured out?” By the way, for context for the viewers, MIRI is the organization Eliezer was leading, which is who you were talking to at the time. Holden Karnofsky I don't remember exactly what kinds of things MIRI was trying to figure out and I'm not sure that I even understood what they were that well. I definitely think it is true that it is hard to predict the future, no matter who you are, no matter how hard you think, and no matter how much you've studied. I think parts of our “world” or memeplex or whatever you want to call it, overblow this at least a little bit. I think I was buying into that a little bit more than I could. 
In 2014, I would have said something like, “Gosh, no one's ever done something like making smart statements about what several decades out of our future could look like or making smart statements about what we would be doing today to prepare for it.” Since then, I think a bunch of people have looked into this and looked for historical examples of people making long-term predictions and long-term interventions. I don't think it's amazing, but I think I wrote a recent blog post entitled The Track Record of Futurists. It seems… fine. “Fine” is how I put it, where I don't think there's anyone who has demonstrated a real ability to predict the future with precision and know exactly what we should do. I also don't think humans' track record of this is so bad and so devastating that we shouldn't think we are capable of at least giving it a shot. If you enter into this endeavor with self-awareness about the fact that everything is less reliable than it appears and feels at first glance and you look for the few things that you would really bet on, I think it's worth doing. I think it's worth the bet. My job is to find 10 things we could do, and have nine of them fail embarrassingly, on the off chance that one of them becomes such a big hit that it makes up for everything else. I don't think it's totally crazy to think we could make meaningful statements about how things we do today could make these future events go better, especially if the future events aren't crazily far away (if they're within the next few decades). That's something I've changed my mind on, at least to some degree. Dwarkesh Patel Gotcha. Okay, so we'll get to forecasting in a second, but let's continue on the object-level conversation about the most important century. I want to make sure I have the thesis right. Is the argument that because we're living in a weird time, we shouldn't be surprised if something transformative happens in a century or is the argument that something transformative could happen this century? Holden Karnofsky It's a weird time. So something we haven't covered yet, but I think is worth throwing in is that a significant part of the ‘Most Important Century series' is making the case that even if you ignore AI, there's a lot of things that are very strange about the time that our generation lives in. The reason I spent so much effort on this is because back in 2014, my number one objection to these stories about transformative AI wasn't anything about whether the specific claims about AI or economic models or alignment research made sense. This whole thing sounded crazy and was just suspicious. It's suspicious if someone says to you, “You know, this could be the most important century of all time for humanity.” I titled the series that way because I wanted people to know that I was saying something crazy and that I should have to defend it. I didn't want to be backpedaling or soft-pedaling or hiding what a big claim I was making. I think my biggest source of skepticism was how I didn't have any specific objection. It sounds crazy and suspicious to say that we might live in one of the most significant times, or the most significant time, for humanity ever. So a lot of my series is saying that it is weird to think that, but we already have a lot of evidence that we live in an extraordinarily weird time that would be on the short list of contenders for the most important time ever–– even before you get into anything about AI, and just using completely commonly accepted facts about the world.
For example, if you chart the history of economic growth, you'll see that the last couple hundred years have seen faster growth by a lot than anything else in the history of humanity or the world. If you chart anything about scientific and technological developments, you'll see that everything significant is packed together in the recent past. There's almost no way to cut it. I've looked at many different cuts of this. There's almost no way to cut it that won't give you that conclusion. One way to put it is that the universe is something like 11 or 12 billion years old. Life on Earth is three billion years old. Human civilization is a blink of an eye compared to that. We're in this really tiny sliver of time, the couple hundred years when we've seen a huge amount of technological advancement and economic growth. So that's weird. I also talk about the fact that the current rate of economic growth seems high enough that we can't keep it going for that much longer. If it went for another 10,000 years, that's another blink of an eye and galactic time scales. It looks to me like we would run out of atoms in the galaxy and wouldn't have anywhere to go. So I think there are a lot of signs that we just live in a really strange time. One more thing that I'll just throw in there–– I think a lot of people who disagree with my take would say, “Look, I do believe eventually we will develop space colonization abilities. We could go to the stars, fill up the galaxy with life, and maybe have artificial general intelligence, but to say that this will happen in a century is crazy. I think it might be 500 years. I think it might be a thousand years. I think it might be 5000 years.” A big point I make in the series is how I say, “Well, even if it's 100000 years, that's still an extremely crazy time to be in in the scheme of things.” If you make a graphic timeline and you show my view versus yours, they look exactly the same down to the pixel. So there are already a lot of reasons to think we live in a very weird time. We're on this planet where there's no other sign of life anywhere in the galaxy. We believe that we could fill up the galaxy with life. That alone would make us among the earliest life that has ever existed in the galaxy–– a tiny fraction of it. So that's a lot of what the series is about. I'll answer this question explicitly. You ask, “Is this series about whether transformative AI will come and make this century weird?” or is it about “This century could be weird, and therefore transformative AI will come?” The central claim is that transformative AI could be developed in this century and the sections about ‘how weird a time we live in' are just a response to an objection. It's a response to a point of skepticism. It's a way of saying there are already a lot of reasons to think we live in a very weird time. So actually, this thing about AI is only a moderate quantitative update, not a complete revolution in the way you're thinking about things. Dwarkesh Patel There's a famous comedian who has a bit where he's imagining what it must have been like to live in 10BC. Let's say somebody comes with the proof that current deep learning techniques are not scalable for some reason and that transformative AI is very unlikely this century. I don't know if this is a hypothetical where that would happen, but let's just say that it is. Even if this is a weird time in terms of economic growth, does that have any implications other than transformative AI? 
Holden Karnofsky I encourage people to go to my series because I have a bunch of charts illustrating this and it could be a little bit hard to do concisely now. But having learned about just how strange the time we live in is when you look at it in context, I think the biggest thing I take away from this is how we should really look for the next big thing. If you'd been living 300 years ago and you'd been talking about the best way to help people, a lot of people might have been talking about various forms of helping low-income people. They probably would have been talking about spreading various religious beliefs. It would have seemed crazy to think that what you should be thinking about, for example, was the steam engine and how that might change the world, but I think the Industrial Revolution was actually an enormous deal and was probably the right thing to be thinking about, if there was any way to be thinking about it: how that would change the world and what one might do to make that a world that could be better. So that's basically where I'm at. I just think that as a world, as a global civilization, we should place a really high priority on saying that we live in a weird time. Growth has been exploding, accelerating over the last blink of an eye. We really need to be nervous and vigilant about what comes next and think about all the things that could radically transform the world. We should make a list of all the things that might radically transform the world, make sure we've done everything we can to think about them and identify the ways we might be able to do something today that would actually help. Maybe after we're done doing all that, we can have a lot of the world's brightest minds doing their best to think of stuff, and when they can't think of any more, then we can go back to all the other things that we worry about. Right now the world invests so little in that kind of speculative, “Hey, what's the next big thing?” Even if it's not super productive to do so, even if there's not that much to learn, I feel the world should be investing more in that because the stakes are extremely high. I think it's a reasonable guess that we're living in a world that's recently been incredibly transformed by the Industrial Revolution and the future could be incredibly transformed by the next thing. I just don't think this gets a lot of discussion in basically any circles. If it got some, I would feel a lot more comfortable. I don't think the whole world should just obsess over what the next transformative event is, but I think right now there's so little attention to it. The Industrial Revolution Dwarkesh Patel I'm glad you brought up the Industrial Revolution because I feel like there are two implicit claims within the most important century thesis that don't seem perfectly compatible. One is that we live in an extremely wild time and that the transition here is potentially wilder than any other transition there has been before. The second is we have some sense of what we can be doing to make sure this transition goes well. Do you think that somebody at the beginning of the Industrial Revolution, knowing what they knew then, could have done something significant to make sure that it went as favorably as possible? Or do you think that that's a bad analogy for some reason?
Holden Karnofsky It's a pretty good analogy for being thought-provoking and for thinking, “Gosh, if you had seen the Industrial Revolution coming in advance and this is when economic growth really reached a new level back in the 1700s and 1800s, what could you have done?” I think part of the answer is that it's not that clear, and I think that is a bit of an argument that we shouldn't get too carried away today by thinking that we know exactly what we can do. But I don't think the answer is quite nothing. I have a goofy Cold Takes post that I never published and may never publish because I lost track of it. What it basically says is “What if you'd been in that time and you had known the Industrial Revolution was coming or you had thought it might be?” You would ask yourself what you could be doing. One answer you might have given is you might have said, “Well, gosh, if this happens, whatever country it happens in might be disproportionately influential. What would be great is if I could help transform the thinking and the culture in that country to have a better handle on human rights and more value on human rights and individual liberties and a lot of other stuff–– and gosh, it kind of looks like people were doing that and it looks like it worked out.” So this is the Enlightenment. I even give this goofy example–– I could look it up and it's all kind of a trollish post. But the example is someone's thinking, “Hey, I'm thinking about this esoteric question about what a government owes to its citizens” or, “When does a citizen have a right to overthrow a government or when is it acceptable to enforce certain beliefs and not?” Then the other person in the dialogue is just like, “This is the weirdest, most esoteric question. Why does this matter? Why aren't you helping poor people?” But these are the questions that the Enlightenment thinkers were thinking about. I think there is a good case that they came up with a lot of stuff that really shaped the whole world since then because of the fact that the UK became so influential and really laid the groundwork for a lot of stuff about the rights of the governed, free speech, individual rights, and human rights. Then I go to the next analogy. It's like we're sitting here today and someone is saying, “Well, instead of working on global poverty, I'm studying this esoteric question about how you get an AI system to do what you want it to do instead of doing its own thing.” I think it's not completely crazy to see them as analogous. Now, I don't think this is what the Enlightenment thinkers were actually doing. I don't think they were saying this could be the most important millennium, but it is interesting that it doesn't look like there was nothing to be had there. It doesn't look like there's nothing you could have come up with. In many ways, it looks like what the Enlightenment thinkers were up to had the same esoteric, strange, overly cerebral feel at the time and ended up mattering a huge amount. So it doesn't feel like there's zero precedent either. Dwarkesh Patel Maybe I'm a bit more pessimistic about that because I think the people who were working on individual rights frameworks weren't anticipating an industrial revolution. I feel like the type of person who'd actually anticipate the industrial revolution would have a political philosophy that was actually probably a negative given, you know… Karl Marx. If you saw something like this happening, I don't think it would be totally not obvious.
Holden Karnofsky I mean, I think my basic position here is that I'm not sitting here highly confident. I'm not saying there's tons of precedent and we know exactly what to do. That's not what I believe. I believe we should be giving it a shot. I think we should be trying and I don't think we should be totally defeatist and say, “Well, it's so obvious that there's never anything you could have come up with throughout history and humans have been helpless to predict the future.” I don't think that is true. I think that's enough of an example to kind of illustrate that. I mean, gosh, you could make the same statement today and say, “Look, doing research on how to get AI systems to behave as intended is a perfectly fine thing to do at any period in time.” It's not like a bad thing to do. I think John Locke was doing his stuff because he felt it was a good thing to do at any period in time, but the thing is that if we are at this crucial period of time, it becomes an even better thing to do and it becomes magnified to the point where it could be more important than other things. Dwarkesh Patel The one reason I might be skeptical of this theory is that I could say, “Oh, gosh, if you look throughout history, people were often convinced that they were living in the most important time,” or at least an especially important time. If you go back, everybody could be right about living in the most important time. Should you just have a very low prior that anybody is right about this kind of thing? How do you respond to that kind of logic?Holden Karnofsky First of all, I don't know if it's really true that it's that common for people to say that they're living in the most important time in history. This would be an interesting thing to look at. But just from stuff I've read about past works on political philosophy and stuff, I don't exactly see this claim all over the place. It definitely happens. It's definitely happened. I think a way of thinking about it is that there are two reasons you might think you are especially important. One is that you actually are and you've made reasonable observations about it. Another is that you want to be or you want to think you are so you're self-deceiving. So over the long sweep of history, a lot of people will come to this conclusion for the second reason. Most of the people who think they're the most important will be wrong. So that's all true. That certainly could apply to me and it certainly could apply to others. But I think that's just completely fine and completely true. I think we should have some skepticism when we find ourselves making these kinds of observations. At the same time, I think it would be a really bad rule or a really bad norm that every time you find yourself thinking the stakes are really high or that you're in a really important position, you just decide to ignore the thought. I think that would be very bad.If you imagine a universe where there actually are some people who live in an especially important time, and there are a bunch of other people who tell stories to themselves about whether they do, how would you want all those people to behave? To me, the worst possible rule is that all those people should just be like, “No, this is crazy, and forget about it.” I think that's the worst possible rule because the people who are living at the important time will then do the wrong thing. I think another bad rule would be that everyone should take themselves completely seriously and just promote their own interests ahead of everyone else's. 
A rule I would propose over either of them is that all these people should take their beliefs reasonably seriously and try to do the best thing according to their beliefs, but should also adhere to common sense standards of ethical conduct and not do too much “ends justify the means” reasoning. It's totally good and fine to do research on alignment, but people shouldn't be telling lies or breaking the law in order to further their ends. Those would be my proposed rules. When we have these high-stakes, crazy thoughts, we should do what we can about them and not go so crazy about them that we break all the rules of society. That seems like a better rule. That's the rule I'm trying to follow. Dwarkesh Patel Can you talk more about that? If for some reason we could be convinced that the expected value calculation was immense, and you had to break some law in order to increase the odds that the AI goes well, I don't know how hypothetical this would be. Is it just that you're not sure whether you would be right and so you'd just want to err on the side of caution? Holden Karnofsky Yeah, I'm really not a fan of “ends justify the means” reasoning. The thing that looks really, really bad is people saying it's worth doing horrible things and coercing each other and using force to accomplish these things because the ends we're trying to get to are more important than everything else. I'm against that stuff. I think that stuff looks a lot worse historically than people trying to shape the future and do helpful things. So I see my main role in the world as trying to shape the future and do helpful things. I can do that without doing a bunch of harmful, common-sense-unethical stuff. Maybe someday there will be one of these intense tradeoffs. I haven't really felt like I've run into them yet. If I ever ran into one of those intense tradeoffs, I'd have to ask myself how confident I really am. The current level of information and confidence I have is, in my opinion, not enough to really justify the means. Dwarkesh Patel Okay, so let's talk about the potential implausibility of continued high growth. One thing somebody might think is, “OK, maybe 2 percent growth can't keep going on forever, but maybe the growth slows down to point five percent a year.” As you know, small differences in growth rates have big effects on the end result. So by the point that we've exhausted all the possible growth in the galaxy, we'll probably be able to expand to other galaxies. What's wrong with that kind of logic where there's point five percent growth that still doesn't imply a lock-in or would it be weird if that implied a lock-in? Holden Karnofsky I think we might want to give a little bit more context here. One of the key arguments of the most important century is that it's just part of one of the arguments that we live in a strange time. I'm also arguing that the current level of economic growth just looks too high to go on for another 10,000 years or so. One of the points I make, which is a point I got from Robin Hanson, is that if you just take the current level of economic growth and extrapolate it out 10,000 years, you end up having to conclude that we would need stuff that is worth as much as the whole world economy is today –– multiple times that –– per atom in the galaxy. If you believe we can't break the speed of light, then we can't get further than that. We can't get outside the galaxy. So in some sense, we run out of material.
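A back-of-envelope version of the Robin Hanson-style extrapolation just mentioned, using assumed round numbers rather than anything from the transcript (roughly 10^67 atoms in the galaxy, and compounding measured against today's world economy): at 2 percent annual growth, 10,000 years of compounding multiplies output by about 10^86, far past one present-day world economy per atom. Under these same assumptions the crossover comes after roughly 8,000 years at 2 percent and roughly 31,000 years at 0.5 percent, the same order of magnitude as the figures traded in the exchange that follows.

```python
import math

# Back-of-envelope check of the Robin Hanson-style extrapolation above.
# Assumed round number (not from the transcript): roughly 1e67 atoms in the galaxy;
# "one world economy per atom" means total output has grown by a factor of ~1e67.
ATOMS_IN_GALAXY_LOG10 = 67  # order of magnitude only; estimates vary by a few powers of ten

def log10_growth_factor(annual_rate, years):
    # log10 of the total compounding factor, kept in log space to avoid float overflow
    return years * math.log10(1.0 + annual_rate)

def years_to_one_economy_per_atom(annual_rate):
    # Years of compounding before output exceeds one present-day world economy per atom
    return ATOMS_IN_GALAXY_LOG10 / math.log10(1.0 + annual_rate)

print(f"2% growth for 10,000 years multiplies output by ~10^{log10_growth_factor(0.02, 10_000):.0f}")
print(f"years to one world economy per atom at 2.0% growth: {years_to_one_economy_per_atom(0.02):,.0f}")
print(f"years to one world economy per atom at 0.5% growth: {years_to_one_economy_per_atom(0.005):,.0f}")
```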
So you're saying, “Alright but what if the growth rate falls to 0.5 percent?” Then I'm kind of like, “OK, well, so the growth rate now I ballparked it in the post is around 2 percent. That's the growth rate generally in the most developed countries. Let's say it falls to 0.5 percent.” Just like for how long? Did you calculate how long it would take to get to the same place? Dwarkesh Patel I think it was like 25,000 years. 0.5 percent gets you like one world-size economy. It's 10,000 versus 25,000, but 25,000 is the number of light years between us and like the next galaxy. Holden Karnofsky That doesn't sound right. I don't think this galaxy calculation is very close. There's also going to be a bunch of dead space. As you get to the outer reach of the galaxy, there's not going to be as much there. That doesn't sound super right, but let's just roll with it. I mean, sure, let's just say that you had 2 percent today and then growth went down to 0.5 percent and stayed there forever. I'm pretty sure that's still too big. I'm pretty sure you're still going to hit limits in some reasonable period of time, but that would still be weird on its own. It would just be like, “Well, we lived in the 200-year period when we had 2 percent growth and then we had 0.5 percent growth forever.” That would still make this kind of an interesting time. It would be the most dynamic, fastest-changing time in all of human history. Not by a ton, but it's also like you pick the number that's the closest and the most perfectly optimized here. So if it went down to 0.1 percent or even down to 0.01 percent, then it would take longer to run out of stuff. But it would be even stranger with the 2 percent versus the 0.01 percent. So I don't really think there's any way out of “Gosh, this looks like it's probably going to end up looking like a very special time or a very weird time.” Dwarkesh Patel This is not worth getting hung up on, but from that perspective, then the century where we had 8 percent growth because of the Industrial Revolution–– would you say that maybe that's the most important century? Holden Karnofsky Oh, sure. Yeah, totally. No, the thing about rapid growth is it's not supposed to be on its own. By growth standards, this century looks less special than the last one or two. It's saying that the century is one of a handful –– I think I say “one of the 80 most significant centuries,” or something –– by economic growth standards. That's only one argument, but then I look at a lot of other ways in which this century looks unusual. To say that something is the most important century of all time sounds totally nuts because there are so many centuries in the history of humanity, especially if you want to think about it on galactic time scales. Even once you narrow it down to 80, it's just way less weird. If I've already convinced you using kind of non-controversial reasoning that we're one of the 80 most important centuries, it shouldn't take me nearly as much further evidence to say, actually, this one might be number one out of 80 because your starting odds are more than 1 percent. So to get you up to 10 percent or 20 percent or 30 percent doesn't necessarily require a massive update the way that it would if we're just starting from nowhere. Dwarkesh Patel I guess I'm still not convinced that just because this is a weird century, this has any implications for why or whether we should see transformative AI this century.
If we have a model about when transformative AI happens, is one of the variables that goes into that “What is the growth rate in 2080?” It just feels weird to have this as a parameter for when the specific technological development is going to happen. Holden Karnofsky It's just one argument in the series. I think the way that I would come at it is I would just say, “Hey, look at AI systems. Look at what they're doing. Look at how fast the rate of progress is. Look at these five different angles on imagining when AI might be able to do all the things humans do to advance science and technology.” Just imagine that we get there this century. Wouldn't it be crazy to have AI that could do all the things humans do to advance science and technology? Wouldn't that lead to just a lot of crazy stuff happening? There's only ever been one species in the history of the universe that we know of that can do the kinds of things humans do. Wouldn't it be weird if there were two? That would be crazy. One of them would be a new one we built that could be copied at will, and run at different speeds on any hardware you have. That would be crazy. Then you might come back and say, “Yeah, that would be crazy. This is too crazy. Like I'm ruling this out because this is too crazy.” Then I would say, “OK, well, we have a bunch of evidence that we live in an unusual, crazy time.” And you actually should think that there's a lot of signs that this century is not just a random century picked from a sample of millions of centuries. So that's the basic structure of the argument. As far as the growth rate in zero AD, I think it matters. I think you're asking the question, why do the dynamics of growth in zero AD matter at all for this argument? I think it's because it's just a question of, “How does economic growth work generally and what is the trend that we're on, and what happens if that trend continues?” If around zero AD growth was very low but accelerating, and if that was also true at one hundred AD and a thousand AD and negative a thousand or, you know, a thousand BC, then it starts to point to a general pattern. Growth is accelerating and maybe accelerating for a particular reason, and therefore you might expect more acceleration. AI Success Scenario Dwarkesh Patel Alright, let's talk about transformative AI then. Can you describe what success looks like concretely? Are humans part of the post-transformative AI world? Are we hoping that these AIs become enslaved gods that help us create a utopia? What does the concrete success scenario look like? Holden Karnofsky I mean, I think we've talked a lot about the difficulty of predicting the future, and I think I do want to emphasize that I really do believe in that. My attitude to the most important century is not at all, “Hey, I know exactly what's going to happen and I'm making a plan to get us through it.” It's much more like there's a general fuzzy outline of a big thing that might be approaching us. There are maybe two or three things we can come up with that seem good to do. Everything else we think about, we're not going to know if it's good to do or bad to do. So I'm just trying to find the things that are good to do so that I can make things go a little bit better or help things go a little bit better. That is my general attitude. It's like if you were on a ship in a storm and you saw some very large, fuzzy object obscured by the clouds, you might want to steer away from it.
You might not want to say, “Well, I think that is an island and I think there's probably a tiger on it. So if we go and train the tiger in the right way, blah, blah, blah, blah, blah,” you don't want to get into that. Right? So that is the general attitude I'm taking.What does success look like to me? Success could look like a lot of things, but one thing success would look like to me would frankly just be that we get something not too different from the trajectory we're already on. So, in other words, if we can have systems that behaved as intended, acted as tools and amplifiers of humans, and did the things they're supposed to do. If we could avoid a world where those systems got sort of all controlled by one government or one person, we could avoid a world where that caused a huge concentration of power. If we could have a world where AI systems are just another technology that helps us do a lot of stuff, and we'd invent lots of other technologies and everything is relatively broadly distributed and everything works roughly as it's supposed to work, then you might be in a world where we continue the trend we've seen over the last couple of hundred years, which is that we're all getting richer. We're all getting more tools. We all hopefully get an increasing ability to understand ourselves, study ourselves, and understand what makes us happy, what makes us thrive. Hopefully, the world just gets better over time and we have more and more new ideas that thus hopefully make us wiser. I do think that in most respects, the world of today is a heck of a lot better than the world of 200 years ago. I don't think the only reason for that is wealth and technology, but I think they played a role. I think that if you'd gone back to 200 years ago and said, “Holden, how would you like the world to develop a bunch of new technologies as long as they're sort of evenly distributed and they behave roughly as intended and people mostly just get richer and discover new stuff?” I'd be like, “That sounds great!” I don't know exactly where we're going to land. I can't predict in advance whether we're going to decide that we want to treat our technologies as having their own rights. That's stuff that the world will figure out. But I'd like to avoid massive disasters that are identifiable because I think if we can, we might end up in a world where the future is wiser than we are and is able to do better things. Dwarkesh Patel The way you put it, AI enabling humans doesn't sound like something that could last for thousands of years. It almost sounds as weird as chimps saying “What we would like is humans to be our tools.” At best, maybe they could hope we would give them nice zoos. What is the role of humans in this in this future? Holden Karnofsky A world I could easily imagine, although that doesn't mean it's realistic at all, is a world where we build these AI systems. They do what they're supposed to do, and we use them to gain more intelligence and wisdom. I've talked a little bit about this hypothetical idea of digital people–– maybe we develop something like that. Then, after 100 years of this, we've been around and people have been having discussions in the public sphere, and people kind of start to talk about whether the AIs themselves do have rights of their own and should be sharing the world with us. Maybe then they do get rights. Maybe some AI systems end up voting or maybe we decide they shouldn't and they don't. 
Either way, you have this kind of world where there's a bunch of different beings that all have rights and interests that matter. They vote on how to set up the world so that we can all hopefully thrive and have a good time. We have less and less material scarcity. Fewer and fewer tradeoffs need to be made. That would be great. I don't know exactly where it ends or what it looks like. But I don't know. Does anything strike you as unimaginable about that? Dwarkesh Patel Yeah, the fact that you can have beings that can be copied at will, but also there's some method of voting..Holden Karnofsky Oh, yeah. That's a problem that would have to be solved. I mean, we have a lot of attention paid to how the voting system works, who gets to vote, and how we avoid things being unfair. I mean, it's definitely true that if we decided there was some kind of digital entity and it had the right to vote and that digital entity was able to copy itself–– you could definitely wreak some havoc right there. So you'd want to come up with some system that restricts how many copies you can make of yourself or restricts how many of those copies can vote. These are problems that I'm hoping can be handled in a way that, while not perfect, could be non-catastrophic by a society that hasn't been derailed by some huge concentration of power or misaligned systems. Dwarkesh Patel That sounds like that might take time. But let's say you didn't have time. Let's say you get a call and somebody says, “Holden, next month, my company is developing or deploying a model that might plausibly lead to AGI.” What does Open Philanthropy do? What do you do? Holden Karnofsky Well, I need to distinguish. You may not have time to avoid some of these catastrophes. A huge concentration of power or AI systems don't behave as intended and have their own goals. If you can prevent those catastrophes from happening, you might then get more time after you build the AIs to have these tools that help us invent new technologies and help us perhaps figure things out better and ask better questions. You could have a lot of time or you could figure out a lot in a little time if you had those things. But if someone said–– wait how long did you give me?Dwarkesh Patel A month. Let's say three months. So it's a little bit more. Holden Karnofsky Yeah, I would find that extremely scary. I kind of feel like that's one of the worlds in which I might not even be able to offer an enormous amount. My job is in philanthropy (and a lot of what philanthropists do historically or have done well historically), is we help fields grow. We help do things that operate on very long timescales. So an example of something Open Philanthropy does a lot of right now is we fund people who do research on alignment and we fund people who are thinking about what it would look like to get through the most important century successfully. A lot of these people right now are very early in their careers and just figuring stuff out. So a lot of the world I picture is like 10 years from now, 20 years from now, or 50 years from now. There's this whole field of expertise that got support when traditional institutions wouldn't support it. That was because of us. Then you come to me and you say, “We've got one week left. What do we do?” I'd be like, “I don't know. We did what we could do. We can't go back in time and try to prepare for this better.” So that would be an answer. 
I could say more specific things about what I'd say in the one to three-month time frame, but a lot of it would be flailing around and freaking out, frankly. Dwarkesh Patel Gotcha. Okay. Maybe we can reverse the question. Let's say you found out that AI actually is going to take much longer than you thought, and you have more than five decades. What changes? What are you able to do that you might not otherwise be able to do? Holden Karnofsky I think the further things are, the more I think it's valid to say that humans have trouble making predictions on long time frames. The more I'd be interested in focusing on other causes of very broad things we do, such as trying to grow the set of people who think about issues like this, rather than trying to specifically study how to get AI systems like today's to behave as intended. So I think that's a general shift, but I would say that I tend to feel a bit more optimistic on longer time frames because I do think that the world just isn't ready for this and isn't thinking seriously about this. A lot of what we're trying to do at Open Philanthropy is create support that doesn't exist in traditional institutions for people to think about these topics. That includes doing AI alignment research. That also includes thinking through what we want politically, and what regulations we might want to prevent disaster. I think those are a lot of things. It's kind of a spectrum. I would say, if it's in three months, I would probably be trying to hammer out a reasonable test of whether we can demonstrate that the AI system is either safe or dangerous.If we can demonstrate it's dangerous, use that demonstration to really advocate for a broad slowing of AI research to buy more time to figure out how to make it less dangerous. I don't know that I feel that much optimism. If this kind of AI is 500 years off, then I'm kind of inclined to just ignore it and just try and make the world better and more robust, and wiser. But I think if we've got 10 years, 20 years, 50 years, 80 years, something in that range, I think that is kind of the place where supporting early careers and supporting people who are going to spend their lives thinking about this would be beneficial. Then we flash forward to this crucial time and there are a lot more people who spent their lives thinking about it. I think that would be a big deal. Dwarkesh Patel Let's talk about the question of whether we can expect the AI to be smart enough to disempower humanity, but dumb enough to have that kind of goal. When I look out at smart people in the world, it seems like a lot of them have very complex, nuanced goals that they've thought a lot about what is good and how to do good. Holden Karnofsky A lot of them don't. Dwarkesh Patel Does that overall make you more optimistic about AIs? Holden Karnofsky I am not that comforted by that. This is a very, very old debate in the world of AI alignment. Eliezer Yudkowsky has something called the orthogonality thesis. I don't remember exactly what it says, but it's something like “You could be very intelligent about any goal. You could have the stupidest goal and be very intelligent about how to get it.” In many ways, a lot of human goals are pretty silly. A lot of the things that make me happy are not things that are profound or wonderful. They're just things that happen to make me happy. You could very intelligently try to get those things, but it doesn't give me a lot of comfort. 
I think basically my picture of how modern AI works is that you're basically training these systems by trial and error. You're basically taking an AI system, and you're encouraging some behaviors, while discouraging other behaviors. So you might end up with a system that's being encouraged to pursue something that you didn't mean to encourage. It does it very intelligently. I don't see any contradiction there. I think that if you were to design an AI system and you were kind of giving it encouragement every time it was getting more money into your bank account, you might get something that's very, very good at getting money into your bank account to the point where you'd going to disrupt the whole world to do that. You will not automatically get something that thinks, “Gosh, is this a good thing to do?” I think with a lot of human goals, there's not really a right answer about whether our goals actually make sense. They're just the goals we have.Dwarkesh Patel You've written elsewhere about how moral progress is something that's real, that's historically happened, and it corresponds to what actually counts as moral progress. Do you think there's a reason to think the same thing might happen with AI? Whatever the process is that creates moral progress?Holden Karnofsky I kind of don't in particular. I've used the term moral progress as just a term to refer to changes in morality that are good. I think there has been moral progress, but I don't think that means moral progress is something inevitable or something that happens every time you are intelligent. An example I use a lot is attitudes toward homosexuality. It's a lot more accepted today than it used to be. I call that moral progress because I think it's good. Some people will say, “Well, you know, I don't believe that morality is objectively good or bad. I don't believe there is any such thing as moral progress. I just think things change randomly.” That will often be an example I'll pull out and I'll say, “But do you think that was a neutral change?” I just think it was good. I think it was good, but that's not because I believe there's some underlying objective reality. It's just my way of tagging or using language to talk about moral changes that seem like they were positive to me. I don't particularly expect that an AI system would have the same evolution that I've had in reflecting on morality or would come to the same conclusions I've come to or would come up with moralities that seem good to me. I don't have any reason to think any of that. I do think that historically there have been some cases of moral progress. Dwarkesh Patel What do you think is the explanation for historical progress? Holden Karnofsky One thing that I would say is that humans have a lot in common with each other. I think some of history contains cases of humans learning more about the world, learning more about themselves, and debating each other. I think a lot of moral progress has just come from humans getting to know other humans who they previously were stereotyping and judging negatively and afraid of. So I think there's some way in which humans learning about the world and learning about themselves leads them to have kind of conclusions that are more reflective and more intelligent for their own goals. But, if you brought in something into the picture that was not a human at all, it might be very intelligent and reflective about its goals, but those goals might have zero value from our point of view. 
Dwarkesh Patel Recent developments in AI have made many people think that AI could happen much sooner than they otherwise thought. Has the release of these new models impacted your timelines? Holden Karnofsky Yeah, I definitely think that recent developments in AI have made me a bit more freaked out. Ever since I wrote The Most Important Century series and before that, there were years when Open Philanthropy was very interested in AI risk, but it's become more so as we've seen progress in AI. I think what we're seeing is we're seeing these very generic, simple systems that are able to do a lot of different tasks. I think people are interested in this. There are a lot of compilations of what GPT-3 is–– a very simple language model that, by the way, my wife and brother-in-law both worked on. This very simple language model just predicts the next word it's going to see in a stream of text. People have gotten it to tell stories. People got similar (though not identical) models to analyze and explain jokes. People have gotten it to play role-playing games, write poetry, write lyrics, answer multiple-choice questions, and answer trivia questions. One of the results that I found most ridiculous, strange and weird was this thing called Minerva, where people took one of these language models and with very little special intervention, they got it to do these difficult math problems and explain its reasoning and get them right about half the time. It wasn't really trained in a way that was very specialized for these math problems, so we just see AI systems having all these unpredictable human-like abilities just from having this very simple training procedure. That is something I find kind of wild and kind of scary. I don't know exactly where it's going or how fast. Dwarkesh Patel So if you think transformative AI might happen this century, what implications does that have for the traditional global health and well-being stuff that OpenPhilanthropy does? Will that have persistent effects of AI if it gets aligned? Will it create a utopia for us anyway? Holden Karnofsky I don't know about utopia. My general take is that anything could happen. I think my general take on this most important century stuff, and the reason it's so important is because it's easy to imagine a world that is really awesome and is free from scarcity and we see more of the progress we've seen over the last 200 years and we end up in a really great place. It's also easy to imagine a horrible dystopia. But my take is that the more likely you think all this is, the more likely you think transformative AI is, the more you should think that that should be the top priority, that we should be trying to make that go well instead of trying to solve more direct problems that are more short term. I'm not an extremist on this. So, OpenPhilanthropy does both.OpenPhilanthropy works on speculative far-off future risks and OpenPhil also does a bunch of more direct work. Again, we do direct and recommend a lot of money to give to those top charities, which do things like distributing bed nets in Africa to help prevent malaria and treat children for intestinal parasites. OpenPhilanthropy does a lot of advocacy for more money going to foreign aid or for better land use policies to have a stronger economy. We do a bunch of scientific research work that is more aimed at direct medical applications, especially in poor countries. So I support all that stuff. I'm glad we're doing it. 
It's just a matter of how real and how imminent you think this transformative AI stuff is. The more real and more imminent, the more of our resources should go into it. Dwarkesh Patel Yeah, that makes sense to me. I'm curious, whatever work you do elsewhere, does it still have persistent effects after transformative AI comes? Or do you think it'll basically wash out in comparison to the really big stuff? Holden Karnofsky I mean, I think in some sense, the effects are permanent in the sense that if you cause someone to live a healthier, better life, that's a significant thing that happened. Nothing will ever erase that life or make that life unimportant, but I think in terms of the effects on the future, I do expect it mostly to wash out. I expect that mostly whatever we do to make the world better in that way will not persist in any kind of systematic, predictable manner past these crazy changes. I think that's probably how things look pre and post-industrial revolution. There are probably some exceptions, but that's my guess.
Competition, Innovation, & AGI Bottlenecks
Dwarkesh Patel You've expressed skepticism towards the competition frame around AI, where you try to make capabilities go faster for the countries or companies you favor most. But elsewhere, you've used the “innovation as mining” metaphor, and maybe you can explain that when you're giving the answer. It seems like this frame should imply that the second most powerful AI company is probably right on the heels of the first most powerful. So if you think the first most powerful is going to take safety more seriously, you should try to boost them. How do you think about how these two different frames interact? Holden Karnofsky I think it's common for people who become convinced that AI could be really important to just jump straight to, “Well, I want to make sure that people I trust build it first.” That could mean my country, that could mean my friends, people I'm investing in. I have generally called that the competition frame, which is “I want to win a competition to develop AI,” and I've contrasted it with a frame that I also think is important, called the caution frame, which is that we need to all work together to be careful to not build something that spins out of control and has all these properties and behaves in all these ways we didn't intend. I do think that if we do develop these very powerful AI systems, we're likely to end up in a world where there are multiple players trying to develop it and they're all hot on each other's heels. I am very interested in ways for us all to work together to avoid disaster as we're doing that. I am maybe less excited than the average person who first learns about this and says, “I'm picking the one I like best and helping them race ahead.” Dwarkesh Patel Although I am someone interested in both, if you take the innovation as mining metaphor seriously, doesn't that imply that actually the competition is really a big factor here? Holden Karnofsky The innovation mining metaphor is from another bit of Cold Takes. It's an argument I make that you should think of ideas as being somewhat like natural resources in the sense that once someone discovers a scientific hypothesis or once someone writes a certain great symphony, that's something that can only be done once. That's an innovation that can only be done once. So it gets harder and harder over time to have revolutionary ideas because the most revolutionary, easiest-to-find ideas have already been found.
So there's an analogy to mining. I don't think it applies super importantly to the AI thing because all I'm saying is that success by person one makes success by person two harder. I'm not saying that it has no impact or that it doesn't speed things up. Just to use a literal mining metaphor, let's say there's a bunch of gold in the ground. It is true that if you rush and go get all that gold, it'll be harder for me to now come in and find a bunch of gold. That is true. What's not true is that it doesn't matter if you do it. I mean, you might do it a lot faster than me. You might do it a lot ahead of me. Dwarkesh Patel Fair enough. Maybe one piece of skepticism that somebody could have about transformative AI is that all this is going to be bottlenecked by the non-automatable steps in the innovation sequence. So there won't be these feedback loops that speed up. What is your reaction? Holden Karnofsky I think the single best criticism, and my biggest point of skepticism on this most important century stuff, is the idea that you could build an AI system that's very impressive and could do pretty much everything humans can do, but there might be one step that you still have to have humans do, and that could bottleneck everything. Then you could have the world not speed up that much and science and technology not advance that fast because, even though AIs are doing almost everything, humans are still slowing down this one step, or the real world is slowing down one step. Let's say real-world experiments to invent new technologies take how long they take. I think this is the best objection to this whole thing and the one that I'd most like to look into more. I do ultimately think that there's enough reason to think that if you had AI systems that had human-like reasoning and analysis capabilities, you shouldn't count on this kind of bottleneck causing everything to go really slow. I write about that in this piece called Weak Point in the Most Important Century: Full Automation. Part of this is how you don't need to automate the entire economy to get this crazy growth loop. You can automate just a part of it that specifically has to do with very important tech like energy and AI itself. Those actually seem, in many ways, less bottlenecked than a lot of other parts of the economy. So you could be developing better AI algorithms and AI chips, manufacturing them mostly using robots, and using those to come up with even better designs. Then you could also be designing more and more efficient solar panels, and using those to collect more and more energy to power your AIs. So a lot of the crucial pieces here just actually don't seem all that likely to be bottlenecked. You can be at the point where…
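As a rough illustration of why automating only the self-reinforcing slice matters so much, here is a deliberately crude toy model (my own sketch, not Karnofsky's; every number in it is arbitrary). Output grows in proportion to research effort; in the "automated" case a slice of output is reinvested in more copyable researchers, so the growth rate itself compounds.

```python
# Toy sketch (illustrative assumptions only): compare growth when research
# effort is fixed (a human bottleneck) versus when output can be reinvested
# into more researchers (AIs that can be copied), closing the feedback loop.

def years_to_limit(automated, limit=1e12, productivity=0.02, reinvest=0.1):
    output, researchers, year = 1.0, 1.0, 0
    while output < limit and year < 100_000:
        output *= 1 + productivity * researchers  # ideas raise output
        if automated:
            researchers += reinvest * output      # output buys more researchers
        year += 1
    return year

print("fixed research effort:     ", years_to_limit(False), "years")
print("reinvested research effort:", years_to_limit(True), "years")
```

With these made-up parameters the fixed case takes on the order of a thousand years to reach the cap, while the reinvested case gets there in decades; the point is only the qualitative gap, not the specific figures.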
Podcast: The Lunar Society (LS 37 · TOP 2.5%)
Episode: Nadia Asparouhova - Tech Elites, Democracy, Open Source, & Philanthropy
Release date: 2022-12-15
Nadia Asparouhova is currently researching what the new tech elite will look like at nadia.xyz. She is also the author of Working in Public: The Making and Maintenance of Open Source Software. We talk about how:
* American philanthropy has changed from Rockefeller to Effective Altruism,
* SBF represented the Davos elite rather than the Silicon Valley elite,
* Open source software reveals the limitations of democratic participation,
* & much more.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Intro
(0:00:26) - SBF was Davos elite
(0:09:38) - Gender sociology of philanthropy
(0:16:30) - Was Shakespeare an open source project?
(0:22:00) - Need for charismatic leaders
(0:33:55) - Political reform
(0:40:30) - Why didn't previous wealth booms lead to new philanthropic movements?
(0:53:35) - Creating a 10,000 year endowment
(0:57:27) - Why do institutions become left wing?
(1:02:27) - Impact of billionaire intellectual funding
(1:04:12) - Value of intellectuals
(1:08:53) - Climate, AI, & Doomerism
(1:18:04) - Religious philanthropy
Transcript
This transcript was autogenerated and thus may contain errors.
Nadia Asparouhova 0:00:00 You start with this idea that like democracy is great and like we should have tons and tons of people participating, tons of people participate, and then it turns out that like most participation is actually just noise and not that useful. That really squarely puts SBF into like the finance crowd much more so than startups or crypto. Founders will always talk about like building and like startups are like so important or whatever, and like what are all of them doing in their spare time? They're like reading books. They're reading essays, and then those like books and essays influence how they think about stuff. Dwarkesh Patel 0:00:26 Okay, today I have the pleasure of talking with Nadia Asparouhova. She is the author of Working in Public: The Making and Maintenance of Open Source Software, and she is currently researching what the new tech elite will look like. Nadia, welcome to the podcast. Thanks for having me. Yeah, okay, so this is perfect timing, obviously, given what's been happening with SBF. How much do you think SBF was motivated by effective altruism? Where do you place him in the whole dimensionality of idea machines and motivations? Nadia Asparouhova 0:01:02 Yeah, I mean, I know there are sort of like conflicting accounts going around. Like, I mean, just from my sort of like character study or looking at SBF, it seems pretty clear to me that he is sort of inextricably tied to the concepts of utilitarianism that then motivate effective altruism. The difference for me in sort of like where I characterize effective altruism is I think it's much closer to sort of like finance Wall Street elite mindset than it is to startup mindset, even though a lot of people associate effective altruism with tech people. So yeah, to me, like that really squarely puts SBF in sort of like the finance crowd much more so than startups or crypto. And I think that's something that gets really misunderstood about him. Dwarkesh Patel 0:01:44 Interesting. Yeah, I find that interesting because if you think of Jeff Bezos, when he started Amazon, he wasn't somebody like John Perry Barlow, who was just motivated by the free philosophy of the internet.
You know, he saw a graph of internet usage going up and to the right and he's like, I should build a business on top of this. And in a sort of loopholy way, try to figure out, like, what is the first thing you would want to put a SQL database on top of to ship and produce? And I think books was the answer. So and obviously, he also came from a hedge fund, right? Would you place somebody like him also in the old finance crowd rather than as a startup founder? Nadia Asparouhova 0:02:22 Yeah, it's kind of a weird one because he's both associated with the early computing revolution, but then also AWS was sort of like what kicked off all of the 2010s sort of startup wave. And I think in the way that he's started thinking about his public legacy and just from sort of his public behavior, I think he fits much more squarely now in that sort of tech startup elite mindset of the 2010s crowd more so than the Davos elite crowd of the 2000s. Dwarkesh Patel 0:02:47 What in specific are you referring to? Nadia Asparouhova 0:02:49 Well, he's come out and been like sort of openly critical about a lot of like Davos type institutions. He kind of pokes fun at mainstream media for not believing in him, not believing in AWS. And I think, because he sort of like spans across both of these generations, he's been able to see the evolution of like how maybe his earlier peers function versus the sort of second cohort of peers that he came across. But to me, he seems much more of the sort of like startup elite mindset. And I can kind of back up a little bit there. But what I associate with the Davos Wall Street kind of crowd is much more of this focus on quantitative thinking, measuring efficiency. And then also this like globalist mindset, like I think that the vision that they want to ensure for the world is this idea of like a very interconnected world where we, you know, sort of like the United Nations kind of mindset. And that is really like literally what the Davos gathering is. Whereas Bezos from his actions today feels much closer to the startup, like Y Combinator post-AWS kind of mindset of founders that really made their money by taking these non-obvious bets on talented people. So they were much less focused on credentialism. They were much more into this idea of meritocracy. I think we sort of forget like how commonplace this trope is of like, you know, the young founder in a dorm room. And that was really popularized by the 2010s cohort of the startup elite of being someone that may have like absolutely no skills, no background in industry, but can somehow sort of like turn the entire industry over on its head. And I think that was sort of like the unique insight of the tech startup crowd. And yeah, when I think about just sort of like some of the things that Bezos is doing now, it feels like he identifies with that much more strongly of being this sort of like lone cowboy or having this like one talented person with really great ideas who can sort of change the world. I think about the, what is it called? The Altos Institute or the new like science initiative that he put out where he was recruiting these like scientists from academic institutions and paying them really high salaries just to attract like the very best top scientists around the world. That's much more of that kind of mindset than it is about like putting faith in sort of like existing institutions, which is what we would see from more of like a Davos kind of mindset.
Dwarkesh Patel 0:05:16Interesting. Do you think that in the future, like the kids of today's tech billionaires will be future aristocrats? So effective altruism will be a sort of elite aristocratic philosophy. They'll be like tomorrow's Rockefellers. Nadia Asparouhova 0:05:30Yeah, I kind of worry about that actually. I think of there as being like within the US, we were kind of lucky in that we have these two different types of elites. We have the aristocratic elites and we have meritocratic elites. Most other countries I think basically just have aristocratic elites, especially comparing like the US to Britain in this way. And so in the aristocratic model, your wealth and your power is sort of like conferred to you by previous generations. You just kind of like inherit it from your parents or your family or whomever. And the upside of that, if there is an upside, is that you get really socialized into this idea of what does it mean to be a public steward? What does it mean to think of yourself and your responsibility to the rest of society as a privileged elite person? In the US, we have this really great thing where you can kind of just, you know, we have the American dream, right? So lots of people that didn't grow up with money can break into the elite ranks by doing something that makes them really successful. And that's like a really special thing about the US. So we have this whole class of meritocratic elites who may not have aristocratic backgrounds, but ended up doing something within their lifetimes that made them successful. And so, yeah, I think it's a really cool thing. The downside of that being that you don't really get like socialized into what does it mean to have this fortune and do something interesting with your money. You don't have this sort of generational benefit that the aristocratic elites have of presiding over your land or whatever you want to call it, where you're sort of learning how to think about yourself in relation to the rest of society. And so it's much easier to just kind of like hoard your wealth or whatever. And so when you think about sort of like what are the next generations, the children of the meritocratic elites going to look like or what are they going to do, it's very easy to imagine kind of just becoming aristocratic elites in the sense of like, yeah, they're just going to like inherit the money from their families. And they haven't also really been socialized into like how to think about their role in society. And so, yeah, all the meritocratic elites eventually turn into aristocratic elites, which is where I think you start seeing this trend now towards people wanting to sort of like spend down their fortunes within their lifetime or within a set number of decades after they die because they kind of see what happened in previous generations and are like, oh, I don't want to do that. Dwarkesh Patel 0:07:41Yeah, yeah, yeah. Well, it's interesting. You mentioned that the aristocratic elites have the feel that they have the responsibility to give back, I guess, more so than the meritocratic elites. But I believe that in the U.S., the amount of people who give to philanthropy and the total amount they give is higher than in Europe, right, where they probably have a higher ratio of aristocratic elites. Wouldn't you expect the opposite if the aristocratic elites are the ones that are, you know, inculcated to give back? 
Nadia Asparouhova 0:08:11 Well, I assume like most of the figures about sort of like Americans giving back are spread across all Americans, not just the wealthiest. Dwarkesh Patel 0:08:19 Yeah. So you would predict that among the top 10 percent of Americans, there's less philanthropy than the top 10 percent of Europeans? Uh, there's... Sorry, I'm not sure I understand the question. I guess, does the ratio of meritocratic to aristocratic elites change how much philanthropy there is among the elites? Nadia Asparouhova 0:08:45 Yeah, I mean, like here we have much more of a culture of, like even among aristocratic elites, this idea of like institution building or like large donations to like build institutions, whereas in Europe, a lot of the public institutions are created by government. And there's sort of this mentality of like private citizens don't experiment with public institutions. That's the government's job. And you see that sort of like pervasively throughout all of like European cultures. Like when we want something to change in public society, we look to government to like regulate or change it. Whereas in the U.S., it's kind of much more like choose your own adventure. And we don't really see the government as like the sole provider or shaper of public institutions. We also look to private citizens, and there are so many public institutions that we have now that were not started by government but were started by private philanthropists. And that's like a really unusual thing about the U.S. Dwarkesh Patel 0:09:39 There's this common pattern in philanthropy where a guy will become a billionaire, and then his wife will be heavily involved with or even potentially in charge of, you know, the family's philanthropic efforts. And there are many examples of this, right? Like Bill and Melinda Gates, you know, Mark Zuckerberg. Yeah, yeah, exactly. And Dustin Moskovitz. So what is the consequence of this? How is philanthropy, the causes and the foundations, how are they different because of this pattern? Nadia Asparouhova 0:10:15 Well, I mean, I feel like we see that pattern, like the problem is that what even is philanthropy is changing very quickly. So we can say historically that, not even historically, in recent history, in recent decades, that has probably been true. That wasn't true in, say, the late 1800s, early 1900s. It was, you know, Carnegie and Rockefeller were the ones that were actually doing their own philanthropy, not their spouses. So I'd say it's a more recent trend. But now I think we're also seeing this thing where like a lot of wealthy people are not necessarily doing their philanthropic activities through foundations anymore. And that's true both within like the traditional philanthropy sector and sort of like the looser definition of what we might consider to be philanthropy, depending on how you define it, which I kind of more broadly want to define as like the actions of elites that are sort of like, you know, public facing activities. But like even within sort of traditional philanthropy circles, we have like, you know, the 501(c)(3) nonprofit, which is, you know, traditionally how people, you know, house all their money in a foundation and then they do their philanthropic activities out of that. But in more recent years, we've seen this trend towards like LLCs. So Emerson Collective, I think, might have been maybe the first one to do it. And that was started by Steve Jobs' widow, Laurene Powell Jobs.
And then Mark Zuckerberg with Chan Zuckerberg Initiative also used an LLC. And then since then, a lot of other, especially within sort of like tech wealth, we've seen that move towards people using LLCs instead of 501(c)(3)s because it just gives you a lot more flexibility in the kinds of things you can fund. You don't just have to fund other nonprofits. And we also see donor advised funds, so DAFs, which are sort of this like hacky workaround to foundations as well. So I guess the point being that like this sort of mental model of like, you know, one person makes a ton of money and then their spouse kind of directs these like nice, feel-good, like philanthropic activities, I think, may not be the model that we continue to move forward on. And I'm kind of hopeful or curious to see what a return looks like, because we've had so many new people making a ton of money in the last 10 years or so, we might see this return to sort of like the Gilded Age style of philanthropy where people are not necessarily just like forming a philanthropic foundation and looking for the nicest causes to fund, but are actually just like thinking a little bit more holistically about like, how do I help build and create like a movement around a thing that I really care about? How do I think more broadly around like funding companies and nonprofits and individuals and like doing lots of different kinds of activities? Because I think like that's the broader goal that like motivates at least like the new sort of elite classes to want to do any of this stuff at all. I don't really think philanthropy is about altruism. I just, I think like the term philanthropy is just totally fraught and like refers to too many different things and it's not very helpful. But I think like the part that I'm interested in at least is sort of like what motivates elites to go from just sort of like making a lot of money and then like thinking about themselves to them thinking about sort of like their place in broader public society. And I think that starts with thinking about, how do I control, like, media, academia, government, which are sort of like the three arms of the public sector. And we think of it in that way a little bit more broadly, where it's really much more about sort of like maintaining control over your own power, more so than sort of like this like altruistic kind of, you know, whitewash. Dwarkesh Patel 0:13:41 Yeah. Nadia Asparouhova 0:13:42 Then it becomes like, you know, there's so many other like creative ways to think about like how that might happen. Dwarkesh Patel 0:13:49 That's really interesting. That's a, yeah, that's a really interesting way of thinking about what it is you're doing with philanthropy. Isn't the word noble descended from a word that basically means to give alms to people? Like, if you're in charge of them, you will give alms to them. And in a way, I mean, it might have been another word I'm thinking of, but in a way, yeah, a part of what motivates philanthropy, not obviously all of it, but part of it, is, yeah, influence and power. Not even in a necessarily negative connotation, but that's definitely part of what motivates it. So having that put squarely front and center is refreshing and honest, actually. Nadia Asparouhova 0:14:29 Yeah, I don't, I really don't see it as like a negative thing at all. And I think most of the like, you know, writing and journalism and academia that focuses on philanthropy tends to be very wealth critical.
I'm not at all, like I personally don't feel wealth critical at all. And I think like, again, sort of returning to this like mental model of like aristocratic and meritocratic elites, aristocratic elites are able to sort of like pass down, like encode what they're supposed to be doing in each generation because they have these kind of like familial ties. And I think like on the meritocratic side, like if you didn't have any sort of language around altruism or public stewardship, then like, it's like, you need to kind of create that narrative for the meritocratic elites, or else, you know, there's just like nothing to hold on to. So I think like, it makes sense to talk in those terms. Andrew Carnegie, being sort of the father of modern philanthropy in the US, like, wrote this series of essays about wealth that were like very influential and where he sort of talks about this like moral obligation. And I think like, really, it was kind of a quiet way for him to protect it. Even though it was ostensibly about sort of like giving back or, you know, helping lift up the next generation of people, the next generation of entrepreneurs, like, I think it really was much more of a protective stance of saying, like, if he doesn't frame it in this way, then people are just going to knock down the concept of wealth altogether. Dwarkesh Patel 0:15:50 Yeah, yeah, yeah. No, that's really interesting. And it's interesting in which cases this kind of influence has been successful and in which it's not. When Jeff Bezos bought the Washington Post, has there been any counterfactual impact on how the Washington Post is run as a result? I doubt it. But you know, when Musk takes over Twitter, I guess it's a much more expensive purchase. We'll see whether the influence is negative or positive. But it's certainly different than what Twitter otherwise would have been. So control over media, it's, I guess it's a bigger meme now. Let me just take a digression and ask about open source for a second. So based on your experience studying these open source projects, do you find the theory that Homer and Shakespeare were basically container words for these open source repositories that stretched out through centuries? Do you find that more plausible now, rather than them being individuals, of course, given your study of open source? Sorry, what did? Nadia Asparouhova 0:16:49 Less plausible. What did? Dwarkesh Patel 0:16:51 Oh, okay. So the idea is that they weren't just one person. It was just like a whole bunch of people throughout a bunch of centuries who composed different parts of each story or composed different stories. Nadia Asparouhova 0:17:02 The Nicolas Bourbaki model, same concept of, you know, a single mathematician who's actually comprised of like lots of different people. I think it's actually the opposite that would be sort of my conclusion. We think of open source as this very like collective volunteer effort. And I think we use that as an excuse to not really contribute back to open source or not really think about like how open source projects are maintained. Because we're like, you know, you kind of have this bystander effect where you're like, well, you know, someone's taking care of it. It's volunteer oriented. Like, of course, there's someone out there taking care of it. But in reality, it actually turns out it is just one person. So maybe it's a little bit more like a Wizard of Oz type model. It's actually just like one person behind the curtain that's like, you know, doing everything.
And you see this huge, you know, grandeur and you think there must be so many people that are behind it. It's one person. Yeah, and I think that's sort of undervalued. I think a lot of the rhetoric that we have about open source is rooted in sort of like early 2000s kind of starry-eyed ideas about like the power of the internet and the idea of like crowdsourcing and Wikipedia and all this stuff. And then like in reality, like we kind of see this convergence from like very broad based collaborative volunteer efforts to like narrowing down to kind of like single creators. And I think a lot of like, you know, single creators are the people that are really driving a lot of the internet today and a lot of cultural production. Dwarkesh Patel 0:18:21 Oh, that's super fascinating. Does that in general make you more sympathetic towards the lone genius view of accomplishments in history? Not just in literature, I guess, but just like when you think back to how likely is it that, you know, Newton came up with all that stuff on his own versus how much was fed into him by, you know, the others around him? Nadia Asparouhova 0:18:40 Yeah, I think so. I feel like I've never been a big, like, you know, great founder theory kind of person. I think, like, my true theory is, I guess, that ideas are maybe some sort of like sentient, like, concept or virus that operates outside of us. And we are just sort of like the vessels through which like ideas flow. So in that sense, you know, it's not really about any one person, but I do think I tend to lean, like, in terms of sort of like, where does creative effort come from? I do think a lot of it comes much more from like a single individual than it does from the crowds. But everything just serves like different purposes, right? Like, because I think like, within open source, it's like, not all of open source maintenance work is creative. In fact, most of it is pretty boring drudge work. And that's the stuff that no one wants to do. And that, like, one person kind of got stuck with doing, and that's really different from, like, who created a certain open source project, which is a little bit more of that, like, creative mindset. Dwarkesh Patel 0:19:44 Yeah, yeah, that's really interesting. Do you think more projects in open source, so just take a popular repository, on average, do you think that these repositories would be better off if, let's say, for a larger percentage of them, pull requests were closed and feature requests were closed? You can look at the code, but you can't interact with it or its creators in any way? Should more repositories have this model? Yeah, I definitely think so. I think a lot of people would be much happier that way. Yeah, yeah. I mean, it's interesting to think about the implications of this for other areas outside of code, right? Which is where it gets really interesting. I mean, in general, there's like a discussion. Sorry, go ahead. Yeah. Nadia Asparouhova 0:20:25 Yeah, I mean, that's basically what led to the writing of my book, because I was like, okay, I feel like whatever's happening in open source right now: you start with this idea that like democracy is great, and like, we should have tons and tons of people participating, tons of people participate, and then it turns out that like, most participation is actually just noise and not that useful. And then it ends up like scaring everyone away.
And in the end, you just have like, you know, one or a small handful of people that are actually doing all the work while everyone else is kind of like screaming around them. And this becomes like a really great metaphor for what happens in social media. And the reason I wrote, after I wrote the book, I went and worked at Substack. And, you know, part of it was because I was like, I think the model is kind of converging from like, you know, Twitter being this big open space to like, suddenly everyone is retreating, like, the public space is so hostile that everyone must retreat into like, smaller private spaces. So then, you know, chats became a thing, Substack became a thing. And yeah, I just feel sort of like realistic, right? Dwarkesh Patel 0:21:15That's really fascinating. Yeah, the Straussian message in that book is very strong. But in general, there's, when you're thinking about something like corporate governance, right? There's a big question. And I guess even more interestingly, when you think if you think DAOs are going to be a thing, and you think that we will have to reinvent corporate governance from the ground up, there's a question of, should these be run like monarchy? Should they be sort of oligarchies where the board is in control? Should they be just complete democracies where everybody gets one vote on what you do at the next, you know, shareholder meeting or something? And this book and that analysis is actually pretty interesting to think about. Like, how should corporations be run differently, if at all? What does it inform how you think the average corporation should be run? Nadia Asparouhova 0:21:59Yeah, definitely. I mean, I think we are seeing a little bit, I'm not a corporate governance expert, but I do feel like we're seeing a little of this like, backlash against, like, you know, shareholder activism and like, extreme focus on sort of like DEI and boards and things like that. And like, I think we're seeing a little bit of people starting to like take the reins and take control again, because they're like, ah, that doesn't really work so well, it turns out. I think DAOs are going to learn this hard lesson as well. It's still maybe just too early to say what is happening in DAOs right now. But at least the ones that I've looked at, it feels like there is a very common failure mode of people saying, you know, like, let's just have like, let's have this be super democratic and like, leave it to the crowd to kind of like run this thing and figure out how it works. And it turns out you actually do need a strong leader, even the beginning. And this, this is something I learned just from like, open source projects where it's like, you know, very rarely, or if at all, do you have a strong leader? If at all, do you have a project that starts sort of like leaderless and faceless? And then, you know, usually there is some strong creator, leader or influential figure that is like driving the project forward for a certain period of time. And then you can kind of get to the point when you have enough of an active community that maybe that leader takes a step back and lets other people take over. But it's not like you can do that off day one. 
And that's sort of this open question that I have for crypto as an industry more broadly, because I think like, if I think about sort of like, what is defining each of these generations of people that are, you know, pushing forward new technological paradigms, I mentioned that like the Wall Street finance mindset is very focused on like globalism and on this sort of like efficiency, quantitative mindset. You have the tech Silicon Valley, Y Combinator kind of generation that is really focused on top talent and the idea of this sort of like, you know, founder mindset, the power of like individuals breaking institutions. And then you have like the crypto mindset, which is this sort of like faceless, leaderless, like governed by protocol and by code mindset, which is like intriguing to me. But I have a really hard time squaring it with seeing, like, in some sense, open source was the experiment that started playing out, you know, 20 years before then. And some things are obviously different in crypto, because tokenization completely changes the incentive system for contributing to and maintaining crypto projects versus like traditional open source projects. But in the end, also like humans are humans. And like, I feel like there are a lot of lessons to be learned from open source of like, you know, they also started out early on as being very starry eyed about the power of like, hyper democratic regimes. And it turned out like, that just like doesn't work in practice. And so like, how is crypto going to, like, square that? I'm just, yeah, very curious to see what happens. Dwarkesh Patel 0:24:41 Yeah, super fascinating. That raises an interesting question, by the way. You've written about idea machines, and you can explain that concept while you answer this question. But do you think that movements can survive without a charismatic founder who is both alive and engaged? So once Will MacAskill dies, would you be shorting effective altruism? Or if like Tyler Cowen dies, would you be short progress studies? Or do you think that, you know, once you get a movement off the ground, it can keep going on its own? Nadia Asparouhova 0:25:08 Yeah, I think that's a good question. I mean, like, I don't think there's some perfect template, like each of these kind of has its own sort of unique quirks and characteristics in them. I guess, yeah, to back up a little bit: idea machines is this concept I have around the transition from what we were talking about before, so like traditional 501(c)(3) foundations as vehicles for philanthropy, to what the modern version of that looks like that is not necessarily encoded in an institution. And so I had this term idea machines, which is sort of this different way of thinking about like, turning ideas into outcomes, where you have a community that forms around a shared set of values and ideas. So yeah, you mentioned like progress studies is an example of that, or effective altruism is an example. Eventually, that community gets capitalized by some funders, and then it starts to be able to develop an agenda and then like, actually start building like, you know, operational outcomes and like, turning those ideas into real world initiatives. And remind me of your question again. Dwarkesh Patel 0:26:06 Yeah, so once the charismatic founder of a movement dies, is the movement basically handicapped in some way? Like, maybe it'll still be a thing, but it's never going to reach the heights it could have reached if that main guy had been around?
Nadia Asparouhova 0:26:20I think there are just like different shapes and classifications of like different, different types of communities here. So like, and I'm just thinking back again to sort of like different types of open source projects where it's not like they're like one model that fits perfectly for all of them. So I think there are some communities where it's like, yeah, I mean, I think effective altruism is maybe a good example of that where, like, the community has grown so much that I like if all their leaders were to, you know, knock on wood, disappear tomorrow or something that like, I think the movement would still keep going. There are enough true believers, like even within the community. And I think that's the next order of that community that like, I think that would just continue to grow. Whereas you have like, yeah, maybe it's certain like smaller or more nascent communities that are like, or just like communities that are much more like oriented around, like, a charismatic founder that's just like a different type where if you lose that leader, then suddenly, you know, the whole thing falls apart because they're much more like these like cults or religions. And I don't think it makes one better, better or worse. It's like the right way to do is probably like Bitcoin, where you have a charismatic leader for life because that leader is more necessarily, can't go away, can't ever die. But you still have the like, you know, North Stars and like that. Dwarkesh Patel 0:27:28Yeah. It is funny. I mean, a lot of prophets have this property of you're not really sure what they believed in. So people with different temperaments can project their own preferences onto him. Somebody like Jesus, right? It's, you know, you can be like a super left winger and believe Jesus did for everything you believe in. You can be a super right winger and believe the same. Yeah. Go ahead. Nadia Asparouhova 0:27:52I think there's value in like writing cryptically more. Like I think about like, I think Curtis Yarvin has done a really good job of this where, you know, intentionally or not, but because like his writing is so cryptic and long winded. And like, it's like the Bible where you can just kind of like pour over endlessly being like, what does this mean? What does this mean? And in a weird, you know, you're always told to write very clearly, you're told to write succinctly, but like, it's actually in a weird way, you can be much more effective by being very long winded and not obvious in what you're saying. Dwarkesh Patel 0:28:20Yes, which actually raises an interesting question that I've been wondering about. There have been movements, I guess, if I did altruism is a good example that have been focused on community building in a sort of like explicit way. And then there's other movements where they have a charismatic founder. And moreover, this guy, he doesn't really try to recruit people. I'm thinking of somebody like Peter Thiel, for example, right? He goes on, like once every year or two, he'll go on a podcast and have this like really cryptic back and forth. And then just kind of go away in a hole for a few months or a few years. And I'm curious, which one you think is more effective, given the fact that you're not really competing for votes. So absolute number of people is not what you care about. It's not clear what you care about. But you do want to have more influence among the elites who matter in like politics and tech as well. 
So anyways, which just your thoughts on those kinds of strategies, explicitly trying to community build versus just kind of projecting out there in a sort of cryptic way? Nadia Asparouhova 0:29:18Yeah, I mean, I definitely being somewhat cryptic myself. I favor the cryptic methodology. But I mean, yeah, I mean, you mentioned Peter Thiel. I think like the Thielverse is probably like the most, like one of the most influential things. In fact, that is hard. It is partly so effective, because it is hard to even define what it is or wrap your head around that you just know that sort of like, every interesting person you meet somehow has some weird connection to, you know, Peter Thiel. And it's funny. But I think this is sort of that evolution from the, you know, 5163 Foundation to the like idea machine implicit. And that is this this switch from, you know, used to start the, you know, Nadia Asparova Foundation or whatever. And it was like, you know, had your name on it. And it was all about like, what do I as a funder want to do in the world, right? And you spend all this time doing this sort of like classical, you know, research, going out into the field, talking to people and you sit and you think, okay, like, here's a strategy I'm going to pursue. And like, ultimately, it's like, very, very donor centric in this very explicit way. And so within traditional philanthropy, you're seeing this sort of like, backlash against that. In like, you know, straight up like nonprofit land, where now you're seeing the locus of power moving from being very donor centric to being sort of like community centric and people saying like, well, we don't really want the donors telling us what to do, even though it's also their money. Like, you know, instead, let's have this be driven by the community from the ground up. That's maybe like one very literal reaction against that, like having the donor as sort of the central power figure. But I think idea machines are kind of like the like, maybe like the more realistic or effective answer in that like, the donor is still like without the presence of a funder, like, community is just a community. They're just sitting around and talking about ideas of like, what could possibly happen? Like, they don't have any money to make anything happen. But like, I think like really effective funders are good at being sort of like subtle and thoughtful about like, like, you know, no one wants to see like the Peter Thiel foundation necessarily. That's just like, it's so like, not the style of how it works. But you know, you meet so many people that are being funded by the same person, like just going out and sort of aggressively like arming the rebels is a more sort of like, yeah, just like distributed decentralized way of thinking about like spreading one's power, instead of just starting a fund. Instead of just starting a foundation. Dwarkesh Patel 0:31:34Yeah, yeah. I mean, even if you look at the life of influential politicians, somebody like LBJ, or Robert Moses, it's how much of it was like calculated and how much of it was just like decades of building up favors and building up connections in a way that had no definite and clear plan, but it just you're hoping that someday you can call upon them and sort of like Godfather way. Yeah. Yeah, that's interesting. And by the way, this is also where your work on open source comes in, right? 
Like, there's this idea that in the movement, you know, everybody will come in with their ideas, and you can community build your way towards, you know, what should be funded. And, yeah, I'm inclined to believe that it's probably like a few people who have these ideas about what should be funded. And the rest of it is either just a way of like building up engagement and building up hype. Or, or I don't know, or maybe just useless, but what are your thoughts on it? Nadia Asparouhova 0:32:32You know, I decided I was like, I am like, really very much a tech startup person and not a crypto person, even though I would very much like to be fun, because I'm like, ah, this is the future. And there's so many interesting things happening. And I'm like, for the record, not at all like down in crypto, I think it is like the next big sort of movement of things that are happening. But when I really come down to like the mindset, it's like I am so in that sort of like, top talent founder, like power of the individual to break institutions mindset, like that just resonates with me so much more than the like, leaderless, faceless, like, highly participatory kind of thing. And again, like I am very open to that being true, like I maybe I'm so wrong on that. I just like, I have not yet seen evidence that that works in the world. I see a lot of rhetoric about how that could work or should work. We have this sort of like implicit belief that like, direct democracy is somehow like the greatest thing to aspire towards. But like, over and over we see evidence that like that doesn't that just like doesn't really work. It doesn't mean we have to throw out the underlying principles or values behind that. Like I still really believe in meritocracy. I really believe in like access to opportunity. I really believe in like pursuit of happiness. Like to me, those are all like very like American values. But like, I think that where that breaks is the idea that like that has to happen through these like highly participatory methods. I just like, yeah, I haven't seen really great evidence of that being that working. Dwarkesh Patel 0:33:56What does that imply about how you think about politics or at least political structures? You think it would you you elect a mayor, but like, just forget no participation. He gets to do everything he wants to do for four years and you can get rid of in four years. But until then, no community meetings. Well, what does that imply about how you think cities and states and countries should be run? Nadia Asparouhova 0:34:17Um, that's a very complicated thoughts on that. I mean, I, I think it's also like, everyone has the fantasy of when it'd be so nice if there were just one person in charge. I hate all this squabbling. It would just be so great if we could just, you know, have one person just who has exactly the views that I have and put them in charge and let them run things. That would be very nice. I just, I do also think it's unrealistic. Like, I don't think I'm, you know, maybe like modernity sounds great in theory, but in practice just doesn't like I really embrace and I think like there is no perfect governance design either in the same way that there's no perfect open source project designer or whatever else we're talking about. Um, uh, like, yeah, it really just depends like what is like, what is your population comprised of? 
There are some very small homogenous populations that can be very easily governed by like, you know, a small government or one person or whatever, because there isn't that much dissent or difference. Everyone is sort of on the same page. America is the extreme opposite in that angle. And I'm always thinking about America because like, I'm American and I love America. But like, everyone is trying to solve the governance question for America. And I think like, yeah, I don't know. I mean, we're an extremely heterogeneous population. There are a lot of competing world views. I may not agree with all the views of everyone in America, but like I also, like, I don't want just one person that represents my personal views. I would focus more like effectiveness in governance than I would like having like, you know, just one person in charge or something that like, I don't mind if someone disagrees with my views as long as they're good at what they do, if that makes sense. So I think the questions are like, how do we improve the speed at which like our government works and the efficacy with which it works? Like, I think there's so much room to be made room for improvement there versus like, I don't know how much like I really care about like changing the actual structure of our government. Dwarkesh Patel 0:36:27Interesting. Going back to open source for a second. Why do these companies release so much stuff in open source for free? And it's probably literally worth trillions of dollars of value in total. And they just release it out and free and many of them are developer tools that other developers use to build competitors for these big tech companies that are releasing these open source tools. Why did they do it? What explains it? Nadia Asparouhova 0:36:52I mean, I think it depends on the specific project, but like a lot of times, these are projects that were developed internally. It's the same reason of like, I think code and writing are not that dissimilar in this way of like, why do people spend all this time writing, like long posts or papers or whatever, and then just release them for free? Like, why not put everything behind a paywall? And I think the answer is probably still in both cases where like mindshare is a lot more interesting than, you know, your literal IP. And so, you know, you put out, you write these like long reports or you tweet or whatever, like you spend all this time creating content for free and putting it out there because you're trying to capture mindshare. Same thing with companies releasing open source projects. Like a lot of times they really want like other developers to come in and contribute to them. They want to increase their status as like an open source friendly kind of company or company or show like, you know, here's the type of code that we write internally and showing that externally. They want to like recruiting is, you know, the hardest thing for any company, right? And so being able to attract the right kinds of developers or people that, you know, might fit really well into their developer culture just matters a lot more. And they're just doing that instead of with words or doing that with code. Dwarkesh Patel 0:37:57You've talked about the need for more idea machines. You're like dissatisfied with the fact that effective altruism is a big game in town. 
Is there some idea or nascent movement where I mean, other than progress ideas, but like something where you feel like this could be a thing, but it just needs some like charismatic founder to take it to the next level? Or even if it doesn't exist yet, it just like a set of ideas around this vein is like clearly something there is going to exist. You know what I mean? Is there anything like that that you notice? Nadia Asparouhova 0:38:26I only had a couple of different possibilities in that post. Yeah, I think like the progress sort of meme is probably the largest growing contender that I would see right now. I think there's another one right now around sort of like the new right. That's not even like the best term necessarily for it, but there's sort of like a shared set of values there that are maybe starting with like politics, but like ideally spreading to like other areas of public influence. So I think like those are a couple of like the bigger movements that I see right now. And then there's like smaller stuff too. Like I mentioned, like tools for thought in that post where like that's never going to be a huge idea machine. But it's one where you have a lot of like interesting, talented people that are thinking about sort of like future of computing. And until maybe more recently, like there just hasn't been a lot of funding available and the funding is always really uneven and unpredictable. And so that's to me an example of like, you know, a smaller community that like just needs that sort of like extra influx to turn a bunch of abstract ideas into practice. But yeah, I mean, I think like, yeah, there's some like the bigger ones that I see right now. I think there is just so much more potential to do more, but I wish people would just think a little bit more creatively because, yeah, I really do think like effective altruism kind of becomes like the default option for a lot of people. Then they're kind of vaguely dissatisfied with it and they don't like think about like, well, what do I actually really care about in the world and how do I want to put that forward? Dwarkesh Patel 0:39:53Yeah, there's also the fact that effective altruism has this like very fit memeplex in the sense that it's like a polytheistic religion where if you have a cause area, then you don't have your own movement. You just have a cause area within our broader movement, right? It just like adopts your gods into our movement. Nadia Asparouhova 0:40:15Yeah, that's the same thing I see like people trying to lobby for effective altruism to care about their cause area, but then it's like you could just start a separate. Like if you can't get EA to care about, then why not just like start another one somewhere else? Dwarkesh Patel 0:40:28Yeah, so, you know, it's interesting to me that the wealth boom in Silicon Valley and then tech spheres has led to the sound growth of philanthropy, but that hasn't always been the case. Even in America, like a lot of people became billionaires after energy markets were deregulated in the 80s and the 90s. And then there wasn't, and obviously the hub of that was like the Texas area or, you know, and as far as I'm aware, there wasn't like a boom of philanthropy motivated by the ideas that people in that region had. What's different about Silicon Valley? Why are they, or do you actually think that these other places have also had their own booms of philanthropic giving? Nadia Asparouhova 0:41:11I think you're right. 
Yeah, I would make the distinction between like being wealthy is not the same as being elite or whatever other term you want to use there. And so yeah, there are definitely like pockets of what you could call, like, more local markets of wealth, like, yeah, Texas oil or energy billionaires that tend to operate kind of just more in their own sphere. And if you look at their philanthropic activity, like a lot of them will be philanthropically active, but they only really focus on their geographic area. But there's sort of this difference. And I think this is part of where the question comes from of like, you know, like what forces someone to actually like do something more public facing with their power. And I think that comes from your power being sort of like threatened. That's like one aspect I would say of that. So like tech has only really become a lot more active in the public sphere outside of startups after the tech backlash of the mid 2010s. And you can say a similar thing kind of happened with the Davos elite as well. And also for the Gilded Age cohort of wealth. And so yeah, when you have sort of, you're kind of like, you know, building in your own little world. And like, you know, we had literally like Silicon Valley where everyone was kind of like sequestered off and just thinking about startups and thinking of themselves as like, tech is essentially like an industry, just like any other sort of, you know, entertainment or whatever. And we're just kind of happy building over here. And then it was only when sort of like the Panopticon like turned its head towards tech and they had this sort of like onslaught of critiques coming from sort of like mainstream discourse where they went, oh, like what is my place in this world? And, you know, if I don't try to like defend that, then I'm going to just kind of, yeah, we're going to lose all that power. So I think that that need to sort of like defend one's power can kind of like prompt that sort of action. The other aspect I'd highlight is just like, I think a lot of elites are driven by these like technological paradigm shifts. So there's this scholar, Carlota Perez, who writes about technological revolutions and financial capital. And she identifies like a few different technological revolutions over the last, whatever, hundred plus years that like drove this cycle of, you know, a new technology is invented. People are kind of like working on it in this smaller industry sort of way. And then there is some kind of like crazy like public frenzy and then like a backlash. And then after that, then you have this sort of like focus on public institution building. But she really points out that like not all technology fits into that. Like, not all technology is a paradigm shift. Sometimes technology is just technology. And so, yeah, I think like a lot of wealth might just fall into that category. My third example, by the way, is the Koch family because you had, you know, the Koch brothers, but then like their father was actually the one who like kind of initially made their wealth, but was like very localized in sort of like how he thought about philanthropy. He had his own, like, you know, family foundation and was just sort of like doing that sort of like, you know, Texas billionaire mindset that we're talking about of, you know, I made a bunch of money. I'm going to just sort of like, yeah, do my local funder activity.
It was only the next generation of his children that then like took that wealth and started thinking about like how do we actually like move that onto like a more elite stage and thinking about like their influence in the media. But like you can see there's like two clear generations within the same family. Like one has this sort of like local wealth mindset and one of them has the more like elite wealth mindset. And yeah, you can kind of like ask yourself, why did that switch happen? But yeah, it's clearly about more than just money. It's also about intention. Dwarkesh Patel 0:44:51Yeah, that's really interesting. Well, it's interesting because there's, if you identify the current mainstream media as affiliated with like that Davos aristocratic elite, or maybe not aristocratic, but like the Davos groups. Yeah, exactly. There is a growing field of independent media, but you would not identify somebody like Joe Rogan as in the Silicon Valley sphere, right? So there is a new media. I just, I guess these startup people don't have that much influence over them yet. And they feel like, yeah. Nadia Asparouhova 0:45:27I think they're trying to like take that strategy, right? So you have like a bunch of founders like Palmer Luckey and Mark Zuckerberg and Brian Armstrong and whoever else that like will not really talk to mainstream media anymore. They will not give an interview to the New York Times, but they will go to like an individual influencer or an individual creator and they'll do an interview with them. So like when Mark Zuckerberg announced Meta, like he did not grant interviews to mainstream publications, but he went and talked to like Ben Thompson at Stratechery. And so I think it fits really well with that, like, broader mindset of like, we're not necessarily institution building. We're going to like focus on power of individuals who sort of like defy institutions. And that is kind of like an open question that I have about like, what will the long term influence of the tech elite look like? Because like, yeah, human history tells us that eventually all individual behaviors kind of get codified into institutions, right? But we're obviously living in a very different time now. And I think like the way that the Davos elite managed to like really codify and extend their influence across all these different sectors was by taking that institutional mindset and, you know, like thinking about sort of like academic institutions and media institutions, all that stuff. If the startup mindset is really inherently like anti-institution and says like, we don't want to build the next Harvard necessarily. We just want to like blow apart the concept of universities whatsoever. Or, you know, we don't want to create a new CNN or a new Fox News. We want to just like fund like individual creators to do that same sort of work, but in this very decentralized way. Like, will that work long term? I don't know. Like, is that just sort of like a temporary state that we're in right now where no one really knows what the next institutions will look like? Or is that really like an important part of this generation where like, we shouldn't be asking the question of like, how do you build a new media network? We should just be saying like, the answer is there is no media network. We just go to like all these individuals instead. Dwarkesh Patel 0:47:31Yeah, that's interesting.
What do you make of this idea that I think, let's say, that these idea machines might be limited by the fact that if you're going to start some sort of organization in them, you're very much depending on somebody who has made a lot of money independently to fund you and to grant you approval. And I just have a hard time seeing somebody who is like a Napoleon-like figure being willing long term to live under that arrangement. And so there'll just be the people who just have this desire to dominate and be recognized, who are probably pretty important to any movement you want to create. They'll just want to go off and just like build a company or something that gives them an independent footing first. And they just won't fall under any umbrella. You know what I mean? Nadia Asparouhova 0:48:27Yeah, I mean, like Dustin Moskovitz, for example, has been funding EA for a really long time and hasn't walked away necessarily. Yeah. I mean, on the flip side, you can see like SBF carried a lot of risk because, to your point, I guess, like, you know, you end up relying on this one funder, the one funder disappears and everything else kind of falls apart. I mean, I think like, I don't have any sort of like preciousness attached to the idea of like communities, you know, lasting forever. I think this is like, again, if we're trying to solve for the problem of like what did not work well about 501(c)(3) foundations for most of recent history, like part of it was that they're, you know, just meant to live on in perpetuity. Like, why do we still have like, you know, the Rockefeller Foundation, there are now actually many different Rockefeller Foundations, but like, why does that even exist? Like, why did that money not just get spent down? And actually, when John D. Rockefeller was first proposing the idea of foundations, he wanted them to be like, to have like a finite end state. So he wanted them to last only like 50 years or 100 years when he was proposing this like federal charter, but that federal charter failed. And so now we have these like state charters and foundations can just exist forever. But like, I think if we want to like improve upon this idea of like, how do we prevent like meritocratic elites from turning into aristocratic elites? How do we like, yeah, how do we actually just like try to do a lot of really interesting stuff in our lifetimes? It's like a very, it's very counterintuitive, because you think about like, leaving a legacy must mean like creating institutions or creating a foundation that lasts forever. And, you know, 200 years from now, there's still like the Nadia Asparouhova Foundation out there. But like, if I really think about it, it's like, I would almost rather just do really, really, really good, interesting work in like, 50 years or 20 years or 10 years, and have that be the legacy versus your name kind of getting, you know, submerged over a century of institutional decay and decline. So yeah, I don't, like if, you know, you have a community that lasts for maybe only 10 years or something like that, and it's funded for that amount of time, and then it kind of outlives its usefulness and it winds down or becomes less relevant. Like, I don't necessarily see it as a bad thing. Of course, like in practice, you know, nothing ever ends that neatly and that quietly. But, but yeah, I don't think that's a bad thing. Dwarkesh Patel 0:50:44Yeah, yeah. Who are some ethnographers or sociologists from a previous era that have influenced your work?
So was there somebody writing about, you know, what it was like to be in a Roman Legion? Or what it was like to work on a factory floor? And you're like, you know what, I want to do that for open source? Or I want to do that for the new tech elite? Nadia Asparouhova 0:51:02For open source, I was definitely really influenced by Jane Jacobs and Elinor Ostrom. I think both had this quality of, so yeah, Elinor Ostrom was looking at examples of common pool resources, like fisheries or forests or whatever. And just like, going and visiting them and spending a lot of time with them and then saying like, actually, I don't think tragedy of the commons is like a real thing, or it's not the only outcome that we can possibly have. And so sometimes commons can be managed, like perfectly sustainably. And it's not necessarily true that everyone just like treats them very extractively. And she just like wrote about what she saw. And same with Jane Jacobs sort of looking at cities as someone who lives in one, right? Like she didn't have any fancy credentials or anything like that. She was just like, I live in the city and I'm looking around and this idea of like, top down urban planning, where you have like someone trying to design this perfect city that like, doesn't change and doesn't yield to its people. It just seems completely unrealistic. And the style that both of them take in their writing is very, it just starts from them just like, observing what they see and then like, trying to write about it. And I just, yeah, that's, that's the style that I really want to emulate. Dwarkesh Patel 0:52:12Interesting. Nadia Asparouhova 0:52:13Yeah. I think for people to just be talking to, like, I don't know, just talking to open source developers, it turns out you can learn a lot more from that than just sitting around like thinking about what open source developers might be thinking about. But... Dwarkesh Patel 0:52:25I have this, I have had this idea of, not even for like writing it out loud, but just to understand how the world works. Just like shadowing people who are in just like a random position, they don't have to be a leader in any way, but just like a person who's the personal assistant to somebody influential, how they decide whose emails to forward, how they decide what's the priority, or somebody who's just like an accountant for a big company, right? It's just like, what is involved there? Like, what kinds of we're gonna, you know what I mean? Just like, random people, the line manager at the local factory. I just have no idea how these parts of the world work. And I just want to like, yeah, just shadow them for a day and see like, what happens there. Nadia Asparouhova 0:53:05This is really interesting, because everyone else focuses on sort of like, you know, the big name figure or whatever, but you know, who's the actual gatekeeper there? But yeah, I mean, I've definitely found like, if you just start cold emailing people and talking to them, people are often like, surprisingly, very, very open to being talked to because I don't know, like, most people do not get asked questions about what they do and how they think and stuff. So, you know, you want to realize that dream. Dwarkesh Patel 0:53:33So maybe I'm not like John Rockefeller, in that I only want my organization to last for 50 years. I'm sure you've come across these people who have this idea that, you know, I'll let my money compound for like 200 years.
And if it just compounds at some reasonable rate, I'll be, it'll be like the most wealthy institution in the world, unless somebody else has the same exact idea. If somebody wanted to do that, but they wanted to hedge for the possibility that there's a war or there's a revolt, or there's some sort of change in law that draws down this wealth. How would you set up a thousand year endowment, basically, is what I'm asking, or like a 500 year endowment? Would you just put it in like a crypto wallet with us? And just, you know what I mean? Like, how would you go about that organizationally? How would you like, that's your goal? I want to have the most influence in 500 years. Nadia Asparouhova 0:54:17Well, I'd worry much less. The question for me is not about how do I make sure that there are assets available to distribute in a thousand years? Because I don't know, just put in stock marketers. You can do some pretty boring things to just like, you know, ensure your assets grow over time. The more difficult question is, how do you ensure that whoever is deciding how to distribute the funds, distributes them in a way that you personally want them to be spent? So Ford Foundation is a really interesting example of this, where Henry Ford created a Ford Foundation shortly before he died, and just pledged a lot of Ford stock to create this foundation and was doing it basically for tax reasons, had no philanthropic. It's just like, this is what we're doing to like, house this wealth over here. And then, you know, passed away, son passed away, and grandson ended up being on the board. But the board ended up being basically like, you know, a bunch of people that Henry Ford certainly would not have ever wanted to be on his board. And so, you know, and you end up seeing like, the Ford Foundation ended up becoming huge influential. I like, I have received money from them. So it's not at all an indictment of sort of like their views or anything like that. It's just much more of like, you know, you had the intent of the original donor, and then you had like, who are all these people that like, suddenly just ended up with a giant pool of capital and then like, decided to spend it however they felt like spending it and the grandson at the time sort of like, famously resigned because he was like, really frustrated and was just like, this is not at all what my family wanted and like, basically like, kicked off the board. So anyway, so that is the question that I would like figure out if I had a thousand year endowment is like, how do I make sure that whomever manages that endowment actually shares my views? One, shares my views, but then also like, how do I even know what we need to care about in a thousand years? Because like, I don't even know what the problems are in a thousand years. And this is why like, I think like, very long term thinking can be a little bit dangerous in this way, because you're sort of like, presuming that you know what even matters then. Whereas I think like, figure out the most impactful things to do is just like, so contextually dependent on like, what is going on at the time. So I can't, I don't know. And there are also foundations where you know, the dono
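To put rough numbers on the compounding intuition behind the long-horizon endowment question above, here is a minimal sketch; the principal, the 5% return, and the horizons are illustrative assumptions, not figures from the conversation:

```python
# Illustrative only: how a patient endowment grows under constant annual compounding.
# The $1M principal and the 5% annual return are hypothetical; real endowments face
# fees, inflation, taxes, wars, and charter drift that this toy model ignores.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Future value under annual compounding: principal * (1 + annual_rate) ** years."""
    return principal * (1 + annual_rate) ** years

if __name__ == "__main__":
    for years in (50, 200, 500):
        fv = future_value(1_000_000, 0.05, years)  # $1M seed, 5% per year
        print(f"{years:>3} years: ${fv:,.0f}")
    # Roughly $11.5M after 50 years, $17.3B after 200, and about $39 quadrillion
    # after 500, which is why the "let it compound for centuries" idea keeps
    # resurfacing, and why the governance question (who decides how it gets spent)
    # matters more than the asset growth itself.
```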
Welcome to leave a comment and share your thoughts on this episode: https://open.firstory.me/user/cl81kivnk00dn01wffhwxdg2s/comments 每日英語跟讀 (Daily English Shadowing) Ep.K491: FTX's Collapse Casts a Pall on a Philanthropy Movement. In short order, the extraordinary collapse of the cryptocurrency exchange FTX has vaporized billions of dollars of customer deposits, prompted investigations by law enforcement and destroyed the fortune and reputation of the company's founder and CEO, Sam Bankman-Fried. It has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Bankman-Fried's charitable vehicles, including the FTX Future Fund, whose grant recipients' work includes pandemic preparedness and artificial intelligence safety. Through a nonprofit called Building a Stronger Future, Bankman-Fried also gave to groups including news organizations ProPublica, Vox and the Intercept. In a note to staff members Friday, ProPublica's president, Robin Sparkman, and editor-in-chief, Stephen Engelberg, wrote that the remaining two-thirds of a $5 million grant for reporting on pandemic preparedness and biothreats were on hold. "Building a Stronger Future is assessing its finances and, concurrently, talking to other funders about taking on some of its grant portfolio," they wrote. Bankman-Fried's fall from grace may have cost effective altruist causes billions of dollars in future donations. For a relatively young movement that was wrestling over its growth and focus, such a high-profile scandal implicating one of the group's most famous proponents represents a significant setback. Effective altruism focuses on the question of how individuals can do as much good as possible with the money and time available to them. In a few short years, effective altruism went from a somewhat obscure corner of charity favored by philosophy students and social workers to a leading approach to philanthropy for an increasingly powerful cohort of millennial and Gen-Z givers, including Silicon Valley programmers and hedge fund analysts. Facebook and Asana co-founder Dustin Moskovitz and his wife, Cari Tuna, have said they are devoting much of their fortune to effective altruist causes. "I don't know yet how we'll repair the damage Sam did and harden EA against other bad actors," Moskovitz wrote in a tweet Saturday. "But I know that we're going to try, because the stakes remain painfully high." Source article: https://udn.com/news/story/6904/6793867 Powered by Firstory Hosting
Nadia Asparouhova is currently researching what the new tech elite will look like at nadia.xyz. She is also the author of Working in Public: The Making and Maintenance of Open Source Software. We talk about how: * American philanthropy has changed from Rockefeller to Effective Altruism, * SBF represented the Davos elite rather than the Silicon Valley elite, * Open source software reveals the limitations of democratic participation, * & much more. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Timestamps: (0:00:00) - Intro (0:00:26) - SBF was Davos elite (0:09:38) - Gender sociology of philanthropy (0:16:30) - Was Shakespeare an open source project? (0:22:00) - Need for charismatic leaders (0:33:55) - Political reform (0:40:30) - Why didn't previous wealth booms lead to new philanthropic movements? (0:53:35) - Creating a 10,000 year endowment (0:57:27) - Why do institutions become left wing? (1:02:27) - Impact of billionaire intellectual funding (1:04:12) - Value of intellectuals (1:08:53) - Climate, AI, & Doomerism (1:18:04) - Religious philanthropy. Transcript: This transcript was autogenerated and thus may contain errors. Nadia Asparouhova 0:00:00You start with this idea that like democracy is great and like we should have tons and tons of people participating, tons of people participate, and then it turns out that like most participation is actually just noise and not that useful. That really squarely puts SBF into like the finance crowd much more so than startups or crypto. Founders will always talk about like building and like startups are like so important or whatever and like what are all of them doing in their spare time? They're like reading books. They're reading essays and like, and then those like books and essays influence how they think about stuff. Dwarkesh Patel 0:00:26Okay, today I have the pleasure of talking with Nadia Asparouhova. She is previously the author of Working in Public, the Making and Maintenance of Open Source Software and she is currently researching what the new tech elite will look like. Nadia, welcome to the podcast. Thanks for having me. Yeah, okay, so this is perfect timing obviously given what's been happening with SBF. How much do you think SBF was motivated by effective altruism? Where do you place him in the whole dimensionality of idea machines and motivations? Nadia Asparouhova 0:01:02Yeah, I mean, I know there's sort of like conflicting accounts going around. Like, I mean, just from my sort of like character study or looking at SBF, it seems pretty clear to me that he is sort of inextricably tied to the concepts of utilitarianism that then motivate effective altruism. The difference for me in sort of like where I characterize effective altruism is I think it's much closer to sort of like finance Wall Street elite mindset than it is to startup mindset, even though a lot of people associate effective altruism with tech people. So yeah, to me, like that really squarely puts SBF in sort of like the finance crowd much more so than startups or crypto. And I think that's something that gets really misunderstood about him. Dwarkesh Patel 0:01:44Interesting. Yeah, I find that interesting because if you think of Jeff Bezos, when he started Amazon, he wasn't somebody like John Perry Barlow, who was just motivated by the free philosophy of the internet. You know, he saw a graph of internet usage going up and to the right and he's like, I should build a business on top of this.
And in a sort of loopholy way, try to figure out like, what is the thing that is that is the first thing you would want to put a SQL database on top of to ship and produce? And I think that's what books was the answer. So and obviously, he also came from a hedge fund, right? Would you play somebody like him also in the old finance crowd rather than as a startup founder? Nadia Asparouhova 0:02:22Yeah, it's kind of a weird one because he's both associated with the early computing revolution, but then also AWS was sort of like what kicked off all of the 2010s sort of startup. And I think in the way that he's started thinking about his public legacy and just from sort of his public behavior, I think he fits much more squarely now in that sort of tech startup elite mindset of the 2010s crowd more so than the Davos elite crowd of the 2000s. Dwarkesh Patel 0:02:47What in specific are you referring to? Nadia Asparouhova 0:02:49Well, he's come out and been like sort of openly critical about a lot of like Davos type institutions. He kind of pokes fun at mainstream media and for not believing in him not believing in AWS. And I think he's because he sort of like spans across like both of these generations, he's been able to see the evolution of like how maybe like his earlier peers function versus the sort of second cohort of peers that he came across. But to me, he seems much more like, much more of the sort of like startup elite mindset. And I can kind of back up a little bit there. But what I associate with the Davos Wall Street kind of crowd is much more of this focus on quantitative thinking, measuring efficiency. And then also this like globalist mindset, like I think that the vision that they want to ensure for the world is this idea of like a very interconnected world where we, you know, sort of like the United Nations kind of mindset. And that is really like literally what the Davos gathering is. Whereas Bezos from his actions today feels much closer to the startup, like Y Combinator post AWS kind of mindset of founders that were really made their money by taking these non-obvious bets on talented people. So they were much less focused on credentialism. They were much more into this idea of meritocracy. I think we sort of forget like how commonplace this trope is of like, you know, the young founder in a dorm room. And that was really popularized by the 2010s cohort of the startup elite of being someone that may have like absolutely no skills, no background in industry, but can somehow sort of like turn the entire industry over on its head. And I think that was sort of like the unique insight of the tech startup crowd. And yeah, when I think about just sort of like some of the things that Bezos is doing now, it feels like she identifies with that much more strongly of being this sort of like lone cowboy or having this like one talented person with really great ideas who can sort of change the world. I think about the, what is it called? The Altos Institute or the new like science initiative that he put out where he was recruiting these like scientists from academic institutions and paying them really high salaries just to attract like the very best top scientists around the world. That's much more of that kind of mindset than it is about like putting faith in sort of like existing institutions, which is what we would see from more of like a Davos kind of mindset. Dwarkesh Patel 0:05:16Interesting. 
Do you think that in the future, like the kids of today's tech billionaires will be future aristocrats? So effective altruism will be a sort of elite aristocratic philosophy. They'll be like tomorrow's Rockefellers. Nadia Asparouhova 0:05:30Yeah, I kind of worry about that actually. I think of there as being like within the US, we were kind of lucky in that we have these two different types of elites. We have the aristocratic elites and we have meritocratic elites. Most other countries I think basically just have aristocratic elites, especially comparing like the US to Britain in this way. And so in the aristocratic model, your wealth and your power is sort of like conferred to you by previous generations. You just kind of like inherit it from your parents or your family or whomever. And the upside of that, if there is an upside, is that you get really socialized into this idea of what does it mean to be a public steward? What does it mean to think of yourself and your responsibility to the rest of society as a privileged elite person? In the US, we have this really great thing where you can kind of just, you know, we have the American dream, right? So lots of people that didn't grow up with money can break into the elite ranks by doing something that makes them really successful. And that's like a really special thing about the US. So we have this whole class of meritocratic elites who may not have aristocratic backgrounds, but ended up doing something within their lifetimes that made them successful. And so, yeah, I think it's a really cool thing. The downside of that being that you don't really get like socialized into what does it mean to have this fortune and do something interesting with your money. You don't have this sort of generational benefit that the aristocratic elites have of presiding over your land or whatever you want to call it, where you're sort of learning how to think about yourself in relation to the rest of society. And so it's much easier to just kind of like hoard your wealth or whatever. And so when you think about sort of like what are the next generations, the children of the meritocratic elites going to look like or what are they going to do, it's very easy to imagine kind of just becoming aristocratic elites in the sense of like, yeah, they're just going to like inherit the money from their families. And they haven't also really been socialized into like how to think about their role in society. And so, yeah, all the meritocratic elites eventually turn into aristocratic elites, which is where I think you start seeing this trend now towards people wanting to sort of like spend down their fortunes within their lifetime or within a set number of decades after they die because they kind of see what happened in previous generations and are like, oh, I don't want to do that. Dwarkesh Patel 0:07:41Yeah, yeah, yeah. Well, it's interesting. You mentioned that the aristocratic elites have the feel that they have the responsibility to give back, I guess, more so than the meritocratic elites. But I believe that in the U.S., the amount of people who give to philanthropy and the total amount they give is higher than in Europe, right, where they probably have a higher ratio of aristocratic elites. Wouldn't you expect the opposite if the aristocratic elites are the ones that are, you know, inculcated to give back? 
Nadia Asparouhova 0:08:11Well, I assume like most of the people that are the figures about sort of like Americans giving back is spread across like all Americans, not just the wealthiest. Dwarkesh Patel 0:08:19Yeah. So you would predict that among the top 10 percent of Americans, there's less philanthropy than the top 10 percent of Europeans? Uh, there's... Sorry, I'm not sure I understand the question. I guess, does the ratio of meritocratic to aristocratic elites change how much philanthropy there is among the elites? Nadia Asparouhova 0:08:45Yeah, I mean, like here we have much more of a culture of like even among aristocratic elites, this idea of like institution building or like large donations to like build institutions, whereas in Europe, a lot of the public institutions are created by government. And there's sort of this mentality of like private citizens don't experiment with public institutions. That's the government's job. And you see that sort of like pervasively throughout all of like European cultures. Like when we want something to change in public society, we look to government to like regulate or change it. Whereas in the U.S., it's kind of much more like choose your own adventure. And we don't really see the government as like the sole provider or shaper of public institutions. We also look at private citizens and like there's so many things that like public institutions that we have now that were not started by government, but were started by private philanthropists. And that's like a really unusual thing about the U.S. Dwarkesh Patel 0:09:39There's this common pattern in philanthropy where a guy will become a billionaire, and then his wife will be heavily involved with or even potentially in charge of, you know, the family's philanthropic efforts. And there's many examples of this, right? Like Bill and Melinda Gates, you know, Mark Zuckerberg. Yeah, yeah, exactly. And Dustin Moskovitz. So what is the consequence of this? How is philanthropy, the causes and the foundations, how are they different because of this pattern? Nadia Asparouhova 0:10:15Well, I mean, I feel like we see that pattern, like the problem is that what even is philanthropy is changing very quickly. So we can say historically that, not even historically, in recent history, in recent decades, that has probably been true. That wasn't true in say like late 1800s, early 1900s. It was, you know, Carnegie and Rockefeller were the ones that were actually doing their own philanthropy, not their spouses. So I'd say it's a more recent trend. But now I think we're also seeing this thing where like a lot of wealthy people are not necessarily doing their philanthropic activities through foundations anymore. And that's true both within like traditional philanthropy sector and sort of like the looser definition of what we might consider to be philanthropy, depending on how you define it, which I kind of more broadly want to define as like the actions of elites that are sort of like, you know, public facing activities. But like even within sort of traditional philanthropy circles, we have like, you know, the 5.1c3 nonprofit, which is, you know, traditionally how people, you know, house all their money in a foundation and then they do their philanthropic activities out of that. But in more recent years, we've seen this trend towards like LLCs. So Emerson Collective, I think, might have been maybe the first one to do it. And that was Steve Jobs' Philanthropic Foundation. 
And then Mark Zuckerberg with the Chan Zuckerberg Initiative also used an LLC. And then since then, a lot of other, especially within sort of like tech wealth, we've seen that move towards people using LLCs instead of 501(c)(3)s because it just gives you a lot more flexibility in the kinds of things you can fund. You don't just have to fund other nonprofits. And you also see donor advised funds. So DAFs, which are sort of this like hacky workaround to foundations as well. So I guess point being that like this sort of mental model of like, you know, one person makes a ton of money and then their spouse kind of directs these like nice, feel good, like philanthropic activities, I think is like, may not be the model that we continue to move forward on. And I'm kind of hopeful or curious to see like, what does a return to that look like, because we've had so many new people making a ton of money in the last 10 years or so, we might see this return to sort of like the Gilded Age style of philanthropy where people are not necessarily just like forming a philanthropic foundation and looking for the nicest causes to fund, but are actually just like thinking a little bit more holistically about like, how do I help build and create like a movement around a thing that I really care about? How do I think more broadly around like funding companies and nonprofits and individuals and like doing lots of different, different kinds of activities? Because I think like the broader goal is what motivates at least like the new sort of elite classes to want to do any of this stuff at all. I don't really think philanthropy is about altruism. I just, I think like the term philanthropy is just totally fraught and like refers to too many different things and it's not very helpful. But I think like the part that I'm interested in at least is sort of like what motivates elites to go from just sort of like making a lot of money and then like thinking about themselves to them thinking about sort of like their place in broader public society. And I think that starts with thinking about how do I control, like, media, academia, government, which are sort of like the three arms of the public sector. And we think of it in that way a little bit more broadly where it's really much more about sort of like maintaining control over your own power, more so than sort of like this like altruistic kind of, you know, whitewash. Dwarkesh Patel 0:13:41Yeah. Nadia Asparouhova 0:13:42Then it becomes like, you know, there's so many other like creative ways to think about like how that might happen. Dwarkesh Patel 0:13:49That's, that's, that's really interesting. That's a, yeah, that's a really interesting way of thinking about what it is you're doing with philanthropy. Isn't the word noble descended from a word that basically means to give alms to people, like if you're in charge of them, you will give alms to them? And in a way, I mean, it might have been another word I'm thinking of, but in a way, yeah, a part of what motivates altruism, not obviously all of it, but part of it is, yeah, influence and power. Not even in a necessarily negative connotation, but that's definitely part of what motivates altruism. So having that put square front and center is refreshing and honest, actually. Nadia Asparouhova 0:14:29Yeah, I don't, I really don't see it as like a negative thing at all. And I think most of the like, you know, writing and journalism and academia that focuses on philanthropy tends to be very wealth critical.
I'm not at all, like I personally don't feel wealth critical at all. I think like, again, sort of returning to this like mental model of like aristocratic and meritocratic elites, aristocratic elites are able to sort of like pass down, like encode what they're supposed to be doing in each generation because they have these kind of like familial ties. And I think like on the meritocratic side, like if you didn't have any sort of language around altruism or public stewardship, then like, it's like, you need to kind of create that narrative for the meritocratic elite, or else, you know, there's just like nothing to hold on to. So I think like, it makes sense to talk in those terms. Andrew Carnegie being sort of the father of modern philanthropy in the US, like, wrote this series of essays about wealth that were like very influential and where he sort of talks about this like moral obligation. And I think like, really, it was kind of this like, a quiet way for him to, even though it was ostensibly about sort of like giving back or, you know, helping lift up the next generation of people, the next generation of entrepreneurs. Like, I think it really was much more of a protective stance of saying, like, if he doesn't frame it in this way, then people are just going to knock down the concept of wealth altogether. Dwarkesh Patel 0:15:50Yeah, yeah, yeah. No, that's really interesting. And it's interesting, in which cases this kind of influence has been successful and where it's not. When Jeff Bezos bought the Washington Post, has there been any counterfactual impact on how the Washington Post has run as a result? I doubt it. But you know, when Musk takes over Twitter, I guess it's a much more expensive purchase. We'll see what the influence is, negative or positive. But it's certainly different than what Twitter otherwise would have been. So control over media, it's, I guess it's a bigger meme now. Let me just take a digression and ask about open source for a second. So based on your experience studying these open source projects, do you find the theory that Homer and Shakespeare were basically container words for these open source repositories that stretched out through centuries? Do you find that more plausible now, rather than them being individuals, of course? Do you find that more plausible now, given your, given your study of open source? Sorry, what did? Nadia Asparouhova 0:16:49Less plausible. What did? Dwarkesh Patel 0:16:51Oh, okay. So the idea is that they weren't just one person. It was just like a whole bunch of people throughout a bunch of centuries who composed different parts of each story or composed different stories. Nadia Asparouhova 0:17:02The Nicolas Bourbaki model, same concept of, you know, a single mathematician who's actually comprised of like lots of different people. I think it's actually the opposite would be sort of my conclusion. We think of open source as this very like collective volunteer effort. And I think we use that as an excuse to not really contribute back to open source or not really think about like how open source projects are maintained. Because we're like, you know, you kind of have this bystander effect where you're like, well, you know, someone's taking care of it. It's volunteer oriented. Like, of course, there's someone out there taking care of it. But in reality, it actually turns out it is just one person. So maybe it's a little bit more like a Wizard of Oz type model. It's actually just like one person behind the curtain that's like, you know, doing everything.
And you see this huge, you know, grandeur and you think there must be so many people that are behind it. It's one person. Yeah, and I think that's sort of undervalued. I think a lot of the rhetoric that we have about open source is rooted in sort of like early 2000s kind of starry eyed idea about like the power of the internet and the idea of like crowdsourcing and Wikipedia and all this stuff. And then like in reality, like we kind of see this convergence from like very broad based collaborative volunteer efforts to like narrowing down to kind of like single creators. And I think a lot of like, you know, single creators are the people that are really driving a lot of the internet today and a lot of cultural production. Dwarkesh Patel 0:18:21Oh, that's that's super fascinating. Does that in general make you more sympathetic towards the lone genius view of accomplishments in history? Not just in literature, I guess, but just like when you think back to how likely is it that, you know, Newton came up with all that stuff on his own versus how much was fed into him by, you know, the others around him? Nadia Asparouhova 0:18:40Yeah, I think so. I feel like I've never been like a big, you know, great founder theory kind of person. I think I'm like, my true theory is, I guess, that ideas are maybe some sort of like sentient, like, concept or virus that operates outside of us. And we are just sort of like the vessels through which like ideas flow. So in that sense, you know, it's not really about any one person, but I do think, I think I tend to lean, in terms of sort of like, where does creative, like, creative effort come from? I do think a lot of it comes much more from like a single individual than it does from the crowd. But everything just serves like different purposes, right? Like, because I think like, within open source, it's like, not all of open source maintenance work is creative. In fact, most of it is pretty boring drudgery. And that's the stuff that no one wants to do. And that, like, one person kind of got stuck with doing, and that's really different from like, who created a certain open source project, which is a little bit more of that, like, creative mindset. Dwarkesh Patel 0:19:44Yeah, yeah, that's really interesting. Do you think more projects in open source, so just take a popular repository, on average, do you think that these repositories would be better off if, let's say, for a larger percentage of them, pull requests were closed and feature requests were closed? You can look at the code, but you can't interact with it or its creators in any way? Should more repositories have this model? Yeah, I definitely think so. I think a lot of people would be much happier that way. Yeah, yeah. I mean, it's interesting to think about the implications of this for other areas outside of code, right? Which is where it gets really interesting. I mean, in general, there's like a discussion. Sorry, go ahead. Yeah. Nadia Asparouhova 0:20:25Yeah, I mean, that's basically what led to the writing of my book, because I was like, okay, I feel like whatever's happening in open source right now, you start with this idea that like democracy is great, and like, we should have tons and tons of people participating, tons of people participate, and then it turns out that like, most participation is actually just noise and not that useful. And then it ends up like scaring everyone away.
And in the end, you just have like, you know, one or a small handful of people that are actually doing all the work while everyone else is kind of like screaming around them. And this becomes like a really great metaphor for what happens in social media. And the reason I wrote, after I wrote the book, I went and worked at Substack. And, you know, part of it was because I was like, I think the model is kind of converging from like, you know, Twitter being this big open space to like, suddenly everyone is retreating, like, the public space is so hostile that everyone must retreat into like, smaller private spaces. So then, you know, chats became a thing, Substack became a thing. And yeah, I just feel sort of like, that's realistic, right? Dwarkesh Patel 0:21:15 That's really fascinating. Yeah, the Straussian message in that book is very strong. But in general, there's, when you're thinking about something like corporate governance, right? There's a big question. And I guess even more interestingly, when you think, if you think DAOs are going to be a thing, and you think that we will have to reinvent corporate governance from the ground up, there's a question of, should these be run like a monarchy? Should they be sort of oligarchies where the board is in control? Should they be just complete democracies where everybody gets one vote on what you do at the next, you know, shareholder meeting or something? And this book and that analysis is actually pretty interesting to think about. Like, how should corporations be run differently, if at all? How does it inform how you think the average corporation should be run? Nadia Asparouhova 0:21:59 Yeah, definitely. I mean, I think we are seeing a little bit, I'm not a corporate governance expert, but I do feel like we're seeing a little of this like, backlash against, like, you know, shareholder activism and like, extreme focus on sort of like DEI and boards and things like that. And like, I think we're seeing a little bit of people starting to like take the reins and take control again, because they're like, ah, that doesn't really work so well, it turns out. I think DAOs are going to learn this hard lesson as well. It's still maybe just too early to say what is happening in DAOs right now. But at least the ones that I've looked at, it feels like there is a very common failure mode of people saying, you know, like, let's just have like, let's have this be super democratic and like, leave it to the crowd to kind of like run this thing and figure out how it works. And it turns out you actually do need a strong leader, even in the beginning. And this, this is something I learned just from like, open source projects, where it's like, you know, very rarely, if at all, do you have a project that starts sort of like leaderless and faceless. And then, you know, usually there is some strong creator, leader or influential figure that is like driving the project forward for a certain period of time. And then you can kind of get to the point when you have enough of an active community that maybe that leader takes a step back and lets other people take over. But it's not like you can do that from day one.
And that's sort of this open question that I have for crypto as an industry more broadly, because I think like, if I think about sort of like, what is defining each of these generations of people that are, you know, pushing forward new technological paradigms, I mentioned that like Wall Street finance mindset is very focused on like globalism and on this sort of like efficiency quantitative mindset. You have the tech Silicon Valley, Y Combinator kind of generation that is really focused on top talent. And the idea of this sort of like, you know, founder mindset, the power of like individuals breaking institutions, and then you have like the crypto mindset, which is this sort of like faceless, leaderless, like governed by protocol and by code mindset, which is like intriguing to me. But I have a really hard time squaring it with, seeing like, in some sense, open source was the experiment that started playing out, you know, 20 years before then. And some things are obviously different in crypto, because tokenization completely changes the incentive system for contributing and maintaining crypto projects versus like traditional open source projects. But in the end, also like humans are humans. And like, I feel like there are a lot of lessons to be learned from open source of like, you know, they also started out early on as being very starry eyed about the power of like, hyper democratic regimes. And it turned out like, that just like doesn't work in practice. And so like, how is crypto going to sort of like square that? I'm just, yeah, very curious to see what happens. Dwarkesh Patel 0:24:41 Yeah, super fascinating. That raises an interesting question, by the way, you've written about idea machines, and you can explain that concept while you answer this question. But do you think that movements can survive without a charismatic founder who is both alive and engaged? So once Will MacAskill dies, would you be shorting effective altruism? Or if like Tyler Cowen dies, would you be shorting progress studies? Or do you think that, you know, once you get a movement off the ground, you're like, okay, I'm gonna be shorting altruism. Nadia Asparouhova 0:25:08 Yeah, I think that's a good question. I mean, like, I don't think there's some perfect template, like each of these kind of has its own sort of unique quirks and characteristics in them. I guess, yeah, back up a little bit. Idea machines is this concept I have around what the transition from, we were talking before about, so like traditional 501(c)(3) foundations as vehicles for philanthropy, what does the modern version of that look like that is not necessarily encoded in an institution? And so I had this term idea machines, which is sort of this different way of thinking about like, turning ideas into outcomes, where you have a community that forms around a shared set of values and ideas. So yeah, you mentioned like progress studies is an example of that, or effective altruism is another example. Eventually, that community gets capitalized by some funders, and then it starts to be able to develop an agenda and then like, actually start building like, you know, operational outcomes and like, turning those ideas into real world initiatives. And remind me of your question again. Dwarkesh Patel 0:26:06 Yeah, so once the charismatic founder dies of a movement, is a movement basically handicapped in some way? Like, maybe it'll still be a thing, but it's never going to reach the heights it could have reached if that main guy had been around?
Nadia Asparouhova 0:26:20 I think there are just like different shapes and classifications of like different, different types of communities here. So like, and I'm just thinking back again to sort of like different types of open source projects, where it's not like there's one model that fits perfectly for all of them. So I think there are some communities where it's like, yeah, I mean, I think effective altruism is maybe a good example of that where, like, the community has grown so much that, like, if all their leaders were to, you know, knock on wood, disappear tomorrow or something, that like, I think the movement would still keep going. There are enough true believers, like even within the community. And I think that's the next order of that community that like, I think that would just continue to grow. Whereas you have like, yeah, maybe it's certain like smaller or more nascent communities that are like, or just like communities that are much more like oriented around, like, a charismatic founder, that's just like a different type, where if you lose that leader, then suddenly, you know, the whole thing falls apart because they're much more like these like cults or religions. And I don't think it makes one better or worse. Maybe the right way to do it is probably like Bitcoin, where you have a charismatic leader for life because that leader necessarily can't go away, can't ever die. But you still have the, like, you know, North Star and all that. Dwarkesh Patel 0:27:28 Yeah. It is funny. I mean, a lot of prophets have this property of you're not really sure what they believed in. So people with different temperaments can project their own preferences onto him. Somebody like Jesus, right? It's, you know, you can be like a super left winger and believe Jesus stood for everything you believe in. You can be a super right winger and believe the same. Yeah. Go ahead. Nadia Asparouhova 0:27:52 I think there's value in like writing more cryptically. Like I think about like, I think Curtis Yarvin has done a really good job of this where, you know, intentionally or not, but because like his writing is so cryptic and long winded. And like, it's like the Bible where you can just kind of like pore over endlessly being like, what does this mean? What does this mean? And in a weird way, you know, you're always told to write very clearly, you're told to write succinctly, but like, it's actually in a weird way, you can be much more effective by being very long winded and not obvious in what you're saying. Dwarkesh Patel 0:28:20 Yes, which actually raises an interesting question that I've been wondering about. There have been movements, I guess effective altruism is a good example, that have been focused on community building in a sort of like explicit way. And then there's other movements where they have a charismatic founder. And moreover, this guy, he doesn't really try to recruit people. I'm thinking of somebody like Peter Thiel, for example, right? He goes on, like once every year or two, he'll go on a podcast and have this like really cryptic back and forth. And then just kind of go away in a hole for a few months or a few years. And I'm curious, which one you think is more effective, given the fact that you're not really competing for votes. So absolute number of people is not what you care about. It's not clear what you care about. But you do want to have more influence among the elites who matter in like politics and tech as well.
So anyways, what are your thoughts on those kinds of strategies, explicitly trying to community build versus just kind of projecting out there in a sort of cryptic way? Nadia Asparouhova 0:29:18 Yeah, I mean, definitely, being somewhat cryptic myself, I favor the cryptic methodology. But I mean, yeah, I mean, you mentioned Peter Thiel. I think like the Thielverse is probably, like, one of the most influential things. In fact, that is partly why it is so effective, because it is hard to even define what it is or wrap your head around it; you just know that, sort of like, every interesting person you meet somehow has some weird connection to, you know, Peter Thiel. And it's funny. But I think this is sort of that evolution from the, you know, 501(c)(3) foundation to the, like, idea machine. Implicit in that is this switch from, you know, you used to start the, you know, Nadia Asparouhova Foundation or whatever. And it was like, you know, had your name on it. And it was all about like, what do I as a funder want to do in the world, right? And you spend all this time doing this sort of like classical, you know, research, going out into the field, talking to people, and you sit and you think, okay, like, here's a strategy I'm going to pursue. And like, ultimately, it's like, very, very donor centric in this very explicit way. And so within traditional philanthropy, you're seeing this sort of like, backlash against that. In like, you know, straight up like nonprofit land, where now you're seeing the locus of power moving from being very donor centric to being sort of like community centric and people saying like, well, we don't really want the donors telling us what to do, even though it's also their money. Like, you know, instead, let's have this be driven by the community from the ground up. That's maybe like one very literal reaction against that, like having the donor as sort of the central power figure. But I think idea machines are kind of like the, like, maybe like the more realistic or effective answer, in that like, the donor still matters. Like, without the presence of a funder, like, a community is just a community. They're just sitting around and talking about ideas of like, what could possibly happen? Like, they don't have any money to make anything happen. But like, I think like really effective funders are good at being sort of like subtle and thoughtful about it. Like, you know, no one wants to see like the Peter Thiel foundation necessarily. That's just like, it's so like, not the style of how it works. But you know, you meet so many people that are being funded by the same person, like just going out and sort of aggressively like arming the rebels is a more sort of like, yeah, just like distributed decentralized way of thinking about like spreading one's power, instead of just starting a fund. Instead of just starting a foundation. Dwarkesh Patel 0:31:34 Yeah, yeah. I mean, even if you look at the life of influential politicians, somebody like LBJ, or Robert Moses, it's how much of it was like calculated and how much of it was just like decades of building up favors and building up connections in a way that had no definite and clear plan, but just, you're hoping that someday you can call upon them in a sort of like Godfather way. Yeah. Yeah, that's interesting. And by the way, this is also where your work on open source comes in, right?
Like, there's this idea that in the movement, you know, everybody will come in with their ideas, and you can community build your way towards, you know, what should be funded. And, yeah, I'm inclined to believe that it's probably like a few people who have these ideas about what should be funded. And the rest of it is either just a way of like building up engagement and building up hype. Or, I don't know, or maybe just useless, but what are your thoughts on it? Nadia Asparouhova 0:32:32 You know, I decided I was like, I am like, really very much a tech startup person and not a crypto person, even though I would very much like to be one, because I'm like, ah, this is the future. And there's so many interesting things happening. And I'm like, for the record, not at all, like, down on crypto, I think it is like the next big sort of movement of things that are happening. But when I really come down to like the mindset, it's like I am so in that sort of like, top talent founder, like power of the individual to break institutions mindset, like that just resonates with me so much more than the like, leaderless, faceless, like, highly participatory kind of thing. And again, like I am very open to that being true, like maybe I'm wrong on that. I just like, I have not yet seen evidence that that works in the world. I see a lot of rhetoric about how that could work or should work. We have this sort of like implicit belief that like, direct democracy is somehow like the greatest thing to aspire towards. But like, over and over we see evidence that, like, that just doesn't really work. It doesn't mean we have to throw out the underlying principles or values behind that. Like I still really believe in meritocracy. I really believe in like access to opportunity. I really believe in like pursuit of happiness. Like to me, those are all like very like American values. But like, I think that where that breaks is the idea that like that has to happen through these like highly participatory methods. I just like, yeah, I haven't seen really great evidence of that working. Dwarkesh Patel 0:33:56 What does that imply about how you think about politics or at least political structures? Do you think it should be, you elect a mayor, but like, just forget it, no participation. He gets to do everything he wants to do for four years and you can get rid of him in four years. But until then, no community meetings. Well, what does that imply about how you think cities and states and countries should be run? Nadia Asparouhova 0:34:17 Um, I have very complicated thoughts on that. I mean, I, I think it's also like, everyone has the fantasy of, it'd be so nice if there were just one person in charge. I hate all this squabbling. It would just be so great if we could just, you know, have one person just who has exactly the views that I have and put them in charge and let them run things. That would be very nice. I just, I do also think it's unrealistic. Like, I don't know, maybe like monarchy sounds great in theory, but in practice it just doesn't work. I really embrace, and I think like there is no perfect governance design either, in the same way that there's no perfect open source project design or whatever else we're talking about. Um, uh, like, yeah, it really just depends, like, what is your population comprised of?
There are some very small homogenous populations that can be very easily governed by like, you know, a small government or one person or whatever, because there isn't that much dissent or difference. Everyone is sort of on the same page. America is the extreme opposite in that regard. And I'm always thinking about America because like, I'm American and I love America. But like, everyone is trying to solve the governance question for America. And I think like, yeah, I don't know. I mean, we're an extremely heterogeneous population. There are a lot of competing world views. I may not agree with all the views of everyone in America, but like I also, like, I don't want just one person that represents my personal views. I would focus more on, like, effectiveness in governance than I would on, like, having, you know, just one person in charge or something. Like, I don't mind if someone disagrees with my views as long as they're good at what they do, if that makes sense. So I think the questions are like, how do we improve the speed at which our government works and the efficacy with which it works? Like, I think there's so much room for improvement there, versus like, I don't know how much I really care about changing the actual structure of our government. Dwarkesh Patel 0:36:27 Interesting. Going back to open source for a second. Why do these companies release so much stuff in open source for free? And it's probably literally worth trillions of dollars of value in total. And they just release it out for free, and many of them are developer tools that other developers use to build competitors for these big tech companies that are releasing these open source tools. Why do they do it? What explains it? Nadia Asparouhova 0:36:52 I mean, I think it depends on the specific project, but like a lot of times, these are projects that were developed internally. It's the same reason of like, I think code and writing are not that dissimilar in this way of like, why do people spend all this time writing, like long posts or papers or whatever, and then just release them for free? Like, why not put everything behind a paywall? And I think the answer is probably the same in both cases, where like mindshare is a lot more interesting than, you know, your literal IP. And so, you know, you put out, you write these like long reports or you tweet or whatever, like you spend all this time creating content for free and putting it out there because you're trying to capture mindshare. Same thing with companies releasing open source projects. Like a lot of times they really want like other developers to come in and contribute to them. They want to increase their status as like an open source friendly kind of company, or show like, you know, here's the type of code that we write internally and showing that externally. They want to, like, recruiting is, you know, the hardest thing for any company, right? And so being able to attract the right kinds of developers or people that, you know, might fit really well into their developer culture just matters a lot more. And they're just doing that with code instead of with words. Dwarkesh Patel 0:37:57 You've talked about the need for more idea machines. You're like dissatisfied with the fact that effective altruism is the biggest game in town.
Is there some idea or nascent movement where, I mean, other than progress studies, but like something where you feel like this could be a thing, but it just needs some like charismatic founder to take it to the next level? Or even if it doesn't exist yet, it's just like, a set of ideas in this vein is clearly something that is going to exist. You know what I mean? Is there anything like that that you notice? Nadia Asparouhova 0:38:26 I only had a couple of different possibilities in that post. Yeah, I think like the progress sort of meme is probably the largest growing contender that I would see right now. I think there's another one right now around sort of like the new right. That's not even like the best term necessarily for it, but there's sort of like a shared set of values there that are maybe starting with like politics, but like ideally spreading to like other areas of public influence. So I think like those are a couple of like the bigger movements that I see right now. And then there's like smaller stuff too. Like I mentioned, like tools for thought in that post, where like that's never going to be a huge idea machine. But it's one where you have a lot of like interesting, talented people that are thinking about sort of like the future of computing. And until maybe more recently, like there just hasn't been a lot of funding available and the funding is always really uneven and unpredictable. And so that's to me an example of like, you know, a smaller community that like just needs that sort of like extra influx to turn a bunch of abstract ideas into practice. But yeah, I mean, I think like, yeah, those are some of the bigger ones that I see right now. I think there is just so much more potential to do more, but I wish people would just think a little bit more creatively because, yeah, I really do think like effective altruism kind of becomes like the default option for a lot of people. Then they're kind of vaguely dissatisfied with it and they don't like think about like, well, what do I actually really care about in the world and how do I want to put that forward? Dwarkesh Patel 0:39:53 Yeah, there's also the fact that effective altruism has this like very fit memeplex in the sense that it's like a polytheistic religion where if you have a cause area, then you don't have your own movement. You just have a cause area within our broader movement, right? It just like adopts your gods into our movement. Nadia Asparouhova 0:40:15 Yeah, that's the same thing I see, like people trying to lobby for effective altruism to care about their cause area, but then it's like you could just start a separate one. Like if you can't get EA to care about it, then why not just like start another one somewhere else? Dwarkesh Patel 0:40:28 Yeah, so, you know, it's interesting to me that the wealth boom in Silicon Valley and the tech sphere has led to this kind of growth of philanthropy, but that hasn't always been the case. Even in America, like a lot of people became billionaires after energy markets were deregulated in the 80s and the 90s. And then there wasn't, and obviously the hub of that was like the Texas area or, you know, and as far as I'm aware, there wasn't like a boom of philanthropy motivated by the ideas that people in that region had. What's different about Silicon Valley? Why are they, or do you actually think that these other places have also had their own booms of philanthropic giving? Nadia Asparouhova 0:41:11 I think you're right.
Yeah, I would make the distinction between like being wealthy is not the same as being elite or whatever other term you want to use there. And so yeah, there are definitely like pockets of what's called like more like local markets of wealth, like, yeah, Texas oil or energy billionaires that tend to operate kind of just more in their own sphere. And a lot of, if you look at their philanthropic activity, like a lot of them will be philanthropically active, but they only really focus on their geographic area. But there's sort of this difference. And I think this is part of where it comes from, the question of like, you know, like what forces someone to actually like do something more public facing with their power. And I think that comes from your power being sort of like threatened. That's like one aspect I would say of that. So like tech has only really become a lot more active in the public sphere outside of startups after the tech backlash of the mid 2010s. And you can say a similar thing kind of happened with the Davos elite as well. And also for the Gilded Age cohort of wealth. And so yeah, when you have sort of, you're kind of like, you know, building in your own little world. And like, you know, we had literally like Silicon Valley where everyone was kind of like sequestered off and just thinking about startups and thinking of themselves as like, tech is essentially like an industry, just like any other sort of, you know, entertainment or whatever. And we're just kind of happy building over here. And then it was only when sort of like the Panopticon like turned its head towards tech and they had this sort of like onslaught of critiques coming from sort of like mainstream discourse where they went, oh, like what is my place in this world? And, you know, if I don't try to like defend that, then I'm going to just kind of, yeah, we're going to lose all that power. So I think that that need to sort of like defend one's power can kind of like prompt that sort of action. The other aspect I'd highlight is just like, I think a lot of elites are driven by these like technological paradigm shifts. So there's this scholar, Carlota Perez, who writes about technological revolutions and financial capital. And she identifies like a few different technological revolutions over the last, whatever, hundred plus years that like drove this cycle of, you know, a new technology is invented, people are kind of like working on it in this smaller industry sort of way. And then there is some kind of like crazy like public frenzy and then like a backlash. And then after that, then you have this sort of like focus on public institution building. But she really points out that like not all technology fits into that. Like, not all technology is a paradigm shift. Sometimes technology is just technology. And so, yeah, I think like a lot of wealth might just fall into that category. My third example, by the way, is the Koch family because you had, you know, the Koch brothers, but then like their father was actually the one who like kind of initially made their wealth, but was like very localized in sort of like how he thought about philanthropy. He had his own, like, you know, family foundation, and was just sort of like doing that sort of like, you know, Texas billionaire mindset that we're talking about of, you know, I made a bunch of money. I'm going to just sort of like, yeah, do my local funder activity.
It was only the next generation of his children that then like took that wealth and started thinking about like how do we actually like move that onto like a more elite stage and thinking about like their influence in the media. But like you can see there's like two clear generations within the same family. Like one has this sort of like local wealth mindset and one of them has the more like elite wealth mindset. And yeah, you can kind of like ask yourself, why did that switch happen? But yeah, it's clearly about more than just money. It's also about intention. Dwarkesh Patel 0:44:51 Yeah, that's really interesting. Well, it's interesting because there's, if you identify the current mainstream media as affiliated with like that Davos aristocratic elite, or maybe not aristocratic, but like the Davos groups. Yeah, exactly. There is a growing field of independent media, but you would not identify somebody like Joe Rogan as in the Silicon Valley sphere, right? So there is a new media. I just, I guess these startup people don't have that much influence over them yet. And they feel like, yeah. Nadia Asparouhova 0:45:27 I think they're trying to like take that strategy, right? So you have like a bunch of founders like Palmer Luckey and Mark Zuckerberg and Brian Armstrong and whoever else that like will not really talk to mainstream media anymore. They will not give an interview to the New York Times, but they will go to like an individual influencer or an individual creator and they'll do an interview with them. So like when Mark Zuckerberg announced Meta, like he did not grant interviews to mainstream publications, but he went and talked to like Ben Thompson at Stratechery. And so I think there is like, it fits really well with that, like, prevailing mindset of like, we're not necessarily institution building. We're going to like focus on the power of individuals who sort of like defy institutions. And that is kind of like an open question that I have about like, what will the long term influence of the tech elite look like? Because like, yeah, human history tells us that eventually all individual behaviors kind of get codified into institutions, right? But we're obviously living in a very different time now. And I think like the way that the Davos elite managed to like really codify and extend their influence across all these different sectors was by taking that institutional mindset and, you know, like thinking about sort of like academic institutions and media institutions, all that stuff. If the startup mindset is really inherently like anti-institution and says like, we don't want to build the next Harvard necessarily. We just want to like blow apart the concept of universities altogether. Or, you know, we don't want to create a new CNN or a new Fox News. We want to just like fund like individual creators to do that same sort of work, but in this very decentralized way. Like, will that work long term? I don't know. Like, is that just sort of like a temporary state that we're in right now where no one really knows what the next institutions will look like? Or is that really like an important part of this generation where like, we shouldn't be asking the question of like, how do you build a new media network? We should just be saying like, the answer is there is no media network. We just go to like all these individuals instead. Dwarkesh Patel 0:47:31 Yeah, that's interesting.
What do you make of this idea that I think, let's say, that these idea machines might be limited by the fact that if you're going to start some sort of organization in them, you're very much depending on somebody who has made a lot of money independently to fund you and to grant you approval. And I just have a hard time seeing somebody who is like a Napoleon-like figure being willing long term to live under that arrangement. And so there'll just be the people who have this desire to dominate and be recognized, who are probably pretty important to any movement you want to create. They'll just want to go off and just like build a company or something that gives them an independent footing first. And they just won't fall under any umbrella. You know what I mean? Nadia Asparouhova 0:48:27 Yeah, I mean, like Dustin Moskovitz, for example, has been funding EA for a really long time and hasn't walked away necessarily. Yeah. I mean, on the flip side, you can see like SBF carried a lot of risk because, to your point, I guess, like, you know, you end up relying on this one funder, the one funder disappears and everything else kind of falls apart. I mean, I think like, I don't have any sort of like preciousness attached to the idea of like communities, you know, lasting forever. I think this is like, again, if we're trying to solve for the problem of like what did not work well about 501(c)(3) foundations for most of recent history, like part of it was that they're, you know, just meant to live on in perpetuity. Like, why do we still have like, you know, the Rockefeller Foundation, there are now actually many different Rockefeller Foundations, but like, why does that even exist? Like, why did that money not just get spent down? And actually, when John D. Rockefeller was first proposing the idea of foundations, he wanted them to be like, to have like a finite end state. So he wanted them to last only like 50 years or 100 years when he was proposing this like federal charter, but that federal charter failed. And so now we have these like state charters and foundations can just exist forever. But like, I think if we want to like improve upon this idea of like, how do we prevent like meritocratic elites from turning into aristocratic elites? How do we like, yeah, how do we actually just like try to do a lot of really interesting stuff in our lifetimes? It's like a very, it's very counterintuitive, because you think about like, leaving a legacy must mean like creating institutions or creating a foundation that lasts forever. And, you know, 200 years from now, there's still like the Nadia Asparouhova Foundation out there. But like, if I really think about it, it's like, I would almost rather just do really, really, really good, interesting work in like, 50 years or 20 years or 10 years, and have that be the legacy versus your name kind of getting, you know, submerged over a century of institutional decay and decline. So yeah, I don't, like, if, you know, you have a community that maybe only lasts 10 years or something like that, and it's funded for that amount of time, and then it kind of outlives its usefulness and it winds down or becomes less relevant, like, I don't necessarily see it as a bad thing. Of course, like in practice, you know, nothing ever ends that neatly and that quietly. But, yeah, I don't think that's a bad thing. Dwarkesh Patel 0:50:44 Yeah, yeah. Who are some ethnographers or sociologists from a previous era that have influenced your work?
So was there somebody writing about, you know, what it was like to be in a Roman Legion? Or what it was like to work on a factory floor? And you're like, you know what, I want to do that for open source? Or I want to do that for the new tech elite? Nadia Asparouhova 0:51:02 For open source, I was definitely really influenced by Jane Jacobs and Elinor Ostrom. I think both had this quality of, so yeah, Elinor Ostrom was looking at examples of common pool resources, like fisheries or forests or whatever. And just like, going and visiting them and spending a lot of time with them and then saying like, actually, I don't think tragedy of the commons is like a real thing, or it's not the only outcome that we can possibly have. And so sometimes commons can be managed, like perfectly sustainably. And it's not necessarily true that everyone just like treats them very extractively. And just like wrote about what she saw. And same with Jane Jacobs sort of looking at cities as someone who lives in one, right? Like she didn't have any fancy credentials or anything like that. She was just like, I live in the city and I'm looking around and this idea of like, top down urban planning, where you have like someone trying to design this perfect city that like, doesn't change and doesn't yield to its people. It just seems completely unrealistic. And the style that both of them take in their writing is very, it just, it starts from them just like, observing what they see and then like, trying to write about it. And I just, yeah, that's, that's the style that I really want to emulate. Dwarkesh Patel 0:52:12 Interesting. Nadia Asparouhova 0:52:13 Yeah. I think for people to just be talking to like, I don't know, like Chris just like just talking to like open source developers, turns out you can learn a lot more from that than just sitting around like thinking about what open source developers might be thinking about. But... Dwarkesh Patel 0:52:25 I have this, I have had this idea of, not even for like writing it out loud, but just to understand how the world works. Just like shadowing people who are in just like a random position, they don't have to be a leader in any way, but just like a person who's the personal assistant to somebody influential, how they decide whose emails they forward, how they decide what's the priority, or somebody who's just like an accountant for a big company, right? It's just like, what is involved there? Like, what kinds of, we're gonna, you know what I mean? Just like, random people, the line manager at the local factory. I just have no idea how these parts of the world work. And I just want to like, yeah, just shadow them for a day and see like, what happens there. Nadia Asparouhova 0:53:05 This is really interesting, because everyone else focuses on sort of like, you know, the big name figure or whatever, but you know, who's the actual gatekeeper there? But yeah, I mean, I've definitely found like, if you just start cold emailing people and talking to them, people are often like, surprisingly, very, very open to being talked to, because, I don't know, like, most people do not get asked questions about what they do and how they think and stuff. So, you know, you could realize that dream. Dwarkesh Patel 0:53:33 So maybe I'm not like John Rockefeller, in that I only want my organization to last for 50 years. I'm sure you've come across these people who have this idea that, you know, I'll let my money compound for like 200 years.
And if it just compounds at some reasonable rate, I'll be, it'll be like the most wealthy institution in the world, unless somebody else has the same exact idea. If somebody wanted to do that, but they wanted to hedge for the possibility that there's a war or there's a revolt, or there's some sort of change in law that draws down this wealth. How would you set up a thousand year endowment, basically, is what I'm asking, or like a 500 year endowment? Would you just put it in like a crypto wallet with us? And just, you know what I mean? Like, how would you go about that organizationally? How would you, like, if that's your goal, I want to have the most influence in 500 years. Nadia Asparouhova 0:54:17 Well, I'd worry much less about that. The question for me is not about how do I make sure that there are assets available to distribute in a thousand years? Because, I don't know, just put it in the stock market. You can do some pretty boring things to just like, you know, ensure your assets grow over time. The more difficult question is, how do you ensure that whoever is deciding how to distribute the funds distributes them in a way that you personally want them to be spent? So the Ford Foundation is a really interesting example of this, where Henry Ford created the Ford Foundation shortly before he died, and just pledged a lot of Ford stock to create this foundation, and was doing it basically for tax reasons, had no philanthropic intent. It's just like, this is what we're doing to like, house this wealth over here. And then, you know, passed away, his son passed away, and his grandson ended up being on the board. But the board ended up being basically like, you know, a bunch of people that Henry Ford certainly would not have ever wanted to be on his board. And so, you know, and you end up seeing like, the Ford Foundation ended up becoming hugely influential. I like, I have received money from them. So it's not at all an indictment of sort of like their views or anything like that. It's just much more of like, you know, you had the intent of the original donor, and then you had like, who are all these people that like, suddenly just ended up with a giant pool of capital and then like, decided to spend it however they felt like spending it, and the grandson at the time sort of like, famously resigned because he was like, really frustrated and was just like, this is not at all what my family wanted, and like, basically walked away from the board. So anyway, so that is the question that I would like to figure out if I had a thousand year endowment, is like, how do I make sure that whoever manages that endowment actually shares my views? One, shares my views, but then also like, how do I even know what we need to care about in a thousand years? Because like, I don't even know what the problems will be in a thousand years. And this is why like, I think like, very long term thinking can be a little bit dangerous in this way, because you're sort of like, presuming that you know what even matters then. Whereas I think like, figuring out the most impactful things to do is just like, so contextually dependent on like, what is going on at the time. So I can't, I don't know. And there are also foundations where you know, the donor like, writes in the charter like, this money can only be spent on you know, X cause or whatever, but then it just becomes really awkward over time because
Please Use Facebook Correctly Facebook is an American online social media and social networking service owned by Meta Platforms. Founded in 2004 by Mark Zuckerberg with fellow Harvard College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name comes from the face book directories often given to American university students. Membership was initially limited to Harvard students, gradually expanding to other North American universities and, since 2006, anyone over 13 years old. As of 2020, Facebook claimed 2.8 billion monthly active users, and ranked seventh in global internet usage. It was the most downloaded mobile app of the 2010s. [https://en.wikipedia.org/wiki/Facebook](https://en.wikipedia.org/wiki/Facebook) [https://www.facebook.com/360533124616..](https://www.facebook.com/360533124616..). [https://youtu.be/oub9NptLujQ](https://youtu.be/oub9NptLujQ) https://anchor.fm/jack-bosma3/episodes/Please-Use-Facebook-Correctly-e1s1aq1 #facebook #meta #messenger --- Send in a voice message: https://anchor.fm/jack-bosma3/message Support this podcast: https://anchor.fm/jack-bosma3/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimating the marginal impact of outreach, published by Duncan Mcclements on November 30, 2022 on The Effective Altruism Forum. This post is an entry for the $5k challenge to quantify the impact of 80,000 hours' top career paths. One of the career paths evaluated by 80000 hours is helping to build the effective altruism movement: this is currently ranked in fifth position, behind AI technical research, AI governance, biorisk and organisation entrepreneurship. This post presents a model for the marginal number of additional individuals an individual devoted to outreach would attract, and finds with very high confidence that on the margin outreach to build effective altruism further is higher impact than working on existential risk mitigation directly. Discount rate estimation The value of outreach compared to immediately working on existential risk heavily depends on our discount rate. This is because the benefits in terms of basis points of existential risk reduced are entirely in the future for outreach (in the form of additional researchers) while immediately working on research will bring gains sooner. Two factors seem relevant for our discount rate: the probability that humanity ceases to exist before the additional researchers can have an impact and the marginal value of labour over time. The best database we are aware of for total existential risk is here: of these, the only bounded annual (or annual-equivalent assuming constant risk over time) estimates for the risk are 0.19%, 0.21%, 0.11% and 0.2%. These have a geometric mean of 0.17%, and a standard deviation of 0.046%, which will be used here. For the latter component, if constant returns to scale and variable exogeneity are assumed, a Cobb-Douglas production function can be taken, with Y as output, however defined, A as labour productivity, L as labour, K as capital and α as the capital elasticity of output: If differentiated with respect to labour, this then yields: Thus: So if capital allocated to EA is growing at a faster rate than labour (β>γ), our discount rate should be negative with respect to time: if labour is growing faster, it should be positive, making the simplifying assumption that EAs are the only group in the world producing moral value. Intuitively, this occurs because capital and labour are varying at some rates exogenously and we wish our level of capital per worker to be as close to constant over time as possible due to diminishing marginal returns to all inputs. Is capital or labour growing faster? This 80000 hours article from 2021 estimated at the time that the total capital allocation was growing by around 37% per year, and labour by 10-20%, which would imply deeply negative discount rates if these figures still held. However, the two largest components of the increase in capital allocated were primarily in the form of FTX and Facebook stock. With the former now worthless, and the latter having declined to less than one third of its value at the time of the writing of the article, the overall capital stock allocated to EA has plunged in value while the labour stock remains almost unaffected.
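The production-function equations referred to above (after the colons) appear to have been dropped when the post was converted to audio. A hedged reconstruction, assuming the standard constant-returns Cobb-Douglas form the passage describes and writing β and γ for the growth rates of capital and labour:

$$Y = A K^{\alpha} L^{1-\alpha}$$

$$\frac{\partial Y}{\partial L} = (1-\alpha)\, A\, K^{\alpha} L^{-\alpha} = (1-\alpha)\, A \left(\frac{K}{L}\right)^{\alpha}$$

$$\frac{d}{dt}\ln\!\left(\frac{\partial Y}{\partial L}\right) = \frac{\dot{A}}{A} + \alpha\,(\beta - \gamma), \qquad \beta \equiv \frac{\dot{K}}{K},\quad \gamma \equiv \frac{\dot{L}}{L}$$

On this reading, the marginal product of labour rises over time when β > γ, which is the sense in which the discount rate "should be negative with respect to time" in that case. The quoted geometric mean of the risk estimates also checks out: $(0.19 \times 0.21 \times 0.11 \times 0.20)^{1/4} \approx 0.17$ (in percent).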
Using the same methodology as the above article, noting that Dustin Moskovitz's net worth now only stands at $6.5 billion at the time of writing, reduces the increase in funding over the past 7 years to $2.95 billion nominally (from a starting level of ~$10 billion), or a real terms increase of only $0.35 billion, or 2.8%: 0.39% annually. Capital growth will be modelled as lognormal, as a heavier tail distribution feels appropriate due to the possibility of another FTX, with mean e^ln(0.39) and log standard deviation of 10%, due to the potential for capital growth of 37% absent FTX's collapse and Facebook's decline in stock value. Labour growth is considerably more stable than capital growth, but sti...
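The post is truncated here, so the full model is not visible; as an illustration only, a minimal sketch of how the two quantities described above (lognormal annual capital growth and the annual existential-risk estimate) might be simulated. The parametrisation, including reading the garbled mean as a ~0.39% annual growth factor, and all variable names are assumptions rather than the post's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Annual existential-risk estimates quoted in the post (as percentages),
# and their geometric mean, which the post reports as ~0.17%.
risk_estimates = np.array([0.19, 0.21, 0.11, 0.20]) / 100
risk_geo_mean = np.exp(np.log(risk_estimates).mean())
risk_sd = 0.046 / 100  # standard deviation stated in the post

# Illustrative draws only: capital growth modelled as lognormal around a
# ~0.39% mean annual growth factor with a 10% log standard deviation
# (an assumed reading of the garbled "mean eln(0.39)"), and annual x-risk
# as a normal truncated at zero.
capital_growth = rng.lognormal(mean=np.log(1.0039), sigma=0.10, size=n_draws)
annual_xrisk = np.clip(rng.normal(risk_geo_mean, risk_sd, size=n_draws), 0.0, None)

print(f"geometric-mean annual x-risk: {risk_geo_mean:.4%}")
print(f"median simulated capital growth factor: {np.median(capital_growth):.4f}")
```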
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some data on the stock of EA™ funding, published by NunoSempere on November 20, 2022 on The Effective Altruism Forum. Overall Open Philanthropy funding Open Philanthropy's allocation of funding through time looks as follows: Dustin Moskovitz's wealth looks, per Bloomberg, like this: If we plot the two together, we don't see that much of a correlation: Holden Karnofsky, head of Open Philanthropy, writes that the Bloomberg estimates might not be all that accurate: Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna's net worth give a substantially understated picture of our available resources. That's because, among other issues, they don't include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume) In mid 2022, Forbes put Sam Bankman-Fried's wealth at $24B. So in some sense, the amount of money allocated to or according to Effective Altruism™ peaked somewhere close to $50B. Funding flow restricted to longtermism & global catastrophic risks (GCRs) The analysis becomes a bit more interesting if we look only at longtermism and GCRs: In contrast, per Forbes, the FTX Foundation had given out $160M by September 2022. My sense is that most (say, maybe 50% to 80%) of those grants went to “longtermist” cause areas, broadly defined. In addition, SBF and other FTX employees led a $580M funding round for Anthropic. Further analysis It's unclear what would have to happen for Open Philanthropy to pick up the slack here. In practical terms, I'm not sure whether their team has enough evaluation capacity for an additional $100M/year, or whether they will choose to expand that. Two somewhat informative posts from Open Philanthropy on this are here and here. I'd be curious about both interpretative analysis and forecasting on these numbers. I am up for supporting the latter by, e.g., committing to rerunning this analysis in a year. Appendix: Code The code to produce these plots can be found here; lines 42 to 48 make the division into categories fairly apparent. To execute this code you will need a working R installation and a document named grants.csv, which can be downloaded from Open Philanthropy's website. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
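The analysis referenced above is written in R and is not reproduced in this excerpt; as an illustration only, here is a rough Python sketch of the same idea of splitting the grants into categories and plotting funding by year. The column names ("Date", "Focus Area", "Amount"), the focus-area labels, and the amount formatting are guesses, not the actual schema of Open Philanthropy's grants.csv:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the grants export (column names below are assumed, not verified).
grants = pd.read_csv("grants.csv")
grants["year"] = pd.to_datetime(grants["Date"]).dt.year

# Amounts may be exported as strings like "$1,000,000"; strip formatting first.
grants["amount_usd"] = pd.to_numeric(
    grants["Amount"].astype(str).str.replace(r"[$,]", "", regex=True),
    errors="coerce",
)

# Crude split into "longtermism & GCRs" vs. everything else (labels assumed).
longtermist_areas = {
    "Potential Risks from Advanced Artificial Intelligence",
    "Biosecurity & Pandemic Preparedness",
    "Global Catastrophic Risks",
}
grants["category"] = grants["Focus Area"].apply(
    lambda area: "Longtermism & GCRs" if area in longtermist_areas else "Other"
)

# Stacked bar chart of dollars granted per year, by category.
(grants.groupby(["year", "category"])["amount_usd"].sum()
       .unstack(fill_value=0)
       .plot(kind="bar", stacked=True, figsize=(10, 5)))
plt.ylabel("USD granted")
plt.title("Open Philanthropy grants by year (illustrative)")
plt.tight_layout()
plt.show()
```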
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deontology is not the solution, published by Peter McLaughlin on November 16, 2022 on The Effective Altruism Forum. This is a lightly-edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first since I think it's especially relevant to the kind of reflection that is currently happening in this community, and because I think it's more polished than the rest of my work-in-progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft. In the wake of revelations about FTX and Sam Bankman-Fried's behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and if the movement needs to reform itself before 'next time'. And I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little 'deontology' in Effective Altruism. As this diagnosis goes, Bankman-Fried's behaviour was partly (though not entirely) the result of attitudes that are unfortunately general among Effective Altruists, such as a too-easy willingness to violate side-constraints, too little concern with honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more 'deontological'. I'm sympathetic here: I'm an ethics guy by background, and I think it's an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement's most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think about a question, such as ‘what went wrong with FTX?'. But I think that it is completely the wrong way to think about cases where people abuse their power, like Bankman-Fried abused his. The problem with the abuse of power is not simply that having power lets you do things that fuck over other people (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get too carried away by your own vision and take these opportunities (even if they are self-defeating). This applies no matter if you are the President of the United States or if you're just asking your friend for £20; it applies even if you are purely altruistically motivated. However, morally thoughtful people tend to have good ‘intuitions' about everyday cases: it is these that common-sense morality was designed to handle. We know that it's wrong to take someone else's money and not pay it back; we know that it's typically wrong to lie solely for our own benefit; we understand that it's good to be trustworthy and honest. Indeed, in everyday contexts certain options are just entirely unthinkable. 
For example, a surgeon won't typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?'—the idea to do such a thing would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had indeed asked himself this question, even if he had concluded that he shouldn't cut you up. This kind of everyday moral reasoning is exactly what is captured by the kinds of deontological ‘side constraints' most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason why even consequentialists should be concerned with side-constraints is because you can predict ahead of time that you will face certain kinds of situations, and you know that it would be better ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some comments on recent FTX-related events, published by Holden Karnofsky on November 10, 2022 on The Effective Altruism Forum. It appears that FTX, whose principals support the FTX Foundation, is in serious trouble. We've been getting a lot of questions related to these events. I've made an attempt to get some basic points out quickly that might be helpful to people, but the situation appears to be developing quickly and I have little understanding of what's going on, so this post will necessarily be incomplete and nonauthoritative. One thing I'd like to say up front (more on how this relates to FTX below) is that Open Philanthropy remains committed to our longtermist focus areas and still expects to spend billions of dollars on them over the coming decades. We will raise the bar for our giving, and we don't know how many existing projects that will affect, but we still expect longtermist projects to grow in terms of their impact and output. Are the funds directed by Open Philanthropy invested in or otherwise exposed to FTX or related entities? No. The FTX Foundation has quickly become a major funder of many longtermist and effective altruist organizations. If it stops (or greatly reduces) funding them, how might that affect Open Philanthropy's funding practices? If the FTX Foundation stops (or greatly reduces) funding such people and organizations, then Open Philanthropy will have to consider a substantially larger set of funding opportunities than we were considering before. In this case, we will have to raise our bar for longtermist grantmaking: with more funding opportunities that we're choosing between, we'll have to fund a lower percentage of them. This means grants that we would've made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously. Does Open Philanthropy also need to raise its bar in light of general market movements (particularly the fall in META stock) and other factors? Yes: Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna's net worth give a substantially understated picture of our available resources. That's because, among other issues, they don't include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume.) Dustin and Cari still expect to spend nearly all of their resources in their lifetimes on philanthropy that aims to accomplish as much good per dollar as possible. Additionally, the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar. As stated above, we remain committed to our focus areas and still expect to spend billions of dollars on them over the coming decades. So how much might Open Philanthropy raise its bar for longtermist grantmaking, and what does this mean for today's potential grantees? We don't know yet — the news about FTX was sudden, and we're working to figure things out. It's a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. 
We hope to gain some clarity on this in the next month or so, but right now we're dealing with major new information and don't have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few. (We don't have a crisp formalization of “the bar”; instead we have general guidance to grantmakers on what sorts of requests should be generously funded vs. carefully considered vs. rejected. We need to rethink and revise this guidance.) Because of this, we are pausing mo...
Crime is rising across the United States and voters are taking notice. According to the latest polls, it's become a major election issue with midterms right around the corner. Parker Thayer at the Capital Research Center has been documenting the millions of dollars billionaires like George Soros and Dustin Moskovitz are spending on progressive candidates, particularly for district attorney. He explains the impact that's having across the country. Then, in America Q&A, we ask: Is crime an issue that motivates you to vote? Next, Rob O'Donnell, retired NYPD detective and Pipe Hitter Foundation board member, gives us law enforcement's perspective on the deep consequences after years of “defund the police” rhetoric. Finally, in our second America Q&A we ask: Do you feel you have enough time to spend with your family? ⭕️Watch in-depth videos based on Truth & Tradition at Epoch TV
Mark Elliot Zuckerberg (/ˈzʌkərbɜːrɡ/; born May 14, 1984) is an American business magnate, internet entrepreneur, and philanthropist. He is known for co-founding the social media website Facebook and its parent company Meta Platforms (formerly Facebook, Inc.), of which he is the chairman, chief executive officer, and controlling shareholder. Zuckerberg attended Harvard University, where he launched Facebook in February 2004 with his roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes. Originally limited to select college campuses, the site expanded rapidly and eventually beyond colleges, reaching one billion users by 2012. Zuckerberg took the company public in May 2012 with majority shares. In 2007, at age 23, he became the world's youngest self-made billionaire. As of 18 August 2022, Zuckerberg's net worth was $62.7 billion according to Forbes' Real Time Billionaires list. Since 2008, Time magazine has named Zuckerberg among the 100 most influential people in the world, and it named him Person of the Year in 2010. In December 2016, Zuckerberg was ranked tenth on Forbes' list of The World's Most Powerful People.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many EA Billionaires five years from now?, published by Erich Grunewald on August 20, 2022 on The Effective Altruism Forum. Dwarkesh Patel argues that "there will be many more effective altruist billionaires". He gives three reasons for thinking so: People who seek glory will be drawn to ambitious and prestigious effective altruist projects. One such project is making a ton of money in order to donate it to effective causes. Effective altruist wealth creation is a kind of default choice for "young, risk-neutral, ambitious, pro-social tech nerds", i.e. people who are likelier than usual to become very wealthy. Effective altruists are more risk-tolerant by default, since you don't get diminishing returns on larger donations the same way you do on increased personal consumption. These early-stage businesses will be able to recruit talented effective altruists, who will be unusually aligned with the business's objectives. That's because if the business is successful, even if you as an employee don't cash out personally, you're still having an impact (either because the business's profits are channelled to good causes, as with FTX, or because the business's mission is itself good, as with Wave). The post itself is kind of fuzzy on what "many" means or which time period it's concerned with, but in a follow-up comment Patel mentions having made an even-odds bet to the effect that there'll be ≥10 new effective altruist billionaires in the next five years. He also created a Manifold Markets question which puts the probability at 38% as I write this. (A similar question on whether there'll be ≥1 new, non-crypto, non-inheritance effective altruist billionaire in 2031 is currently at 79%, which seems noticeably more pessimistic.) I commend Patel for putting his money where his mouth is! Summary With (I believe) moderate assumptions and a simple model, I predict 3.5 new effective altruist billionaires in 2027. With more optimistic assumptions, I predict 6.0 new billionaires. ≥10 new effective altruist billionaires in the next five years seems improbable. I present these results and the assumptions that produced them and then speculate haphazardly. Assumptions If we want to predict how many effective altruist billionaires there will be in 2027, we should attend to base rates. As far as I know, there are five or six effective altruist billionaires right now, depending on how you count. They are Jaan Tallinn (Skype), Dustin Moskovitz (Facebook), Sam Bankman-Fried (FTX), Gary Wang (FTX) and one unknown person doing earning to give. We could also count Cari Tuna (Dustin Moskovitz's wife and cofounder of Open Philanthropy). It's possible that someone else from FTX is also an effective altruist and a billionaire. Of these, as far as I know only Sam Bankman-Fried and Gary Wang were effective altruists prior to becoming billionaires (the others never had the chance, since effective altruism wasn't a thing when they made their fortunes). William MacAskill writes: Effective altruism has done very well at raising potential funding for our top causes. This was true two years ago: GiveWell was moving hundreds of millions of dollars per year; Open Philanthropy had potential assets of $14 billion from Dustin Moskovitz and Cari Tuna. But the last two years have changed the situation considerably, even compared to that.
The primary update comes from the success of FTX: Sam Bankman-Fried has an estimated net worth of $24 billion (though bear in mind the difficulty of valuing crypto assets, and their volatility), and intends to give essentially all of it away. The other EA-aligned FTX early employees add considerably to that total. There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor. At least one person earning to give (and not related to FT...
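The style of estimate Grunewald describes (turn base rates into an expected count, then ask how likely Patel's ≥10 threshold is) reduces to a few lines of arithmetic. The sketch below is illustrative only: the cohort sizes, the per-person probabilities, and the helper names are placeholder assumptions of ours, not figures or code from the post.

import math

def expected_new_billionaires(cohorts, years=5):
    # Each cohort is (number_of_people, assumed annual probability of becoming a billionaire).
    return sum(n * p * years for n, p in cohorts)

def prob_at_least(k, lam):
    # P(X >= k) for a Poisson-distributed count X with mean lam.
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))

# Hypothetical cohorts (size, annual probability): placeholders, not the post's numbers.
moderate = [(500, 0.001), (5000, 0.0001)]
lam = expected_new_billionaires(moderate)   # 5.0 under these made-up assumptions
print(lam, prob_at_least(10, lam))          # P(>=10) is about 0.03 when the mean is 5

With an expected count anywhere in the 3.5 to 6 range the post arrives at, the probability of ten or more new billionaires stays below roughly ten percent, which is the intuition behind calling the bet improbable.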
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There will be many more effective altruist billionaires, published by Dwarkesh Patel on July 3, 2022 on The Effective Altruism Forum. Cross posted from my blog. Because of the generosity of a few billionaires, the effective altruism movement has recently come into a lot of money. The total amount of capital committed to the movement varies day to day with the crypto markets on which Sam Bankman-Fried's net worth is based. But the sum was recently estimated at 46 billion1. The movement has been trying to figure out how quickly it should give away this money. There's lots of fascinating questions you have to resolve before you can decide on a disbursement schedule2. An especially interesting one is: how many future billionaires will be effective altruists? If a new Sam Bankman-Fried and Dustin Moskovitz will join the movement every few years, then the argument for allocating money now becomes much more compelling. Future billionaires can handle future problems, but only you can fund the causes that are neglected and important today. Here are the three reasons I expect the number of EA billionaires to grow significantly. Effective altruism allows thymotic natures to achieve recognition and impact that is otherwise unavailable in the modern world. Effective altruism acts as a Schelling point for ambitious and risk-taking founders. Effective altruism creates alignment in an organization and reduces adverse selection. Thymos In The End of History and the Last Man, Fukuyama argues that the leading contender for the final form of government is capitalist liberal democracy. Capitalism is peerless in satisfying people's desires and democracy is so far the best method of affording them recognition. His greatest hesitation about the sustainability of liberal democracies is whether societies where everyone has comfortable lives and no one gets special recognition can appease the appetites of the most ambitious personalities. As he puts it: [T]he virtues and ambitions called forth by war are unlikely to find expression in liberal democracies. There will be plenty of metaphorical wars—corporate lawyers specializing in hostile takeovers who will think of themselves as sharks or gunslingers, and bond traders who imagine … that they are “masters of the universe.” … But as they sink into the soft leather of their BMWs, they will know somewhere in the back of their minds that there have been real gunslingers and masters in the world, who would feel contempt for the petty virtues required to become rich or famous in modern America. How long megalothymia will be satisfied with metaphorical wars and symbolic victories is an open question. You can always have a great late night conversation by asking, “What would Napoleon or Caesar do if he was born in modern America?” Surely the unique combinations of genes which make up the will and capacities of such men have not disappeared. But today their ambition cannot be exercised through great conquests and wars. So how is their energy redirected? The first place to look for modern Caesars or Alexanders is Silicon Valley. Napoleon's Grande Armée was basically run like a startup - extremely efficient and flexible, with quick promotion and delegation given to the most able and even quicker terminations provided to the least. Say what you will, they certainly knew how to capture bigger markets.
When Napoleon told the Austrian statesman Metternich, “You cannot stop me, I can spend 30,000 men a month,” he was simply expressing the principle of blitzscaling. It would be much as if the Uber CEO told the Lyft CEO, “You can't catch up to us, I'm willing to burn 1 billion dollars of VC cash a quarter.” There are only a few startups whose mission is so intrinsically motivating that it would satisfy the most ambitious individuals of history. Sure, building a ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript of a Twitter Discussion on EA from June 2022, published by Zvi on June 6, 2022 on LessWrong. Recently on Twitter, in response to seeing a contest announcement asking for criticism of EA, I offered some criticism of that contest's announcement. That sparked a bunch of discussion about central concepts in Effective Altruism. Those discussions ended up including Dustin Moskovitz, who showed an excellent willingness to engage and make clear how his models worked. The whole thing seems valuable enough to preserve in a form that one can navigate, hence this post. This compiles what I consider the most important and interesting parts of that discussion into post form, so it can be more easily seen and referenced, including in the medium-to-long term. There are a lot of offshoots and threads involved, so I'm using some editorial discretion to organize and filter. To create as even-handed and useful a resource as possible, I am intentionally not going to interject commentary into the conversation here beyond the bare minimum. As usual, I use screenshots for most tweets to guard against potential future deletions or suspensions, with links to key points in the threads. (As Kevin says, I did indeed mean should there.) At this point there are two important threads that follow, and one additional reply of note. Thread one, which got a bit tangled at the beginning but makes sense as one thread: Thread two, which took place the next day and went in a different direction. Link here to Ben's post, GiveWell and the problem of partial funding. Link to GiveWell blog post on giving now versus later. Dustin's “NO WE ARE FAILING” point seemed important so I highlighted it. There was also a reply from Eliezer. And this on pandemics in particular. Sarah asked about the general failure to convince Dustin's friends. These two notes branch off of Ben's comment that covers-all-of-EA didn't make sense. Ben also disagreed with the math that there was lots of opportunity, linking to his post A Drowning Child is Hard to Find. This thread responds to Dustin's claim that you need to know details about the upgrade to the laptop further up the main thread; I found it worthwhile but did not include it directly for reasons of length. This came in response to Dustin's challenge on whether info was 10x better. After the main part of thread two, there was a different discussion about pressures perhaps being placed on students to be performative, which I found interesting but am not including for length. This response to the original Tweet is worth noting as well. Again, thanks to everyone involved and sorry if I missed your contribution. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript of Twitter Discussion on EA from June 2022, published by Zvi on June 6, 2022 on The Effective Altruism Forum. (Poster's note: Given subject matter I am posting an additional copy here in the EA Forum. The theoretically canonical copy of this post is on my Substack and I also post to Wordpress and LessWrong.) Recently on Twitter, in response to seeing a contest announcement asking for criticism of EA, I offered some criticism of that contest's announcement. That sparked a bunch of discussion about central concepts in Effective Altruism. Those discussions ended up including Dustin Moskovitz, who showed an excellent willingness to engage and make clear how his models worked. The whole thing seems valuable enough to preserve in a form that one can navigate, hence this post. This compiles what I consider the most important and interesting parts of that discussion into post form, so it can be more easily seen and referenced, including in the medium-to-long term. There are a lot of offshoots and threads involved, so I'm using some editorial discretion to organize and filter. To create as even-handed and useful a resource as possible, I am intentionally not going to interject commentary into the conversation here beyond the bare minimum. As usual, I use screenshots for most tweets to guard against potential future deletions or suspensions, with links to key points in the threads. (As Kevin says, I did indeed mean should there.) At this point there are two important threads that follow, and one additional reply of note. Thread one, which got a bit tangled at the beginning but makes sense as one thread: Thread two, which took place the next day and went in a different direction. Link here to Ben's post, GiveWell and the problem of partial funding. Link to GiveWell blog post on giving now versus later. Dustin's “NO WE ARE FAILING” point seemed important so I highlighted it. There was also a reply from Eliezer. And this on pandemics in particular. Sarah asked about the general failure to convince Dustin's friends. These two notes branch off of Ben's comment that covers-all-of-EA didn't make sense. Ben also disagreed with the math that there was lots of opportunity, linking to his post A Drowning Child is Hard to Find. This thread responds to Dustin's claim that you need to know details about the upgrade to the laptop further up the main thread; I found it worthwhile but did not include it directly for reasons of length. This came in response to Dustin's challenge on whether info was 10x better. After the main part of thread two, there was a different discussion about pressures perhaps being placed on students to be performative, which I found interesting but am not including for length. This response to the original Tweet is worth noting as well. Again, thanks to everyone involved and sorry if I missed your contribution. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free-spending EA might be a big problem for optics and epistemics, published by George Rosenfeld on April 12, 2022 on The Effective Altruism Forum. NB: I think EA spending is probably a very good thing overall and I'm not confident my concerns necessarily warrant changing much. But I think it's important to be aware of the specific ways this can go wrong and hopefully identify mitigations. Thanks to Marka Ellertson, Joe Benton, Andrew Garber, Dewi Erwan, Joshua Monrad and Jake Mendel for their input. Summary The influx of EA funding is brilliant news, but it has also left many EAs feeling uncomfortable. I share this feeling of discomfort and propose two concrete concerns which I have recently come across. Optics: EA spending is often perceived as wasteful and self-serving, creating a problematic image which could lead to external criticism, outreach issues and selection effects. Epistemics: Generous funding has provided extrinsic incentives for being EA/longtermist which are exciting but also significantly increase the risks of motivated reasoning and make the movement more reliant on the judgement of a small number of grantmakers. I don't really know what to do about this (especially since it's overall very positive), so I give a few uncertain suggestions but mainly hope that others will have ideas and that this will at least serve as a call to vigilance in the midst of funding excitement. Introduction In recent years, the EA movement has received an influx of funding. Most notably, Dustin Moskovitz, Cari Tuna and Sam Bankman-Fried have each pledged billions of dollars, such that funding is more widely available and deployed. This influx of funding has completely changed the game. First and foremost, it is wonderful news for those of us who care deeply about doing the most good and tackling the huge problems which we have been discussing for years. It should accelerate our progress significantly and I am very grateful that this is the case. But it has also had a drastic effect on the culture of the movement which may have unfortunate consequences. A few years ago, I remember EA meet-ups where we'd be united by our discomfort towards spending money in fancy restaurants because of the difference it could make if donated to effective charities. Now, EA chapters will pay for weekly restaurant dinners to incentivise discussion and engagement. Many of my early EA friends also found it difficult to spend money on holidays. Now, we are told that one of the most impactful things university groups can do is host an all-expenses-paid retreat for their students. I should emphasise here that I think these expenditures are probably good ideas which can be justified by the counterfactual engagement which they facilitate. These should probably continue to happen, however uncomfortable they make us feel. But the fact that these decisions can be justified on one level doesn't mean that they don't also cause concrete problems which we should think about and mitigate. Big Spending as an Optics Issue Over the past few months, I've heard critical comments about a range of spending decisions. Several people asked me whether it was really a good use of EA money to pay for my transatlantic flights for EAG. 
Others challenged whether EAs seriously claim that the most effective way to spend money is to send privileged university students to an AirBNB for the weekend. And that's before they hear about the Bahamas visitor programme. In fact, I have recently found myself responding to spending objections more often than the standard substantive ones (e.g. what about my favourite charity?, can you really compare charities with each other?, what about systemic issues?). I am not contesting here whether these programmes are worth the money. My own view is that most of them probably are and I try to ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Future Fund and Longtermism, published by rhys lindmark on March 17, 2022 on The Effective Altruism Forum. This is a linkpost. Warning: Lots of napkin math below. Lending y'all an Idea That Is Not Yet Fully Formed™. But wanted to share so you get a rough map of longtermist funding. My org is writing a grant application for FTX Future Fund's first grant round. (You should too! Apply by March 21.) As part of that, I wanted to research how important FTX Future Fund is for the longtermist ecosystem more generally. In summary: It's quite important! Let's learn why. I. EA Funding Right Now First, let's look at EA funding over time. Of all Effective Altruist (EA) funding, 20% comes from GiveWell and 60% comes from Open Philanthropy (Open Phil). In 2019, here's how much each org processed: What about GiveWell's giving over time? Their graph is below. They processed only $2M per year in the 2000s, then started to grow from $10M to $100M per year throughout the 2010s. (This doesn't include Open Phil.) And here's Open Phil's estimate of how much they've given per year: So, taking GiveWell and Open Phil together, here's how much EA money has been given per year throughout the 2020s: $400M, not bad. But this is actually going to ramp up a bunch in the coming few years. Open Phil only regranted $100M to GiveWell in 2020, but they plan to grant GiveWell $300M in 2021, $500M in 2022, and $500M again in 2023. So how much will Open Phil be granting total? Based on 2021 data, GiveWell granting is roughly 50% of Open Phil's budget: So by increasing their 2022/2023 GiveWell giving to $500M, we'd roughly expect Open Phil to give $1B by that time: GiveWell itself wants to direct $1B by 2025. If we take all of these together ($$ from Open Phil to GiveWell, $$ from Open Phil to not-GiveWell, $$ to GiveWell from not-Open Phil, and other grantmaking), the growth of EA giving into 2025 looks like this: In other words, we're just at the start of EA funders giving a lot more money. Still, most of EA granting lies with Open Phil and GiveWell. And much of that is still in Global Health. ...Until now! II. FTX Future Fund and Longtermism Meanwhile, Sam Bankman-Fried has been making magic internet money. He's starting to give it back, mostly towards longtermism. How much of an impact is it having? We can start by looking at how much money is in longtermism now. Let's start with Ben Todd's excellent overview of 2019 EA granting categories, which I've slightly modified. As you can see, longtermism (in red) is roughly 30% of all EA giving. In 2021, it was roughly 15% of Open Phil giving. So, assuming roughly 20% of Open Phil's giving is longtermist, and assuming other longtermist donors are roughly 20% of Open Phil's longtermist giving, here's what longtermist giving looks like until now: This is good! It's a reflection of the EA ecosystem accounting for the idea that ~future lives matter. But FTX Future Fund is about to drastically increase it even more. They're trying to give $100M in 2022 alone. Here's what the graph will look like going forward: That's a big yellow jump! It makes longtermist giving look like this for 2022: But even this assumes that Open Phil is going to 2x their longtermist grantmaking in a similar fashion as they're pumping money into GiveWell.
If they keep their longtermist grantmaking at current levels, around $100M, the 2022 pie chart looks like this: So, yes, the FTX Future Fund is a big deal for the longtermist funding ecosystem. The EA funding ecosystem has had a shift. Dustin Moskovitz was Web2 Facebook money. SBF is Web3 FTX money. This means we should add a new player, FTX (in green!), to our overall EA giving graph below. Hope this helps give context to FTX's longtermist grantmaking. Thanks for reading and don't forget to apply for that sweet sweet cash from FTX Future...
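The napkin math in that episode reduces to a handful of multiplications. A rough reconstruction in Python follows, using the percentages and dollar figures quoted above (they are the post's own stated assumptions, in millions of USD); the variable names are ours.

# Rough reconstruction of the episode's napkin math; all figures in millions of USD.
openphil_to_givewell_2022 = 500           # planned Open Phil grants to GiveWell in 2022
givewell_share_of_openphil = 0.5          # GiveWell is roughly 50% of Open Phil's budget
openphil_total = openphil_to_givewell_2022 / givewell_share_of_openphil   # roughly 1,000

longtermist_share = 0.2                   # roughly 20% of Open Phil giving assumed longtermist
openphil_longtermist = openphil_total * longtermist_share                 # roughly 200

other_longtermist = 0.2 * openphil_longtermist   # other donors ~20% of Open Phil's longtermist giving
ftx_future_fund = 100                            # FTX Future Fund's stated 2022 target

total_2022 = openphil_longtermist + other_longtermist + ftx_future_fund
print(total_2022, ftx_future_fund / total_2022)  # ~340 total; FTX is roughly 30% of it

Under the more conservative branch the post also mentions, where Open Phil holds longtermist grantmaking near $100M, the same arithmetic makes FTX closer to 45% of the 2022 total.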
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new media outlet focused on philanthropy, published by teddyschleifer on March 12, 2022 on The Effective Altruism Forum. Hi all. Longtime reader, first-time poster. I'm probably delinquent in making this pitch, but some EA friends of mine encouraged me to write in under the logic that if you're interested in effective altruism, you may find what we're building at Puck, our new publication, to be of value. Puck is a new investor-backed media outlet focused on the inside conversation in Silicon Valley, Hollywood, Wall Street and Washington, and I am one of the founding partners over there. I came to Puck after about five years covering donors and their work in politics and philanthropy (first at CNN, and then at Vox); at Puck, I'm continuing to write about the world of Silicon Valley wealth, trying to report out a specific question at the heart of the Forum — how can people with resources do the most good for the world? I describe myself as EA-sympathetic, although I find myself driven primarily by a desire to equip the public with new information about a philanthropic sector that is not as transparent as many of us would like. Over the last few months, for instance, we've written about a lot of EA topics — such as the pandemic preparedness efforts of Sam Bankman-Fried; the political activities of Dustin Moskovitz and Open Phil; and broken news about lots of the big philanthropists, such as MacKenzie Scott, Peter Thiel and Laurene Powell Jobs. A particular area of interest of mine, and perhaps to some of you, is the collision between EA and Democratic politics, or how Democratic donors can bring EA principles to campaigns and outside groups. Anyway, you'll see that the links above are paywalled in pursuit of building a new media model, but you can read one article for free by trading an email address, and I'd be happy to email you any full story you're interested in (teddy@puck.news). If you'd like to sign up to receive this reporting straight in your inbox, you can enter your address here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
So UATX happened. Dear God I wish stuff would stop happening. Thankfully we got Kristin Rawls and Jeff Eaton from Christian Rightcast to come back to help us talk about it. * Content Warnings. Podcast Notes: Please consider donating to help us make the show and stay independent. Patrons get exclusive access to one full extra episode a month. Daniel's Patreon: https://www.patreon.com/danielharper Jack's Patreon: https://www.patreon.com/user?u=4196618 IDSG Twitter: https://twitter.com/idsgpod Daniel's Twitter: @danieleharper Jack's Twitter: @_Jack_Graham_ IDSG on Apple Podcasts: https://podcasts.apple.com/us/podcast/i-dont-speak-german/id1449848509?ls=1 Christian Rightcast https://rightcast.substack.com/ Christian Rightcast on Twitter https://twitter.com/crightcast?lang=en Kristin's Twitter https://twitter.com/kristinrawls Jeff's Twitter https://twitter.com/eaton Show Notes: Pano Kanelos at Bari Weiss's Substack: We Can't Wait For Universities to Fix Themselves. So We're Starting a New One. The numbers tell the story as well as any anecdote you've read in the headlines or heard within your own circles. Nearly a quarter of American academics in the social sciences or humanities endorse ousting a colleague for having a wrong opinion about hot-button issues such as immigration or gender differences. Over a third of conservative academics and PhD students say they had been threatened with disciplinary action for their views. Four out of five American PhD students are willing to discriminate against right-leaning scholars, according to a report by the Center for the Study of Partisanship and Ideology. The picture among undergraduates is even bleaker. In Heterodox Academy's 2020 Campus Expression Survey, 62% of sampled college students agreed that the climate on their campus prevented students from saying things they believe. Nearly 70% of students favor reporting professors if the professor says something students find offensive, according to a Challey Institute for Global Innovation survey. The Foundation for Individual Rights in Education reports at least 491 disinvitation campaigns since 2000. Roughly half were successful. University of Austin Wikipedia The University of Austin website Under "Our principles" Universities devoted to the unfettered pursuit of truth are the cornerstone of a free and flourishing democratic society. For universities to serve their purpose, they must be fully committed to freedom of inquiry, freedom of conscience, and civil discourse. In order to maintain these principles, UATX will be fiercely independent—financially, intellectually, and politically. Under "What Makes UATX Different" A COMMITMENT TO FREEDOM OF INQUIRY We're reclaiming a place in higher education for freedom of inquiry and civil discourse. Our students and faculty will confront the most vexing questions of human life and civil society. We will create a community of conversation grounded in intellectual humility that respects the dignity of each individual and cultivates a passion for truth. A NEW FINANCIAL MODEL We're completely rethinking how a university operates by developing a novel financial model. We will lower tuition by avoiding costly administrative excess and overreach. We will focus our resources intensively on academics, rather than amenities. We will align institutional incentives with student outcomes.
AN INNOVATIVE CURRICULUM Our curriculum is being designed in partnership not only with the world's great thinkers but also with its great doers—visionaries who have founded bold ventures, artists and writers of the highest order, pioneers in tech, and the leading lights in engineering and the natural sciences. Students will apply their foundational skills to practical problems in fields such as entrepreneurship, public policy, education, and engineering. Under "Programs" Beginning Summer 2022 The Forbidden Courses Our Forbidden Courses summer program invites top students from other universities to join us for a spirited discussion about the most provocative questions that often lead to censorship or self-censorship in many universities. Students will become proficient and comfortable with productive disagreement. Instructors will range from top professors to accomplished business leaders, journalists, and artists. Beginning Fall 2022 ENTREPRENEURSHIP AND LEADERSHIP MA PROGRAM The primary purpose of most conventional business programs is to credential large cohorts of passive learners with a lowest-common-denominator curriculum comprised of the most abstract principles of accounting, finance, management, and organizational leadership. In this 12-month program, UATX will recruit elite students from top schools, teach them the classical principles of leadership and market foundations, and then embed them into a network of successful technologists, entrepreneurs, venture capitalists, and public-policy reformers. Students will then actively apply their learning to the most urgent and seemingly intractable problems facing our society, both in the private and public sectors. Under "Frequently Asked Questions" Do you have a physical location? Our headquarters are located in central Austin: 2112 Rio Grande Street, Austin, TX 78705 RashChapman, Attorneys at Law Texas Tribune, The new University of Austin hopes to counter what its founders say is a culture of censorship at most colleges They also haven't officially received nonprofit status from the federal government. They are using Cicero Research, which is run by Austin-based tech investor and advisory board member Joe Lonsdale, as a temporary nonprofit sponsor. According to the 2020 tax filing for Cicero Research, its mission is to “create and distribute non-partisan documents recommending free-market based solutions to public policy issues,” and “produce and distribute non-partisan educational materials about the importance of preserving Texan policies, values and history.” Lonsdale tweet about Pete Buttigieg's paternity leave Great for fathers to spend time w their kids and support moms, but any man in an important position who takes 6 months of leave for a newborn is a loser. In the old days men had babies and worked harder to provide for their future - that's the correct masculine response. US tech investor Joe Lonsdale refuses to apologise for ‘loser' paternity tweet Why Palantir co-founder Joe Lonsdale is leaving Silicon Valley Lonsdale said that 10 to 15 of his firm's 45 employees are likely to join him in the Austin area. He's also moving his policy organization, Cicero Institute, to the city. Among 8VC's better-known investments to date are Palmer Luckey's start-up, Anduril, which is building a virtual border wall, and Dustin Moskovitz's software company, Asana, which went public in September. He's also putting a lot of money in health-care companies like insurance provider Oscar Health and men's health company Hims.
On the 49th episode of ZemachFM, we are taking a look at Facebook’s company profile. We will be discussing the history of Facebook and how Facebook was founded by the five original founders: Mark Zuckerberg, Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes. We also take a look at what products and services Facebook is shipping currently. Episode Timeline 02:22 Episode Introduction 03:30 How accurate was the movie, The Social Network 05:05 How Facebook was started 06:20 The contribution of Facemash to the development of Facebook 09:30 The Facebook of Harvard 10:20 Early founders of Facebook 12:04 Expansion of Facebook to other colleges 13:00 The involvement of Sean Parker 14:20 The move to the name Facebook 15:45 The Winklevoss brothers 16:55 The deposition with Eduardo Saverin 18:55 The Sean Parker drug incident 21:00 Recent incidents caused by Facebook 22:32 The Cambridge Analytica incident 25:25 Facebook acquisitions 27:45 The WhatsApp acquisition 28:35 Current products and services from Facebook 32:49 Facebook and Ray-Ban collaboration 34:00 AR and VR at Facebook 34:36 Facebook and the software development environment Contact the hosts Henok Tsegaye Twitter Instagram LinkedIn Abdulhadmid Oumer Twitter Instagram LinkedIn Follow Zemach FM and give us your comments
Tiny Capital Co-founder Andrew Wilkinson calls into the show to discuss his approach to buying wonderful internet businesses, taking WeCommerce Holdings public on the Toronto Stock Exchange, and what he thinks the future could be for crypto and NFTs. Andrew is Co-founder of Tiny, a technology holding company using a simple one-month acquisition process, then leaving the companies alone to do their thing. Home to companies like Dribbble, Flow, Creative Market, and Unicorn Hunt. 00:00 - Subscribe for Tech & Business News Daily 00:58 - Bringing WeCommerce Public 06:06 - RTOs and SPACs 09:03 - Quick Deal Making Process 11:25 - Losing $10 Million vs Asana 17:34 - Profitable and Sustainable Tech Businesses 20:39 - Staying Under the Radar 21:49 - Meeting w/ Dustin Moskovitz 24:51 - Evaluating Potential Acquisitions 26:39 - Crypto Aligning Users and Owners 30:23 - Would You Issue a Dribbble Coin? 33:11 - Altcoins vs Marketable Securities 37:51 - Crypto, a Feature or a Product? 38:59 - BitClout 41:09 - What's Interesting About NFTs 42:19 - Impossible to Regulate? 43:42 - Next Trends Andrew is Watching 45:36 - Closing Remarks Originally Aired: 05/18/21 #VentureCapital #eCommerce #Investments
According to The Center for Responsive Politics, the 2020 elections cost a combined $14.4 billion. That money has to come from somewhere, and while the last cycle saw an unprecedented number of small donors, a lot of those dollars came from billionaires. Amanda and Faiz talk with Teddy Schleifer, a senior reporter covering money and influence (or billionaires) for Recode. They discuss how tech billionaires with relatively little political experience, like Reid Hoffman and Dustin Moskovitz, spent gobs of money trying to get Democrats elected; whether it actually helped; and whether Democrats should partner with billionaires in the first place. Learn more about your ad-choices at https://www.iheartpodcastnetwork.com See acast.com/privacy for privacy and opt-out information.
Teddy Schleifer is a journalist for Vox’s Recode, covering what billionaires in Silicon Valley are doing with their money. Teddy works for Vox as a journalist and hosts the podcast Recode Daily. He’s active on Twitter here: https://twitter.com/teddyschleifer He says his reason for covering billionaires is that “just as reporters cover poverty in America, reporters must also cover and uncover wealth in America - offering the scrutiny that informs essential debates about income inequality, money in politics, and the role of private philanthropy. If we don’t have a common set of facts about how the wealthiest people in society spend their money or live their lives, then we are just shooting in the dark - arguing based on press releases, unfounded suspicions and our set-in-stone prior beliefs.” Teddy would love to hear from you with hot tips about badly behaved foundations or billionaires! Or anything else alarming in philanthropy. Hit him up. References: During the pandemic, the number of billionaires spiked by 30%. Around the time of this podcast recording in Spring 2021, there was a record high of 2,755 billionaires. 86% of those billionaires are richer than they were a year ago. We discussed the relative wealth of MacKenzie Scott, who has given $5.8B (the single largest gift in the history of the US) but still gained in wealth in 2020. DAFs: As Teddy explains, a Donor Advised Fund essentially serves as a place to set money aside for charity, “and the money then goes to charity later. It could be much later.” (It could also be never, keeping power in the hands of family members.) There is over $140B sitting in Donor Advised Funds in the US. Of the billionaires he covers, he recently spoke with one who was relatively surprised to see how he could influence the presidential election with his donations. This is where Silicon Valley is. Larry Page is listed as one of the billionaires ($80-$100B net worth on a given day) who places the minimum 5% payout from his foundation into a DAF, literally with no benefit to society at this time. He’s the worst-case scenario. He’s a cofounder of Google. Teddy warns us that we have very little information about what billionaires are doing with their money, and that this lack of information is getting even worse! Effective Altruist Movement: Dustin Moskovitz, co-founder of Facebook, “...is probably the most prominent billionaire philanthropist in effective altruism, and he's in his thirties, but he has a clear point of view on what he wants to do with the money and is working on it.” Here is the interview with young billionaire Sam Bankman-Fried, who talks about influencing the Biden election. Stephanie Ellis-Smith of Philanthropy NW/The Giving Practice in Seattle was listed as a philanthropy consultant who talks about “analysis paralysis”. Jack Dorsey has a new charitable effort called Start Small, which is taking a lot of criticism because of its gifts to other celebrities. What Americans Really Think About Billionaires During The Pandemic is the Data for Progress poll Teddy mentions. We are self-funded. So. If you’d like to inspire this beautiful series through your financial contribution - we’ll take it on Patreon! Subscribe to this podcast to get the best of what we have to offer. I promise there are more incredible episodes on their way - every other Wednesday. The Ethical Rainmaker is produced in Seattle, Washington by Kasmira Hall and Isaac Kaplan-Woolner, with socials by Rachelle Pierce.
Michelle Shireen Muri is the executive producer and this pod is sponsored by Freedom Conspiracy.
Hello, today we talk about why we quit our jobs. An episode we will definitely listen to again ourselves. All of this is our own experience and opinion :) ✌️ One correction: instead of 20 hrs/month, Ron meant 20 hrs/week at the freelancing job. Links: - Die 4-Stunden-Woche: Mehr Zeit, mehr Geld, mehr Leben https://www.amazon.de/Die-4-Stunden-Woche-Mehr-Zeit-Leben/dp/3548375960 - Fear-Setting: The Most Valuable Exercise I Do Every Month https://tim.blog/2017/05/15/fear-setting/ - Stundensatz selbstständiger Ingenieure und IT-Freelancer https://www.ingenieur.de/karriere/gehalt/stundensatz-selbststaendiger-ingenieure-und-it-freelancer/ - Lecture 1 - How to Start a Startup (Sam Altman, Dustin Moskovitz) https://www.youtube.com/watch?v=CBYhVcO4WgI&list=PL5q_lef6zVkaTY_cT1k7qFNF2TidHCe-1 - Y Combinator https://www.ycombinator.com/ - What is a Flow State? https://www.flowresearchcollective.com/blog/what-is-flow-state - Radikal Ehrlich: Verwandle Dein Leben - Sag die Wahrheit https://www.amazon.de/Radikal-Ehrlich-Verwandle-Leben-Wahrheit/dp/3945719003 - Kreuzigung | Philippinen https://de.wikipedia.org/wiki/Kreuzigung#Philippinen - Why is the deathbed perspective considered so valuable? https://aeon.co/essays/why-is-the-deathbed-perspective-considered-so-valuable - 5 Dinge, die Sterbende am meisten bereuen: Einsichten, die Ihr Leben verändern werden https://www.amazon.de/Dinge-die-Sterbende-meisten-bereuen/dp/3442157528 Enjoy listening!
Invest Like the Best Podcast Notes Key Takeaways: Give people clarity about what’s most important, the strategy, and the goals they’re working towards. Most of “work about work” is really exchanging status information and getting on the same page with your team. The pyramid of clarity, from top to bottom: the mission; strategy; company-wide objectives; business, product, and internal objectives; key results; projects. You get diminishing returns as you go beyond 50 or 60 hours per week – your hours get less productive. The goal is not to maximize your hours but to maximize your output. Meetings are not evil – but they chop up your calendar, and can interrupt focus time. Radical inclusiveness is where whoever shows up is totally welcome and embraced and included and encouraged to participate. Read the full notes @ podcastnotes.org. My guest today is Dustin Moskovitz, co-founder and CEO of Asana, a team-centric product management tool used by over 1.3 million users around the world. Dustin started Asana in 2008, 4 years after co-founding Facebook. In this conversation, we dive into Dustin's belief about the diminishing returns of hard work, the shocking amount of productivity lost in doing "work about work", and Dustin's philanthropic investment strategy around leverage and maximizing ROI. I hope you enjoy my wide-ranging conversation with Dustin Moskovitz. For the full show notes, transcript, and links to mentioned content, check out https://www.joincolossus.com/episodes/88012555/moskovitz-eliminating-work-about-work ----- This episode is brought to you by Tegus. Tegus has built the most extensive primary information platform available for investors. With Tegus, you can learn everything you’d want to know about a company in an on-demand digital platform. Investors share their expert calls, allowing others to instantly access more than 10,000 calls on Affirm, Teladoc, Roblox, or almost any company of interest. Visit https://www.tegus.co/patrick to learn more. ----- This episode is brought to you by Vanta. Vanta has built software that makes it easier to both get and maintain your SOC 2 report at a fraction of the normal cost. Founders Field Guide listeners can redeem a $1k off coupon at vanta.com/patrick. ----- Founder's Field Guide is a property of Colossus Inc. For more episodes of Founder's Field Guide, go to https://www.joincolossus.com/episodes. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up at https://www.joincolossus.com/newsletter.
Follow Patrick on Twitter at @patrick_oshag Follow Colossus on Twitter at @JoinColossus Show Notes [00:03:19] – [First question] – Balancing hard purposeful work and too much work that leads to burn out [00:05:41] – What led to this way of thinking [00:06:54] – Regulating hard work through culture [00:08:25] – False tradeoffs and how Asana represents this [00:09:43] – Origins of Asana [00:13:22] – Organizing the chaos of a project [00:18:09] – Change vs discipline of the mission [00:19:55] – Transferring good ideas from one company to another [00:23:19] – Instilling leverage as a concept in an early company [00:25:21] – New learning curves in building Asana [00:26:52] – Hardest boss battle during his time at Asana [00:28:43] – The role of the work graph [00:31:46] – The proliferation of the work management space and the overall landscape [00:32:56] – The idea of radical inclusiveness [00:36:31] – Best reasons to start a new company [00:37:47] – What will lead to Asana’s continued success [00:38:59] – Lessons building the product [00:41:13] – Work with the Open Philanthropy Project [00:43:44] – Work on pandemics and biosecurity [00:46:11] – Where he sees the future of artificial intelligence [00:50:47] – Kindest thing anyone has done for him
Alex Konrad is a Senior Editor at Forbes covering venture capital, cloud and enterprise software out of New York. He also edits the Midas List, Midas List Europe, Cloud 100 list and 30 Under 30 for VC. In this conversation, we discuss the various stories Alex has written about Dustin Moskovitz, Lee Fixel, Masayoshi Son, Chris Sacca, Canva, Clubhouse, Zoom, and Snowflake. =============================== ExpressVPN lets you access the internet as if you’re from a different country. There are hundreds of VPNs out there, but ExpressVPN is ridiculously fast. You can stream everything in HD quality with zero buffering! If you use my link right now at EXPRESSVPN dot com slash pomp, you can get an extra three months of ExpressVPN for free! That’s https://www.expressvpn.com/pomp =============================== Bybit currently has over 300,000 users, with the number growing in double-digit percentages monthly. The exchange has no overloads during volatility and low latency trading plus 24/7 customer live support. At Bybit, we listen, care, and improve to provide the best possible trading experience and create a faster, fairer, and more human trading environment. Visit Now! =============================== Pomp writes a daily letter to over 50,000 investors about business, technology, and finance. He breaks down complex topics into easy to understand language, while sharing opinions on various aspects of each industry. You can subscribe at https://www.pompletter.com
Asana was founded in 2008 by Facebook co-founder Dustin Moskovitz and ex-Google-and-Facebook engineer Justin Rosenstein, both of whom worked on improving the productivity of employees at Facebook. They took their internal Facebook tool and commercialized it, launching publicly in 2011. Since then, they've been trying to simplify the "work about work," in Moskovitz's own words. Asana’s mission is to help humanity thrive by enabling the world’s teams to work together effortlessly. Follow along as Anna Marie Clifton takes us through how Asana built their new Automation product, in an attempt to "Automate away the work about work". Want an ad-free experience? Subscribe today at glow.fm/rocketship and get a private feed you can listen to wherever you listen to podcasts. This episode is brought to you by: Product Institute is an online course for new and tenured product managers. Head to productinstitute.com and enter the code ROCKET at checkout, and you'll receive $200 off your subscription. Participate builds and hosts online learning communities that inspire learning, connection and growth. Head to participate.com/rocketship for a free virtual learning workshop, valued at $1,000. Digital Ocean is a cloud provider that makes it easy for entrepreneurs and startups to deploy and scale web applications with no issues or unplanned costs. Get started for free at do.co/rocketship. Rocketship is brought to you by The Podglomerate. Learn more about your ad choices. Visit megaphone.fm/adchoices
Mark Elliot Zuckerberg, better known as Mark Zuckerberg, is one of the most important and influential men in the world. Mark developed the popular social network Facebook together with his classmates at Harvard University: Eduardo Saverin, Andrew McCollum, Dustin Moskovitz and Chris Hughes. According to the Forbes list, Zuckerberg is among the 10 richest people in the world.
Diva Tech Talk interviewed Sonja Gittens-Ottley, Head of Diversity and Inclusion at Asana, a leading work management platform that helps teams organize, track, and manage work. (Dustin Moskovitz, a co-founder of Facebook, is also co-founder of Asana). Sonja’s mission is setting standards to drive inclusivity and equity in the workplace. As a first generation transplant to the United States, she immigrated from The West Indies. “I am the mother of a 4-year old boy,” Sonja said. “I am bringing up a child in this society. How can what I do, today, impact his life, and shape his opportunities for the future?” Growing up in the Republic of Trinidad and Tobago, Sonja did not have aspirational limits placed on her. Growing up “we had really structured expectations of what were ‘cool’ jobs.” Sonja became an attorney, with a bachelor of laws degree from The University of the West Indies in Barbados, and a graduate degree from The Hugh Wooding Law School. She worked at both the Ministry of Legal Affairs/Office of the Attorney General and the Central Bank of Trinidad and Tobago. Now she is adamant that “Inclusion includes thinking about all the opportunities; ensuring that everyone has access; not being confined to what society says you should be doing.” Sonja’s transition from law to tech was prompted by her move to the U.S., which was originally planned as a two-year stint. “But I got the option to work at a company called Yahoo.” There she implemented project management and legal internal consulting. When Yahoo established a human rights program, Sonja played a significant role. That led to working with Yahoo’s corporate policymaking for diversity and inclusion. From Yahoo, Sonja moved to Facebook as the company’s Global Diversity Program Manager and then to Asana as Head of Diversity and Inclusion. To empower Asana’s diversity, she focused on two strategic pillars: recruiting and employee evaluation and growth. She stressed that “the culture is really supportive” and that neither pillar can exist without the other; diverse recruitment and nurturing culture must work in tandem. She works closely with the company’s University Recruiting team, and targets events that attract diverse attendees. To enhance existing culture, she is working on a variety of supportive initiatives like ERGs (Employee Resource Groups) for internal communities “making space for the community, and space for allies to learn more…”. There are three: Asana Women, Asana Gradient (for people of color), and Asana Team Rainbow (for LGBTQ employees). Each group autonomously sets its objectives, but all three are aligned, overall, to the greater Asana mission. One practical approach that Asana initiated to support inclusion is the Asana Real Talk series where people engage in honest, authentic discussions about overcoming challenges, communicating purpose and driving change, individually and in the greater world/workplace. Sonja also does an onboarding session with all Asana team members emphasizing how vital inclusion is to the company. Asana’s liberal Family Leave policy is an example of progress. Sonja proudly exclaimed: “The beauty of Asana is that it is really transparent. People are not shy to ask questions.” Sonja’s leadership is enabled through wielding influence. “Be clear about what you are trying to achieve. Be honest. People want clarity on an objective --- possible issues, risks involved, and probable results.” For candidates, she advised “You have power.
It can be as easy as asking: ‘You say you do diversity and inclusion; what are the actions you have taken?’ ” For companies initially adopting diversity and inclusion programs, Sonja recommended a company-wide engagement survey with questions about “belonging” to gauge employees’ perspectives. “Think of it as an audit to see where you are.” She also pointed to mundane but vital questions a company can ask: “What is our restroom situation? Should we have ‘all gender’ restrooms? Are we thinking about ‘mother’s rooms’?” For recruiting, in companies without a dedicated diversity expert, she suggested: “You should be thinking about interview skills and training.” To measure success, Sonja said: “At Asana, we look at it as we would look at any other objective, in terms of both qualitative and quantitative data. What’s our new hire rate? How is it mapping to goals? Through surveys, tracking employee engagement and sense of belonging in terms of the overall company, and in terms of how specific groups are doing, and the intersectionality of groups.” The intersectionality data can offer “very different pictures.” To keep momentum, Sonja and Asana do numerous things, including monthly All Hands meetings, using a company-wide Slack and Asana to consistently share diversity data, and holding “Office Hours” and Ask Me Anything sessions dedicated to inclusion and culture. In addition to the Asana Real Talk series, Sonja is proud of the apprenticeship program the company recently launched, AsanaUp. “We were really thoughtful and intentional about widening that funnel of great candidates coming from non-traditional backgrounds.” The AsanaUp apprenticeship welcomes those without university computer science degrees (people with other degrees, graduates of coding schools, or parents returning to the workforce) to join the company for 6-9 months to work alongside software engineers. Sonja characterized herself as “an eternal optimist.” In her view, “everyone can make a difference. Children are the future, and they have no limits.” “There are people out there who don’t have access to the opportunities,” she said. “I plan to be working on lengthening that pipeline. This has to be done with really great partners like the Grace Hopper Celebration conference.” “We forget that this is new and uncomfortable for a lot of people: to talk about race or gender or any of the other identities that people possess. Getting people to a place of comfort is how you change things!” Everyone must develop “a real sense of empathy; people might look different than you, might sound different, but we are all trying to do the same thing.” And the most important thing she would like to do is “remind people of their own power and their own worth. It makes a difference in what you can achieve!” Make sure to check us out online at www.divatechtalk.com, on Twitter @divatechtalks, and on Facebook at https://www.facebook.com/divatechtalk. And please listen to us on iTunes, SoundCloud, and Stitcher and provide an online review.
Here we are once again with the fabulous Nerds for the weekly episode of hijinks and merriment. This week we look at topics that will hopefully entertain you, perhaps educate you, perchance even make you laugh. As usual we have our three Nerds, idiots, nutjobs, wackjobs, funny farm contenders, or as we like to say, your hosts: Bucky, Professor and the DJ. Bucky is our slightly older, kind of grumpy Nerd, who dislikes mumble rappers, reality TV and general stupidity. Professor is our younger Nerd, who likes gaming, long walks to the camp fire, and his Switch when on the bus. Last but not least, we have the DJ, the resident Droid that no one is looking for, who likes anime, games and laughing. First topic up this week is about some new illustrated novels, or comics, from the Firefly franchise. The DJ is challenged to finally watch the series to help him discover his inner Browncoat. Will he be brave enough to walk down the street in a hat like that and show he ain't afraid of nothing? We will find out, but by my pretty blue bonnet, if he doesn't we will aim to misbehave and cause mischief. Next up we look at the stress and traumatic conditions developers are suffering through to bring us new games, with reports of people developing PTSD and hiding the fact so they can get jobs. This is seriously messed up; what these people are going through is downright wrong and needs to be looked at. Buck also has a rant about the need to look after each other, because he is sick and tired of morons putting profit before people. Last up, Buck brings us an article about rainbows. No, he hasn't become a hippie or something drastic. He just felt we needed to take a moment, look around us and admire the simple things, you know, kind of like smelling the roses and noticing the politicians as people (we think they are, but don't hold us to that – Ed.). So we have 20 facts about rainbows, one of which is that the Greeks thought there were only three colours in the rainbow. We follow this with the usual look at the games we have been playing this week and give you a rundown on them. We conclude the episode with the regular Shout outs, remembrances, birthdays and events for the week that we all love.
As always, take care of each other and stay hydrated.

EPISODE NOTES:
Firefly comics - https://comicbook.com/comics/2019/05/13/firefly-the-sting-joss-whedon-boom-studios/
MK 11 & PTSD - https://www.kotaku.com.au/2019/05/id-have-these-extremely-graphic-dreams-what-its-like-to-work-on-ultra-violent-games-like-mortal-kombat-11/
Rainbows - http://discovermagazine.com/2019/may/20-things-you-didnt-know-about--rainbows

Games currently playing
Professor – Cataclysm: Dark Days Ahead - https://cataclysmdda.org/
Buck – Monster Truck Drive - https://store.steampowered.com/app/847870/Monster_Truck_Drive/
DJ – Dota 2 - https://store.steampowered.com/app/570/Dota_2/

Other topics discussed
Changes to Santa Clarita Diet - https://www.hollywoodreporter.com/live-feed/santa-clarita-diet-creator-explains-season-3-talks-season-4-1198429
Ed Boon's take on fatalities - https://www.businessinsider.com.au/mortal-kombat-creator-ed-boon-explains-how-new-fatalities-are-made-2019-3?r=US&IR=T
Facebook content moderators having PTSD - https://futurism.com/the-byte/facebook-content-moderators-lawsuit-ptsd
Grumpy Cat (internet personality) - https://en.wikipedia.org/wiki/Grumpy_Cat
All Dogs Go to Heaven (1989 film) - https://en.wikipedia.org/wiki/All_Dogs_Go_to_Heaven
Linguistic relativity and the colour naming debate - https://en.wikipedia.org/wiki/Linguistic_relativity_and_the_color_naming_debate
Chromatic aberration - https://en.wikipedia.org/wiki/Chromatic_aberration
Pot of gold at the end of the rainbow - http://luckyireland.com/the-origin-of-a-pot-of-gold-at-the-end-of-the-rainbow/
Minecraft Earth (mobile game) - https://www.minecraft.net/en-us/earth
Dota 2 new character: Mars
- Character bio - https://dota2.gamepedia.com/Mars
- Mars' character design - https://steamcdn-a.akamaihd.net/apps/dota2/images/mars/hero_mars93fd33s5.jpg
Shadow of the Colossus (2006 game) - https://en.wikipedia.org/wiki/Shadow_of_the_Colossus
Trials Fusion (2014 game) - https://en.wikipedia.org/wiki/Trials_Fusion
Stunt Car Arena (arcade game) - http://www.arcadespot.com/game/stunt-car-arena/
Millionaire's advice to young people – stop spending on smashed avocados - https://www.theguardian.com/lifeandstyle/2017/may/15/australian-millionaire-millennials-avocado-toast-house
Colorectal cancer, also known as colon cancer - https://en.wikipedia.org/wiki/Colorectal_cancer
Diamond Jubilee of Elizabeth II - https://en.wikipedia.org/wiki/Diamond_Jubilee_of_Elizabeth_II
Queen Victoria
- Bio - https://en.wikipedia.org/wiki/Queen_Victoria
- Queen Victoria with her grandchildren and other guests - https://images.immediate.co.uk/volatile/sites/7/2018/01/Queen_victoria_family-fd7d69f.jpg?quality=90&resize=768,574
Stevie Wonder catches microphone stand - https://www.youtube.com/watch?v=HUgngvsWLlE
Carrie Fisher roasts George Lucas - https://www.youtube.com/watch?v=lZ97s396kb0
Mark Zuckerberg will only eat meat he kills - https://www.huffingtonpost.com.au/2017/07/13/mark-zuckerberg-will-only-eat-meat-he-kills-himself_a_23027199/
Apple loses more money than the value of Facebook - https://www.businessinsider.com.au/apples-market-cap-falls-by-450-billion-more-than-the-value-of-facebook-2019-1?r=US&IR=T
Walt Disney
- Bio and urban myth that Walt's body is frozen - https://simple.wikipedia.org/wiki/Walt_Disney
- Human bones in Disneyland - https://collinsrace1.wordpress.com/2018/10/29/are-there-human-bones-at-disney-parks/
Elvis Lives (That's Not Canon Podcast) - https://thatsnotcanon.com/elvislivespodcast
Captain Jack Sparrow (Pirates of the Caribbean character) - https://pirates.fandom.com/wiki/Jack_Sparrow
Henry Sutton (Australian inventor) - https://en.wikipedia.org/wiki/Henry_Sutton_(inventor)

Shoutouts
7 May 1999 - The Mummy opened and grossed $43 million in 3,210 theatres in the United States on its opening weekend. - https://en.wikipedia.org/wiki/The_Mummy_(1999_film)
14 May 1796 - English country doctor Edward Jenner administers the first inoculation against smallpox, using cowpox pus, in Berkeley, Gloucestershire. - https://en.wikipedia.org/wiki/Edward_Jenner

Remembrances
11 May 2019 – Peggy Lipton, American actress, model, and singer. She was well known for her role as flower child Julie Barnes in the counterculture television series The Mod Squad (1968–1973), for which she won the Golden Globe Award for Best Actress – Television Series Drama in 1970. Her fifty-year career in television, film, and stage included many roles, including Norma Jennings in David Lynch's Twin Peaks. Lipton was formerly married to the musician and producer Quincy Jones and was the mother of their two daughters, Rashida Jones and Kidada Jones. She died of colon cancer at 72 in Los Angeles, California. - https://en.wikipedia.org/wiki/Peggy_Lipton
13 May 2019 – Doris Day, American actress, singer, and animal welfare activist. She began her career as a big band singer in 1939, achieving commercial success in 1945 with two No. 1 recordings, "Sentimental Journey" and "My Dreams Are Getting Better All the Time" with Les Brown & His Band of Renown. She left Brown to embark on a solo career and recorded more than 650 songs from 1947 to 1967. Day's film career began during the latter part of the classical Hollywood era with the film Romance on the High Seas, leading to a 20-year career as a motion picture actress. She starred in films of many genres, including musicals, comedies, and dramas. She played the title role in Calamity Jane and starred in Alfred Hitchcock's The Man Who Knew Too Much with James Stewart. Her best-known films are those in which she co-starred with Rock Hudson, chief among them 1959's Pillow Talk, for which she was nominated for the Academy Award for Best Actress. She also worked with James Garner on both Move Over, Darling (1963) and The Thrill of It All, and starred with Clark Gable, Cary Grant, James Cagney, David Niven, Jack Lemmon, Frank Sinatra, Richard Widmark, Kirk Douglas, Lauren Bacall and Rod Taylor in various movies. After ending her film career in 1968, only briefly removed from the height of her popularity, she starred in the sitcom The Doris Day Show. Day became one of the biggest film stars in the early 1960s, and as of 2012 was one of eight performers to have been the top box-office earner in the United States four times. In 2011, she released her 29th studio album, My Heart, which contained new material and became a UK Top 10 album. She received the Grammy Lifetime Achievement Award and a Legend Award from the Society of Singers. In 1960, she was nominated for the Academy Award for Best Actress, and was given the Cecil B. DeMille Award for lifetime achievement in motion pictures in 1989. In 2004, she was awarded the Presidential Medal of Freedom; this was followed in 2011 by the Los Angeles Film Critics Association's Career Achievement Award. She died of pneumonia at 97 in Carmel Valley Village, California. - https://en.wikipedia.org/wiki/Doris_Day
14 May 1919 - Henry John Heinz, German-American entrepreneur who founded the H. J. Heinz Company, based in Pittsburgh, Pennsylvania.
He was born in that city, the son of German immigrants from the Palatinate who came independently to the United States in the early 1840s. Heinz developed his business into a national company which made more than 60 food products; one of its first was tomato ketchup. He was influential for introducing high sanitary standards for food manufacturing. He also exercised a paternal relationship with his workers, providing health benefits, recreation facilities, and cultural amenities. His descendants carried on the business until fairly recently, selling their remaining holdings to the predecessor company of what is now Kraft Heinz. Heinz was the great-grandfather of former U.S. Senator H. John Heinz III of Pennsylvania. He died of pneumonia at 75 in Pittsburgh, Pennsylvania. - https://en.wikipedia.org/wiki/Henry_J._Heinz
14 May 1998 - Frank Sinatra, American singer, actor and producer who was one of the most popular and influential musical artists of the 20th century. He is one of the best-selling music artists of all time, having sold more than 150 million records worldwide. Born to Italian immigrants in Hoboken, New Jersey, Sinatra began his musical career in the swing era with bandleaders Harry James and Tommy Dorsey. Sinatra found success as a solo artist after he signed with Columbia Records in 1943, becoming the idol of the "bobby soxers". He released his debut album, The Voice of Frank Sinatra, in 1946. Sinatra's professional career had stalled by the early 1950s, and he turned to Las Vegas, where he became one of its best-known residency performers as part of the Rat Pack. His career was reborn in 1953 with the success of From Here to Eternity, with his performance subsequently winning an Academy Award and Golden Globe Award for Best Supporting Actor. Sinatra released several critically lauded albums, including In the Wee Small Hours, Songs for Swingin' Lovers!, Come Fly with Me, Only the Lonely and Nice 'n' Easy. Sinatra left Capitol in 1960 to start his own record label, Reprise Records, and released a string of successful albums. In 1965, he recorded the retrospective September of My Years and starred in the Emmy-winning television special Frank Sinatra: A Man and His Music. After releasing Sinatra at the Sands, recorded at the Sands Hotel and Casino in Vegas with frequent collaborator Count Basie in early 1966, the following year he recorded one of his most famous collaborations with Tom Jobim, the album Francis Albert Sinatra & Antonio Carlos Jobim. It was followed by 1968's Francis A. & Edward K. with Duke Ellington. Sinatra retired for the first time in 1971, but came out of retirement two years later; he recorded several albums, resumed performing at Caesars Palace, and reached success in 1980 with "New York, New York". Using his Las Vegas shows as a home base, he toured both within the United States and internationally until shortly before his death in 1998. Sinatra forged a highly successful career as a film actor. After winning an Academy Award for From Here to Eternity, he starred in The Man with the Golden Arm, and received critical acclaim for his performance in The Manchurian Candidate. He appeared in various musicals such as On the Town, Guys and Dolls, High Society, and Pal Joey, winning another Golden Globe for the latter. Toward the end of his career, he became associated with playing detectives, including the title character in Tony Rome. Sinatra would later receive the Golden Globe Cecil B. DeMille Award in 1971.
On television, The Frank Sinatra Show began on ABC in 1950, and he continued to make appearances on television throughout the 1950s and 1960s. Sinatra was also heavily involved with politics from the mid-1940s, and actively campaigned for presidents such as Harry S. Truman, John F. Kennedy and Ronald Reagan. The FBI investigated Sinatra and his alleged relationship with the Mafia. He was honored at the Kennedy Center Honors in 1983, was awarded the Presidential Medal of Freedom by Ronald Reagan in 1985, and received the Congressional Gold Medal in 1997. Sinatra was also the recipient of eleven Grammy Awards, including the Grammy Trustees Award, Grammy Legend Award and the Grammy Lifetime Achievement Award. He was included in Time magazine's compilation of the 20th century's 100 most influential people. After Sinatra's death, American music critic Robert Christgau called him "the greatest singer of the 20th century", and he continues to be seen as an iconic figure. He died of a heart attack at 82 in Los Angeles, California. - https://en.wikipedia.org/wiki/Frank_Sinatra
14 May 2019 – Tim Conway, American comedic actor, writer, and director. He portrayed the inept Ensign Parker in the 1960s World War II situation comedy McHale's Navy, was a regular cast member on the 1970s variety and sketch comedy program The Carol Burnett Show, co-starred with Don Knotts in several films in the late 1970s and early 1980s, starred as the title character in the Dorf series of sports comedy films, and provided the voice of Barnacle Boy in the animated series SpongeBob SquarePants. He was particularly admired for his ability to depart from scripts with spontaneously improvised character details and dialogue, and he won six Primetime Emmy Awards during his career, four of which were awarded for The Carol Burnett Show, including one for writing. He died of normal pressure hydrocephalus at 85 in Los Angeles, California. - https://en.wikipedia.org/wiki/Tim_Conway
15 May 2019 - Rick Bennett, voice actor, known for X-Men: The Animated Series (1992), Balance of Power (1996) and X-Men vs. Street Fighter (1996), mainly as Cain Marko, also known as The Juggernaut. He passed away in Toronto. - https://comicbook.com/tv-shows/2019/05/15/x-men-the-animated-series-juggernaut-voice-actor-passes-away/
Bio - https://www.imdb.com/name/nm0072001/
16 May 2019 – The Honourable Bob Hawke, Australian politician who served as the 23rd prime minister of Australia and Leader of the Labor Party from 1983 to 1991. Hawke served as Member of Parliament (MP) for Wills from 1980 to 1992 and was Labor's longest-serving prime minister. Bob Hawke was born in Bordertown, South Australia. The Hawke family then moved to Western Australia. He attended the University of Western Australia and then went on to Oxford University as a Rhodes Scholar. In 1956, Hawke joined the Australian Council of Trade Unions (ACTU) as a research officer. Having risen to become responsible for wage arbitration, he was elected ACTU President in 1969, where he achieved a high public profile. After a decade serving in that role, Hawke announced his intention to enter politics, and was subsequently elected to the House of Representatives as the Labor MP for Wills. Three years later, he led Labor to a landslide victory at the 1983 election and was sworn in as prime minister. He led Labor to victory three more times, in 1984, 1987 and 1990, making him the most electorally successful Labor leader.
The Hawke Government created Medicare and Landcare, brokered the Prices and Incomes Accord, established APEC, floated the Australian dollar, deregulated the financial sector, introduced the Family Assistance Scheme, announced "Advance Australia Fair" as the official national anthem, initiated superannuation pension schemes for all workers and oversaw passage of the Australia Act that removed all remaining jurisdiction of the United Kingdom over Australia. Hawke remains Labor's longest-serving prime minister, Australia's third-longest-serving prime minister and, until his death at the age of 89, was the oldest living former Australian prime minister. Hawke is the only Australian prime minister to have been born in South Australia, and the only one raised and educated in Western Australia. He also held a world record for beer drinking; he downed 2 1⁄2 imperial pints (1.4 l), equivalent to a yard of ale, from a sconce pot in 11 seconds as part of a college penalty. He died at 89 in Northbridge, New South Wales. - https://en.wikipedia.org/wiki/Bob_Hawke

Famous Birthdays
13 May 1950 - Stevie Wonder, American singer, songwriter, musician, record producer, and multi-instrumentalist. A child prodigy, Wonder is considered to be one of the most critically and commercially successful musical performers of the late 20th century. He signed with Motown's Tamla label at the age of 11 and continued performing and recording for Motown into the 2010s. He has been blind since shortly after his birth. Among Wonder's works are singles such as "Signed, Sealed, Delivered I'm Yours", "Superstition", "Sir Duke", "You Are the Sunshine of My Life", and "I Just Called to Say I Love You"; and albums such as Talking Book (1972), Innervisions (1973), and Songs in the Key of Life (1976). He has recorded more than 30 U.S. top-ten hits and received 25 Grammy Awards, making him one of the most-awarded male solo artists, and has sold more than 100 million records worldwide, making him one of the top 60 best-selling music artists. Wonder is also noted for his work as an activist for political causes, including his 1980 campaign to make Martin Luther King Jr.'s birthday a holiday in the United States. In 2009, Wonder was named a United Nations Messenger of Peace. In 2013, Billboard magazine released a list of the Billboard Hot 100 All-Time Top Artists to celebrate the US singles chart's 55th anniversary, with Wonder at number six. He was born in Saginaw, Michigan. - https://en.wikipedia.org/wiki/Stevie_Wonder
14 May 1944 – George Lucas, American filmmaker and entrepreneur. Lucas is known for creating the Star Wars and Indiana Jones franchises and founding Lucasfilm, LucasArts and Industrial Light & Magic. He was the chairman and CEO of Lucasfilm before selling it to The Walt Disney Company in 2012. After graduating from the University of Southern California in 1967, Lucas co-founded American Zoetrope with filmmaker Francis Ford Coppola. Lucas wrote and directed THX 1138, based on his earlier student short Electronic Labyrinth: THX 1138 4EB, which was a critical success but a financial failure. His next work as a writer-director was the film American Graffiti, inspired by his youth in early 1960s Modesto, California, and produced through the newly founded Lucasfilm. The film was critically and commercially successful, and received five Academy Award nominations including Best Picture.
Lucas' next film, the epic space opera Star Wars, had a troubled production but was a surprise hit, becoming the highest-grossing film at the time, winning six Academy Awards and sparking a cultural phenomenon. Lucas produced and co-wrote the sequels The Empire Strikes Back and Return of the Jedi. With director Steven Spielberg, he created the Indiana Jones films Raiders of the Lost Ark, Temple of Doom, and The Last Crusade. He also produced and wrote a variety of films through Lucasfilm in the 1980s and 1990s, and during this same period Lucas' LucasArts developed high-impact video games, including Maniac Mansion, The Secret of Monkey Island and Grim Fandango, alongside many video games based on the Star Wars universe. In 1997, Lucas re-released the Star Wars trilogy as part of a Special Edition featuring several alterations; home media versions with further changes were released in 2004 and 2011. He returned to directing with the Star Wars prequel trilogy, comprising The Phantom Menace, Attack of the Clones, and Revenge of the Sith. He later served as executive producer for the war film Red Tails and wrote the CGI film Strange Magic. Lucas is one of the American film industry's most financially successful filmmakers and has been nominated for four Academy Awards. His films are among the 100 highest-grossing movies at the North American box office, adjusted for ticket-price inflation. Lucas is considered a significant figure in the New Hollywood era. He was born in Modesto, California. - https://en.wikipedia.org/wiki/George_Lucas
14 May 1969 - Cate Blanchett, Australian actress and theatre director. She has received many accolades, including two Academy Awards, three Golden Globe Awards, and three BAFTA Awards. Time named her one of the 100 most influential people in the world in 2007, and in 2018, she was ranked among the highest-paid actresses in the world. After graduating from the National Institute of Dramatic Art, Blanchett began her acting career on the Australian stage, taking on roles in Electra in 1992 and Hamlet in 1994. She came to international attention for portraying Elizabeth I of England in the drama film Elizabeth, for which she won the BAFTA Award for Best Actress and earned her first nomination for the Academy Award for Best Actress. Her portrayal of Katharine Hepburn in the biographical drama The Aviator earned her the Academy Award for Best Supporting Actress, and she won Best Actress for playing a neurotic divorcée in the black comedy-drama Blue Jasmine. Her other Oscar-nominated roles were in the dramas Notes on a Scandal, Elizabeth: The Golden Age, I'm Not There, and Carol. Blanchett's most commercially successful films include The Talented Mr. Ripley, Peter Jackson's The Lord of the Rings trilogy and The Hobbit trilogy, Babel, The Curious Case of Benjamin Button, Cinderella, Thor: Ragnarok, and Ocean's 8. From 2008 to 2013, Blanchett and her husband Andrew Upton served as the artistic directors of the Sydney Theatre Company. Some of her stage roles during this period were in revivals of A Streetcar Named Desire, Uncle Vanya, and The Maids. She made her Broadway debut in 2017 with The Present, for which she received a Tony Award nomination. Blanchett has been awarded the Centenary Medal by the Australian government, which made her a Companion of the Order of Australia in 2017. She was appointed Chevalier of the Order of Arts and Letters by the French government in 2012.
She has been presented with a Doctor of Letters from the University of New South Wales, University of Sydney, and Macquarie University. In 2015, she was honoured by the Museum of Modern Art and received the British Film Institute Fellowship. She was born in Ivanhoe, Victoria. - https://en.wikipedia.org/wiki/Cate_Blanchett
14 May 1984 – Mark Zuckerberg, American technology entrepreneur and philanthropist. He is known for co-founding and leading Facebook as its chairman and chief executive officer. Zuckerberg attended Harvard University, where he launched Facebook from his dormitory room on February 4, 2004, with college roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes. Originally launched to select college campuses, the site expanded rapidly and eventually beyond colleges, reaching one billion users by 2012. Zuckerberg took the company public in May 2012 with majority shares. His net worth is estimated to be $55.0 billion as of November 30, 2018, declining over the last year with Facebook stock. In 2007, at age 23, he became the world's youngest self-made billionaire. As of 2018, he is the only person under 50 on the Forbes ten richest people list, and the only one under 40 on the Top 20 Billionaires list. Since 2010, Time magazine has named Zuckerberg among the 100 wealthiest and most influential people in the world as a part of its Person of the Year award. In December 2016, Zuckerberg was ranked 10th on Forbes list of The World's Most Powerful People. He was born in White Plains, New York. - https://en.wikipedia.org/wiki/Mark_Zuckerberg

Events of Interest
14 May 1986 - Netherlands Institute for War Documentation publishes Anne Frank's complete diary. - https://www.onthisday.com/people/anne-frank
15 May 1928 – Walt Disney character Mickey Mouse premieres in his first cartoon, "Plane Crazy". It was made as a silent film and given a test screening to a theater audience but failed to pick up a distributor. - https://en.wikipedia.org/wiki/Plane_Crazy
15 May 2010 – Jessica Watson becomes the youngest person to sail, non-stop and unassisted, around the world solo. Watson headed north-east, crossing the equator in the Pacific Ocean before crossing the Atlantic and Indian Oceans. - https://en.wikipedia.org/wiki/Jessica_Watson
16 May 1888 – Nikola Tesla delivers a lecture describing the equipment which will allow efficient generation and use of alternating currents to transmit electric power over long distances. His lecture caught the attention of George Westinghouse, the inventor who had launched the first AC power system near Boston and was Edison's major competitor in the "Battle of the Currents."
- https://teslaresearch.jimdo.com/lectures-of-nikola-tesla/a-new-system-of-alternate-current-motors-and-transformers-1888/
- https://www.history.com/topics/inventions/nikola-tesla

Intro
Artist – Goblins from Mars
Song Title – Super Mario - Overworld Theme (GFM Trap Remix)
Song Link - https://www.youtube.com/watch?v=-GNMe6kF0j0&index=4&list=PLHmTsVREU3Ar1AJWkimkl6Pux3R5PB-QJ

Follow us on Facebook - https://www.facebook.com/NerdsAmalgamated/
Email - Nerds.Amalgamated@gmail.com
Twitter - https://twitter.com/NAmalgamated
Spotify - https://open.spotify.com/show/6Nux69rftdBeeEXwD8GXrS
iTunes - https://itunes.apple.com/au/podcast/top-shelf-nerds/id1347661094
RSS - http://www.thatsnotcanonproductions.com/topshelfnerdspodcast?format=rss
Nicolas Genest, joined by Gerry, talks about a web and mobile application created by Dustin Moskovitz, who took part in the creation of Facebook before designing Asana, an app built to let teams work together without email.
How do you cultivate a healthy, open culture that empowers employees to thrive? In today's episode, we're talking with Diana Chapman, the co-founder of The Conscious Leadership Group, about how to create a more authentic, people-focused culture at work. Diana and her team have helped dozens of organizations, including eBay, Asana, Whole Foods and more, to increase employee engagement and performance by eliminating drama, building trust, and cultivating a culture where authenticity, vulnerability, and transparency can take root. And they're getting results. In fact, Dustin Moskovitz and Justin Rosenstein, the co-founders of Asana, actually give every single one of their employees the chance to go through CLG's leadership training, and credit that with helping them to more effectively achieve their company goals.

In this episode, you'll learn:
How Diana went from teaching scrapbooking classes in Michigan to teaching Silicon Valley giants the importance of conscious leadership
Why authentic conversations and mindfulness matter, not just in personal relationships, but also within organizations
The need to value emotional intelligence and 'body' intelligence as highly as cognitive intelligence
How to cut through drama and see the underlying facts
How to change your approach to a situation by changing the way that you frame it
How to create a roadmap to get yourself and your team out of negative habits

Diana is an incredibly open and inspiring person and was more than willing to share the exact exercises she guides companies through to help their employees thrive.

Topics Discussed in This Episode:
[00:01:10] How Diana moved from teaching scrapbooking to her current career
[00:02:36] The class on conscious relationships that Diana took from Gay and Katie Hendricks
[00:03:07] What it was about the classes at the Hendricks Institute that made Diana want to focus on conscious leadership
[00:06:03] The definition of conscious leadership
[00:07:30] The power of being in the present
[00:09:09] What being present looks like
[00:11:29] The three types of intelligence
[00:13:50] How you can cultivate emotional intelligence and body intelligence
[00:17:09] What happened when a presentation did not go the way that Diana expected
[00:20:12] What Diana changed about her presentation
[00:22:02] How the experience of the group Diana was working with changed once she changed her approach
[00:22:58] How that presentation experience impacted the way that Diana makes presentations with other companies
[00:23:28] How to present people-first ideas to people in organizations who are skeptical
[00:25:47] Exercises that can help with presenting alternative ideas
[00:34:55] How clients can use the results of their exercises
[00:36:57] Steps organizations that want to change can take
[00:38:12] What to do when one person wants to have an authentic conversation and another person does not
[00:41:34] What kind of changes Diana has seen in organizations that have embraced conscious leadership
[00:42:26] What flow states are
[00:43:30] How Diana would help people understand the benefits of a people-first approach
[00:48:02] One resource that Diana recommends
[00:49:03] The Enneagram system
Tim's doing a new experiment. (I'm not surprised.) He's looking at people and asking himself one question... "What happened to this person?" He said, "Normal people are just folks you don't know well enough yet, right? Nobody's normal. We're so full of stuff and trauma and nonsense and silly beliefs. Everyone's a work in progress and since you're a work in progress, it's very hard to know yourself." He gave me an example. But didn't name names. "There was this woman who had some very peculiar emotions. It turned out that she had watched her father beat her mother into unconsciousness on multiple occasions... knocked out, unconscious, on the floor. And that was just the tip of the iceberg." She's acting in response to her past. Not her present. I think that's what Tim means when he said, "we're cause and effect collection machines." And that's really where advice comes from... the intersection between cause, effect, and hindsight. I feel Tim's really mastered this new intersection. He's embracing being "a work in progress." That's what makes his new book so relatable. It's called "Tribe of Mentors: Short Life Advice from the Best in the World." He reached out to Matt Ridley, Steven Pressfield, Dustin Moskovitz, Naval Ravikant, Patton Oswalt, Susan Cain, Ben Stiller, Annie Duke... the list goes on and on. (But don't worry! I'm in the next book, "Tribe of ALMOST Mentors".) Each person in the book dissects their success. They slice it open, dig through the guts and give you the heart. They show you HOW they became a peak performer. And the best part is it's all through Tim's lens. Make sure to read the full show notes here: https://jamesaltucher.com/2017/11/tim-ferriss-3/ And don't forget to subscribe to "The James Altucher Show" on Apple Podcasts or wherever you get your podcasts!

------------

What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience!
Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!

------------

Visit Notepd.com to read our idea lists & sign up to create your own!
My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold!
Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President.
I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com

------------

Thank you so much for listening! If you like this episode, please rate, review, and subscribe to "The James Altucher Show" wherever you get your podcasts: Apple Podcasts, iHeart Radio, Spotify.
Follow me on social media: YouTube, Twitter, Facebook, LinkedIn.
Podcasting from Cork, Ireland, for 43 minutes you'll hear Roger Overall & Paul O'Mahony riff off each other, playing "You can't do it alone". Topic: Attracting Staff, Partners & Funders.

In Episode 13 of Season 3, we help you think about your staff & business partners. We want you to become more attractive in business today.

We discuss
What about the staff, employees, partners? [1.10]
You need a partner [1.35]
Why one-person businesses don't work [1.45]
The squabble between Roger & Paul [4.05]
With staff comes headaches [4.50]
You need to find right people and able people [5.30]
You have to make yourself an appealing employer [5.45]
The flower shop story - Best of Buds in Cork [6.20]
One of Paul's clients makes software for universities [7.30]
Every business is always in competition for staff [8.40]
Roger goes for water [9.07]

We also discuss ....
Republic of Work people & Partners [10.00]
The importance of signage & balloons for business [11.20]
Solopreneurs [11.30]
BusinessJazz has Lennon & McCartney [13.30]
You need to attract great people to work with you [13.55]
How can you afford to take on new staff? [14.25]
One of the most important points about business [15.30]
Funding the business [16.00]
What about doing the work [16.35]
The Vaynerchuk approach [16.05]
Saving money - having reserves [17.35]
The behaviour of banks [17.50]
Raising money from investors [18.20]
What potential investors look for [18.45]
Never borrow from family? [19.10]
Kickstarter - the Peter Cox story [19.50]
How Peter Cox failed to hit his target [22.10]
Peter Cox needed to attract his "ideal" staff [24.30]
Writing a 'person specification' [25.05]
Selecting staff - avoiding 'bullshit' [26.00]
Getting on (gelling) with people in the company [28.40]
The company's angle - the employee's job [29.45]
"How can I make myself genuinely attractive in business today?" [30.05]
Treating an employer as a customer [30.20]
Being 'irreplaceable' [31.00]
Paul can't remember the point he was going to make [30.45]
Best example of a company making its staff feel really valued [31.50]
Roger's South Africa story [32.30]
Providing employees with the right equipment [34.55]
The right amount of pressure [35.20]
Summary [37.15]

We offer you our best advice: (1) You can't do it all on your own - you must find supporters. (2) This podcast is relevant to you no matter what your business is.

Appeal to our supporters: Please tell one other person about #BusinessJazzPodcast

We spoke fondly of: John Lennon & Paul McCartney, Mick Jagger & Keith Richards, Simon & Garfunkel, Facebook (Mark Zuckerberg, Eduardo Saverin, Andrew McCollum, Dustin Moskovitz & Chris Hughes), T S Eliot & Ezra Pound, Prince, Best of Buds (Cork), Akari Software, Republic of Work, Bank of Ireland, Nespresso, Teamwork.com (Peter Coppinger & Dan Mackey), Sherlock Holmes, Show&Tell Communications (Roger's company), Change Agents Branding, Jonathan Amm, Gary Vaynerchuk, Dragons' Den, Kickstarter, Peter Cox Photography, Michael Port, MS Word & Excel.

Let's work together – fondly
Roger Overall @rogeroverall Show&Tell Communications (and Stierkater.com - running & cartoons)
Paul O'Mahony @omaniblog BusinessJazz Services @bizjazzpodcast
Mark Cotton @mcfontaine www.audiowrangler.co.uk – for your audio requirements – Bletchley Park Podcast
When Facebook's founding engineer and Vanderbilt alumnus, Jeff Rothschild, first visited the fledgling company in 2005 at the behest of a venture capital firm, he was wary of the social media space in general and didn't expect to stay more than two weeks. But then "I got to meet Mark (Zuckerberg) and Dustin Moskovitz and the other early members of the team and really fell in love with the vision they had for Facebook," Rothschild told Vanderbilt University Chancellor Nicholas S. Zeppos in an interview for his latest podcast, The Zeppos Report. Rothschild, who earned a bachelor's degree in psychology in 1977 and a master's in computer science in 1979, told Zeppos that the Facebook of today closely resembles the original vision of connecting families and friends that the team laid out when it first started. "All of the people who joined Facebook in the early days cared about the mission of the company," said Rothschild, who served as vice president of Infrastructure Software at Facebook from 2005 until 2015. "Very few of them thought, 'I'm joining a hot startup.'" In addition to sharing his perspective on what made Facebook successful where other competitors failed, Rothschild talks to Zeppos about the benefits of "collateral knowledge," as well as data analytics and automation coming together to usher in a new era of augmented intelligence. Before Facebook, Rothschild co-founded Veritas Software. He continues to serve as a consulting partner at the venture capital firm Accel Partners. As of July 1, 2017, he will begin a term as vice chairman of the Vanderbilt Board of Trust. In December 2016, Rothschild and his wife, Marieke, made a $20 million gift through a donor-advised fund to support the development of Vanderbilt's residential colleges program, College Halls. Rothschild joined Zeppos on campus for the interview, which took place on April 21, 2017. The podcast is available on SoundCloud, Stitcher, Google Play, iTunes, YouTube and The Zeppos Report website. For a transcript of this podcast, please go to this URL: https://s3.amazonaws.com/vu-wp0/wp-content/uploads/sites/79/2017/10/24185247/The_Zeppos_Report_6_with_Jeff_Rothschild.docx Media Inquiries: Melanie Moran, (615) 322-NEWS melanie.moran@vanderbilt.edu
Entrepreneur, writer, speaker, and much more. For his new book, Kevin Kruse interviewed 200+ entrepreneurs, Olympians and billionaires, including Mark Cuban, Kevin Harrington and Dustin Moskovitz. Hear his story on The Nice Guys today. Read below to see how you can get a free copy of Kevin's new book, "15 Secrets Successful People Know About Time Management".

Here's what we learned about Kevin during the interview:
He sold his small community bank for over $100 million to Provident Bank
He built a dozen new school libraries in China and Vietnam
His credo is "Life is about making an impact, not about making an income."

Here are some of his time management tips:
Value your own time
You should throw away your to-do list
What was Mark Cuban's #1 time management secret? Default to "No", not "Yes"
Don't forget "Me" time

Kevin's newest book was released yesterday; it's called 15 Secrets Successful People Know About Time Management. You can get it on Amazon, or Nice Guys on Business listeners can get a free paperback copy from www.15TimeSecrets.com; you just need to cover shipping and handling.

Here's how to reach Kevin: Kevin@kevinkruse.com http://www.kevinkruse.com/

Subscribe to the Podcast
Want to get pinned on our listener map? Just go to http://www.dougsandler.com/podcast-by-the-nice-guys/ and answer the question: where are you from? And we'll add you to the map. You can see it here - http://www.niceguysonbusiness.com/services.html
Follow-up: Why not use push notifications for the iPhone 6 Plus iSight replacement program
Apple, attitude, "cool", and change
Songs of Innocence
iPod Hi-Fi
Force Touch and Taptic Engine in the iPhone 6S
Upgrade #51
New York Times on Amazon's stressful culture
Jeff Bezos' rebuttal
Ballmer Peak
Creativity, Inc.
Dustin Moskovitz on 40-hour work weeks
Work-life balance and workaholism
What's necessary for a successful startup?
Post-show: John's well-honed capacity for doing nothing
Addiction to coffee and large monitors
Follow-up on Casey's computer options

Sponsored by:
Warby Parker: Boutique-quality, vintage-inspired eyewear at a revolutionary price.
Harry's: An exceptional shave at a fraction of the price. Use code ATP for $5 off your first purchase.
Hover: The best way to buy and manage domain names. Use coupon code UNEVENTFULWEEK for 10% off your first purchase.
Our system of education is built on the belief that learning is best achieved by bringing the best of the past forward through expert advice and clear example. Consequently, educators rise through the ranks like officers in the military: through compliance and conformity to the norm. But in this era of quantum change, are we really best served by imitating the past? Let's look at two characteristics the innovative leaders of today all seem to have in common: 1. They tend to be college dropouts. Steve Jobs of Apple, Bill Gates and Paul Allen of Microsoft, Jack Dorsey, Evan Williams and Biz Stone of Twitter, Mark Zuckerberg, Dustin Moskovitz and Sean Parker of Facebook. Dropouts, all. The list goes on and on. 2. They have no fear of failure. Innovative leaders experiment constantly because they see failure as an unavoidable step toward success. These leaders know the truth about failure; it's an extremely temporary condition, a fleeting moment, nothing to be feared. Failure is motion and motion is life. Educators hesitate to experiment because they fear failure and reprimand. Consequently, the average teacher with 20 years' experience really has just 1 year's experience 20 times. In the October 22 issue of the New York Times, researcher Michael Ellsberg wrote, "Entrepreneurs must embrace failure. I spent the last two years interviewing college dropouts who went on to become millionaires and billionaires. All spoke passionately about the importance of their business failures in leading them to success. Our education system encourages students to play it safe and retreat at the first sign of failure… Certainly, if you want to become a doctor, lawyer or engineer, then you must go to college. But, beyond regulated fields like these, the focus on higher education… is profoundly misguided." Pennie had a fantastic idea while we were taking our morning walk. As she explained it to me, I realized her plan would make solid education more widely available, more relevant to the student and save a great deal of money as well. "Princess," I said, "if someone isn't already doing this, they will be soon. This is the right idea at the right time so it's highly likely that lots of people are having this same idea right now." I was right. Salman Khan already has the project well underway. Pennie's idea – and Khan's – is to harness YouTube to deliver 10-to-12-minute tutorials in an effort to fill the painful gaps in public education. Stanford University professor Philip Zimbardo recently said, "There is a disaster recipe developing among boys in America dropping out of high school and college. And it's not simply poor performance. One of the problems is, a recent study shows, that by the time a boy is 21, he has spent at least 10,000 hours playing video games by himself, alone… They live in a world they create. They're playing Warcraft and these other games which are exciting… Their brains are being digitally rewired, which means they will never fit in a traditional classroom, which is analog. Somebody talks at you without even nice pictures. Meaning it's boring. You control nothing. You sit there passively. Disaster. These kids will never fit into that. They have to be in a situation where they are controlling something. And school is set up where you control nothing." Video allows the world's best teachers to be everywhere simultaneously. 
And if you eliminate the time spent for roll call, bad behavior, discipline, silent reading and working on exercises, there's rarely more than 10 minutes of real teaching delivered during the average class-hour. Tightly scripted 10-minute videos allow the quicker students to move at 5 to 6 times their current pace while slower students are free to pause and rewind as often as they feel necessary....