Non-profit organization promoting open-source software
Last month, the Open Source Initiative held an election where the votes were tampered with in order to exclude "reform" candidates. Now many fear "retribution" if they speak out. More from The Lunduke Journal: https://lunduke.com/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe
The "Hacker-in-Residence" of the Software Freedom Conservancy (and past Executive Director of the Free Software Foundation) talks about Open Source Initiative election rigging. More from The Lunduke Journal: https://lunduke.com/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe
"Using proprietary software is a non-negotiable requirement for Board participation [in Open Source Initiative]." More from The Lunduke Journal: https://lunduke.com/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe
Diversity! Inclusive! Community! Nobody Cares! Bonus: The UN "Open By Default" policy can only be signed... with Closed Source Microsoft Office 365. More from The Lunduke Journal: https://lunduke.com/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe
Deb Bryant discusses her career journey and the significant role of open source software in public policy, particularly in the US and Europe. She highlights her work with the Open Source Initiative, Oregon State University, and Red Hat, emphasizing the importance of open source in government operations and cybersecurity. Deb also addresses the challenges and evolution of open source policies, the critical need for sustainability in open source projects, and her current focus on AI's impact on the ecosystem. She concludes by advocating for harmonized international regulations and human-centered AI approaches. 00:00 Introduction 00:44 Government and Open Source Software 01:38 Experiences in the Private Sector 02:14 Open Source in Public Policy 04:31 Cybersecurity and Open Source 07:42 Sustainability in Open Source 15:05 Future of Open Source and AI 18:53 Conclusion and Final Thoughts Guest: Deb Bryant, Director, US Policy and Founder, Open Policy Alliance Open Source Initiative Throughout her career, Deborah has lent her voice to supporting open source projects and developers, building bridges between academia, industry, non-profits, and government along the way. Today she provides guidance to open source foundations seeking to support public policy development in open technology domains. She has worked in emerging technology and has been an advocate of free and open source software and the community that makes it so since the 1990s. Deborah is board director emeritus at the Open Source Initiative (OSI); serves on the DemocracyLab board; serves on the advisory boards of Open Source Elections Technology Foundation and the OASIS Open Project, and as an advisor to the Brandeis University Open Technology Management program. She also represents OSI as a member of the Digital Public Goods Alliance. For eight years prior to her reentry into the nonprofit world, she led one of the world's largest open source program offices (OSPO) at Red Hat where her global team was responsible for the company's strategy and stewardship in open source software communities. While at Red Hat she served on the Eclipse Foundation board for two years. Deborah's published academic research includes the Use of Open Source in Cybersecurity in the Energy Industry and Collaborative Models for Creating Software in the Public Sector.
In September 2019, Richard Stallman, a prominent computer scientist and founder of the Free Software Foundation (FSF), resigned from both the Massachusetts Institute of Technology (MIT) and the FSF following controversial comments related to the Jeffrey Epstein case. Specifically, Stallman questioned the use of the term "sexual assault" concerning allegations against the late MIT professor Marvin Minsky, suggesting that the victim may have appeared "entirely willing." These remarks were widely criticized as insensitive and dismissive of the coercive circumstances surrounding Epstein's trafficking of minors. The backlash against Stallman's comments led to his immediate resignation from both institutions. However, in March 2021, he announced his return to the FSF's board of directors, a move that sparked renewed controversy and led to significant criticism from the open-source community. Organizations such as Mozilla and the Open Source Initiative opposed his reinstatement, citing concerns over his past behavior and statements. Leon Botstein, president of Bard College, engaged in a controversial relationship with Jeffrey Epstein, a convicted sex offender, by accepting donations and maintaining contact even after Epstein's 2008 conviction. Epstein contributed $75,000 and 66 laptops to Bard in 2011, and in 2016, he personally gave Botstein $150,000, which Botstein redirected to the college as part of his own $1 million donation. Botstein defended these actions by emphasizing his fundraising responsibilities and Bard's commitment to rehabilitation, stating, "We believe in rehabilitation." Despite knowing Epstein's criminal history, Botstein met with him over a dozen times to solicit further donations, raising ethical questions about engaging with disreputable donors. Botstein acknowledged Epstein's past but justified the interactions as part of his role in securing funding for the college, reflecting the complex dynamics between institutional fundraising and ethical considerations. To contact me: bobbycapucci@protonmail.com
The OSI has tried for years to control "Open Source"... and they've failed. Their quest for "Open Source" power is a dramatic one: Threats, Trademark Disputes, & Lies. More from The Lunduke Journal: https://lunduke.com/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe
Today we're talking about the Open Weight Definition, an initiative that aims to clarify what an open source AI really means. You may have heard about the efforts to define open source AI, but disagreements persist. That's why the Open Source Alliance is proposing its own vision through this new framework. So what does the Open Weight Definition consist of? Here are three key points to remember. First, it emphasizes the accessibility of model weights. As a reminder, weights are the crucial numerical values that define how an AI model behaves after training. The OWD, that is, the Open Weight Definition, wants to guarantee that these weights are accessible to researchers and developers. Second, it introduces a notion of data transparency. There is no need to make every training dataset public, but their origin and collection methods must at least be documented. This requirement aims to build trust without necessarily exposing sensitive data. Third, architecture transparency: the idea is to allow experts to analyze, modify, and improve models without having to start from scratch. But this definition also has a political dimension. The Open Source Alliance is trying to assert itself against another organization, the Open Source Initiative, by proposing a sort of "Open Source 2.0", that is, a new version of the traditional definition of open source meant to better cover the specifics of AI. Though when I say "traditional definition", that should be put in perspective, since the first version of the open source AI definition dates back barely... three months. Still, some experts, such as attorney Heather Meeker, point out that AI model weights are not source code, and that they therefore cannot be treated the same way. Hence her idea of a dedicated license, the Open Weights Permissive License, to govern their use. In short, the Open Weight Definition is an attempt at standardization that reflects the unique challenges of AI. But in a market dominated by a few major players and with regulations still unclear, it remains to be seen whether this initiative will really take root. Le ZD Tech is on every podcast platform! Subscribe! Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
In this episode, Katherine speaks with Nick Vidal, Community Manager at the Open Source Initiative (OSI), about his role and the organization's work in defining open source AI. Nick shares insights into the challenges and discussions surrounding AI, software licenses, and the necessity for clear definitions and community consensus. He also elaborates on the Clearly Defined project aimed at securing the software supply chain and the importance of community feedback in evolving the OSI's stance on open source AI. 00:00 Introduction and Guest Introduction 00:37 Nick Vidal's Role at OSI 01:04 Community Involvement and Challenges 03:43 Defining Open Source AI 06:21 Handling Feedback and Criticism 13:14 Overview of Open Source AI Definition 16:16 Future Plans and Community Involvement 18:09 Closing Remarks and Invitation to Join Resources: The Open Source AI Definition Guest: Nick Vidal is Community Manager at the Open Source Initiative and former Outreach Chair at the Confidential Computing Consortium from the Linux Foundation. Previously, he was the Director of Community and Business Development at the Open Source Initiative and Director of Americas at the Open Invention Network.
Paris Marx is joined by tante to discuss troubling developments in the open source world as Wordpress goes to war with WP Engine and a new definition of open source AI doesn't require being open about training data. tante is a sociotechnologist, writer, speaker, and Luddite working on tech and its social impact. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry. Also mentioned in this episode: tante wrote about the problem with the Open Source Initiative's definition of open source AI. Check out this link for the full breakdown on the Wordpress drama. Wordpress changed its trademark guidelines on September 19 regarding the use of the WP abbreviation. Tumblr and Wordpress started selling user data for AI training earlier this year. A lot of the controversy around Richard Stallman started blowing up in 2019. Support the show
A new study reveals how large language models (LLMs) encode truthfulness internally. The research focused on specific response tokens that determine correctness across various models, indicating that LLMs have a structured way of representing truthfulness. This finding could lead to improved reliability in AI outputs, particularly in critical applications like healthcare, where inaccuracies can have serious consequences. The episode also highlights the release of the Open Source AI Definition 1.0 by the Open Source Initiative, which aims to clarify what constitutes open-source AI. This new standard requires AI models to disclose detailed information about their design and training data, addressing concerns about transparency in the AI development space. Sobel emphasizes the importance of this definition for IT leaders and developers, as it provides a framework to assess models for true openness, thereby reducing reputational risks and legal liabilities associated with unverified datasets. In addition to these developments, Sobel covers the launch of AI-powered features by TeamViewer, designed to enhance remote support efficiency for IT teams. The new tools, called Session Insights, automatically summarize sessions and provide analytics, which can significantly improve decision-making and handovers. GitHub also announced updates to its coding assistant, GitHub Copilot, which will soon support new large-language models, enhancing developer choice and functionality. Meanwhile, LinkedIn introduced its AI Hiring Assistant to streamline the recruiting process, allowing recruiters to connect with potential candidates more efficiently. Finally, Sobel discusses Cisco's new 360 Partner Program, which aims to modernize infrastructure and enhance the value partners deliver to customers. The program will focus on skill development and solution-based specialization, reflecting a shift in how partners will operate in the evolving tech landscape. The episode concludes with a call for caution regarding the full automation of processes that rely on AI-generated outputs, stressing the need for review and verification policies to mitigate risks associated with AI inaccuracies. Four things to know today: 00:00 New Study Finds LLMs Encode Truthfulness Internally, Offering Potential to Reduce Hallucinations in AI Responses 02:54 OSI's Open Source AI Definition 1.0 Sets New Benchmark for Transparency, Targeting ‘Open in Name Only' Models 04:35 TeamViewer, GitHub, and LinkedIn Launch AI Innovations for IT 07:22 Cisco Transitions Partners to Solution-Based Specializations with New Program. Supported by: https://mspradio.com/engage/ All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/ Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch?
Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on: LinkedIn: https://www.linkedin.com/company/28908079/ YouTube: https://youtube.com/mspradio/ Facebook: https://www.facebook.com/mspradionews/ Instagram: https://www.instagram.com/mspradio/ TikTok: https://www.tiktok.com/@businessoftech Bluesky: https://bsky.app/profile/businessoftech.bsky.social
I speak with Stephen Hood of Mozilla about their joint efforts with the Open Source Initiative to create a definition of open source AI. Read all about the announcement. Learn more about the definition. For show notes and an interactive transcript, visit chrischinchilla.com/podcast/ To reach out and say hello, visit chrischinchilla.com/contact/ To support the show for ad-free listening and extra content, visit chrischinchilla.com/support/
This week on The Business of Open Source, I spoke with Stefano Maffulli, Executive Director of the Open Source Initiative, about the definition of open source and… the definition of open source AI. We recorded this episode on-site at All Things Open, so there's a little bit of background noise. We talked about why OSI felt like it needed to develop a definition of open source AI, how “open source” is enforced, and the thought process behind the definition that the OSI ultimately published. We talked about open data quite a bit — different kinds of data, what kind of information and data is important to researchers and professionals in the AI space, and if there's a way to include AI models that are trained on proprietary data in the definition of open source AI. If you are interested in open source AI, definitely check out this behind-the-scenes discussion of how, and why, this definition was published — and what the future likely holds for defining open source AI.
Indonesia blocks sale of iPhone 16 due to local investment concerns, the Open Source Initiative releases first version of Open Source AI Definition, Netflix rolls out “Moments” for sharing favorite scenes. MP3 Please SUBSCRIBE HERE. You can get an ad-free feed of Daily Tech Headlines for $3 a month here. A special thanks to all... Continue reading “Apple Updates 24″ iMac With M4 Chip – DTH”
In this episode, SD Times editor-in-chief David Rubinstein interviews Ayah Bdeir, senior strategic advisor at Mozilla, about the Open Source Initiative's (OSI) effort to define Open Source AI. Key talking points include: why there's a need to have a definition for open source AI; the controversy over how to classify training data; and Mozilla's collaboration with OSI on this work. Open Source AI definition: https://opensource.org/osd
The COVID vaccination campaign still raises questions, among them why natural immunity was denied even though it was apparently superior to vaccination. An article by Bastian Barucker, published in the Berliner Zeitung's Open Source initiative. Article: https://archive.is/xbRL3#selection-923.0-923.165 Read by Adam Nümm: https://adamnuemm.de Support my work: https://blog.bastian-barucker.de/unterstuetzung/
"Zuerst steht die juristische Untersuchung, dann die politische Aufarbeitung. Eine Replik auf den Spiegel-Artikel von Strafrechtler Fischer, der eine „unangenehme Neigung zur Aufarbeitung“ sieht. Ein Artikel aus der Open-Source-Initiative der Berliner Zeitung https://www.berliner-zeitung.de/open-source/corona-und-das-gesellschaftliche-gift-der-verdraengung-li.2260562 Eingesprochen von Adam Nümm: https://adamnuemm.de/about/ Produktionskosten: ca. 150 € Meine Arbeit unterstützen: https://blog.bastian-barucker.de/unterstuetzung/ Bildquelle: https://wolter-hoppenberg.de/team/sebastian-lucenti/
"Im Folgenden werde ich argumentieren, dass die staatlichen Antworten auf das Auftreten des neuen Corona-Virus in den Kontext eines globalen Biosecurity-Dispositivs zu stellen sind, das militärischer Natur ist. Gemeint ist damit eine dem Militärischen entnommene Vorstellung von Sicherheit, die diese primär unter dem Gesichtspunkt von biologischen Bedrohungen wahrnimmt. Zu einem Dispositiv gehört aber nicht nur eine bestimmte Denkweise und die damit verbundene Problemwahrnehmung, sondern alle praktischen Instrumente, die es braucht, um diese in die Realität umzusetzen. Die eigentliche Gefahr scheint mir deshalb heute weniger von rechts auszugehen, als von dieser an sich schon schwer verständlichen und daher kaum greifbaren Gemengelage von linken Werthaltungen, Kapitalinteressen und supranationalen, westlich-kapitalistischen Netzwerken, die sich unter dem Dispositiv von Biosecurity zusammengefunden haben: Eine Gefahr, die sich unter dem Vorwand des „Schutzes des Lebens“ und dem Appell an die Solidarität als linkes Projekt zu tarnen vermag, obwohl es sich um den bisher in der Geschichte des Kapitalismus vermutlich umfassendsten Zugriff des Kapitals auf unser Leben handelt. Tove Soiland ist Historikerin. Sie ist Mitbegründerin von Linksbündig, einer politischen Organisation aus der Schweiz, die sich der Aufarbeitung der Coronakrise aus einer dezidiert linken Perspektive widmet. Ebenso ist sie Mitglied des Kollektivs Feministischer Lookdown." Dieser Beitrag wurde in der Open-Source-Initiative der Berliner Zeitung veröffentlicht: https://www.berliner-zeitung.de/open-source/corona-krise-verschwoerung-oder-zufall-weder-noch-li.2256408 Eingesprochen von Adam Nümm: https://adamnuemm.de/about/ Fotoquelle: https://www.linksbuendig.ch Meine Arbeit unterstützen: https://blog.bastian-barucker.de/unterstuetzung/
There are simply too many examples: Mozilla, NixOS, Python, Open Source Initiative, openSUSE, Red Hat, GNOME, and so many others. More from The Lunduke Journal: https://lunduke.com/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe
The RKI files reveal that political influence on the assessment of the threat level was enormous. This shows up especially in how incidence values and mass testing were handled. Article: https://www.berliner-zeitung.de/open-source/rki-files-zu-corona-gefaehrlichkeit-wie-jens-spahn-die-pandemie-herbeigetestet-hat-li.2245449 This piece was submitted as part of our Open Source initiative. With Open Source, the Berliner Verlag gives anyone interested the opportunity to submit texts of substantive relevance and professional quality. Selected contributions are published and paid for. Read by Andreas Sparberg: https://sparberg.de
In this episode, we chat with Luis Villa, co-founder of Tidelift, about everything from supporting open source maintainers to coding with AI. Luis, a former programmer turned attorney, shares stories from his early days of discovering Linux, to his contributions to various projects and organizations including Mozilla and Wikipedia. We discussed the critical importance of open source software, the challenges faced by maintainers, including burnout, and how Tidelift works toward compensating maintainers. We also explore broader themes about the sustainability of open source projects, the impact of AI on code generation and legal concerns, and the need for a more structured and community-driven approach to long-term project maintenance. 00:00 Introduction 03:20 Challenges in Open Source Sustainability 07:43 Tidelift's Role in Supporting Maintainers 14:18 The Future of Open Source and AI 32:44 Optimism and Human Element in Open Source 35:38 Conclusion and Final Thoughts Guest: Luis Villa is co-founder and general counsel at Tidelift. Previously he was a top open source lawyer advising clients, from Fortune 50 companies to leading startups, on product development, open source licensing, and other matters. Luis is also an experienced open source community leader with organizations like the Wikimedia Foundation, where he served as deputy general counsel and then led the Foundation's community engagement team. Before the Wikimedia Foundation, he was with Greenberg Traurig, where he counseled clients such as Google on open source licenses and technology transactions, and Mozilla, where he led the revision of the Mozilla Public License. He has served on the boards at the Open Source Initiative and the GNOME Foundation, and been an invited expert on the Patents and Standards Interest Group of the World Wide Web Consortium and the Legal Working Group of OpenStreetMap. Recent speaking engagements include RedMonk's Monki Gras developer event, FOSDEM, and as a faculty member at the Practicing Law Institute's Open Source Software programs. Luis holds a JD from Columbia Law School and studied political science and computer science at Duke University.
Plus Why You Need A Second Brain Now Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us AI Brains in a Humanoid Robot: Meet Figure 02 The Figure 02 humanoid robot has been unveiled, showing advancements from its predecessor, Figure 01. The robot's wires are more concealed, indicating a shift towards a production model. Equipped with six cameras, the robot has human-like hands with increased strength, able to carry 25kg. It features improved battery life, a partnership with OpenAI for conversational capabilities, and is being tested by BMW for automotive manufacturing. Why You Need A Second Brain Now (And How AI Can Help You Build It) The concept of a "second brain," popularized by Tiago Forte, helps store and organize information our brains can't retain. This involves capturing, organizing, distilling, and expressing data. AI can simplify this process, especially in organizing and distilling information, using tools like TextCortex, Mem, and Notion. Can AI Even Be Open Source? It's Complicated AI relies heavily on open source, yet major AI vendors hesitate to fully open-source their programs. The complexity lies in defining open-source AI, which is complicated by the intertwining of software and data. The Open Source Initiative is working with tech companies to develop a definition by October 2024. Google Follows Microsoft's Sneaky Route to Swallow the AI Market Big Tech companies like Google are acquiring AI talent by hiring leadership and licensing intellectual property rather than buying startups, avoiding antitrust scrutiny. Google hired Character.ai's co-founders and will license their technology. This trend, seen in Microsoft's and Amazon's actions, consolidates power in AI under major tech firms. 10 Reasons Why AI May Be Overrated While AI garners substantial media attention and investment, skeptics argue its impact is overstated. MIT economist Daron Acemoglu doubts AI's economic revolution, calling it “autocorrect on steroids.” Limitations include overhyped capabilities, environmental impact, and challenges with AI's intelligence and reliability, suggesting AI may enhance rather than replace human work. AI Accurately Turns Thoughts into Images Researchers at Radboud University have advanced AI to reconstruct images from brain activity with unprecedented accuracy. By analyzing fMRI scans and electrode recordings, the AI successfully recreated images viewed by humans and monkeys. This innovation could lead to treatments for vision loss, enhance communication for those with impairments, and revolutionize neuromarketing and entertainment.
Guests Tracy Hinds | Ashley Williams Panelist Richard Littauer Show Notes On today's episode of Sustain, host Richard Littauer is joined by guests, Tracy Hinds and Ashley Williams, to discuss the structural inequities and funding issues in open source. The episode delves deep into the misaligned incentives in the open source community, how regulatory and policy awareness is growing, and the potential for government regulations to create opportunities for open source maintainers. The conversation also covers the roles of various open source foundations, the impact of large corporations, and the need for more effective advocacy and compensation avenues for contributors. Tracy and Ashley announce their involvement in a working group focused on the European CRA legislation, aiming to bridge gaps between maintainers and policymakers. Press download now! [00:02:22] Ashley responds to Richard's comment about everything being “totally screwed” in open source, but also points out misaligned incentives. She discusses the economic challenges of open source, such as the failure of sustaining efforts and its broader economic impact. [00:04:54] Richard mentions his other podcast “Open Source for Climate” which focuses on leveraging open source technology to combat the climate crisis. [00:06:10] There's a discussion about potential regulatory and policy changes affecting open source, highlighting the need for a more equitable system. Ashley delves into economic theories relating to open source, particularly the concept of externalities and potential regulatory solutions, and upcoming regulations like the software bill of materials. [00:10:05] Tracy stresses the importance of involving open source maintainers in policy discussions to avoid misrepresentation by larger organizations alone. [00:11:47] Richard and Ashley discuss the representations of open source interests in policy making, particularly the dominance of large companies and the potential exclusion of individual maintainers. [00:16:04] Ashley critiques many language-based foundations for their minimal contribution to ecosystem, using Node Foundation as an example of one that has been beneficial due to its library ecosystem, notably NPM. [00:17:35] Tracy acknowledges the efforts of the Python Software Foundation (PSF) and Open Collective in fostering ecosystems that support paid contributors, emphasizing the importance of these roles for sustainability. [00:19:50] Richard notes that while centralized support like AWS services vouchers are helpful, these foundations do not effectively facilitate crucial conversations between maintainers and governments regarding open source regulation and standardization. [00:21:52] Ashley reflects on her experience as the Individual Membership Director at the Node Foundation, discussing the challenges of representing a diverse community within open source projects and foundations. [00:24:45] Tracy mentions her role as the first community seat director on the board, highlighting the evolution and ongoing adjustments in community representation within foundation governance. Also, she discusses the importance of involving individual maintainers in regulatory discussions. [00:27:47] Tracy talks about the economic opportunities in open source, facilitated by platforms like GitHub Sponsors and Patreon, which help reduce barriers for maintainers seeking financial support for their projects. 
[00:29:20] Ashley puts a small spin on Tracy's optimistic view, noting significant opposition to the empowerment of small open source businesses, primarily due to corporate-dominated structures and antitrust-friendly environments in tech. She argues that open source has been consolidating. [00:33:29] Ashley fills us in on where you can follow her and their future discussions. She mentions a working group at the Eclipse Foundation focusing on CRA legislation, announcing an initiative to gather maintainer feedback on this legislation through a reading group. [00:35:42] Tracy mentions where you can find her online. Quotes [00:03:30] “We have open source – people who maintain open source don't really make a lot of money from it. Attempts to sustain open source have largely failed.” [00:06:24] “Every OSS hacker is also incentivized to be a lawyer.” Spotlight [00:36:32] Richard's spotlight is Jingna Zhang and her new social network, Cara. [00:37:25] Tracy's spotlight is the book, Working in Public: The Making and Maintenance of Open Source Software. [00:38:09] Ashley's spotlight is exercising for mental health. Links SustainOSS (https://sustainoss.org/) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (email) (mailto:podcast@sustainoss.org) richard@theuserismymom.com (email) (mailto:richard@theuserismymom.com) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Socials (https://www.burntfen.com/2023-05-30/socials) Tracy Hinds X/Twitter (https://x.com/hackygolucky?lang=en) Tracy Hinds Mastodon (https://mastodon.social/@hackygolucky) Sustain Podcast-Episode 135 featuring Tracy Hinds (https://podcast.sustainoss.org/guests/hinds) Ashley Williams Twitter (https://x.com/ag_dubs) Ashley Williams LinkedIn (https://www.linkedin.com/in/ashleygwilliams/) Sustain Podcast-Episode 145 featuring Ashley Williams (https://podcast.sustainoss.org/guests/williams) Open Source Initiative (https://opensource.org/) OSS for Climate Podcast (https://ossforclimate.sustainoss.org/) Eclipse Foundation (https://www.eclipse.org/org/foundation/) Jingna Zhang (https://www.zhangjingna.com/) Cara (https://cara.app/login) Working in Public: The Making and Maintenance of Open Source Software by Nadia Eghbal (https://www.amazon.com/Working-Public-Making-Maintenance-Software/dp/0578675862) Sustain Podcast-Episode 51 featuring Nadia Eghbal (https://podcast.sustainoss.org/guests/nadia) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guests: Ashley Williams and Tracy Hinds.
The Open Source Initiative -- backed by Microsoft, Amazon, Meta -- is pushing for a "Closed" definition of "Open Source Artificial Intelligence." More from The Lunduke Journal: http://lunduke.com This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe
ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI
Venture into the tech frontier as Meta takes a bold step, open-sourcing the AI audio platforms MusicGen, AudioGen, and EnCodec. Join this episode to explore the frontiers of technology, gain insights into challenges faced, and discuss the potential advancements in AI-powered audio creation.
In this episode, we decode Microsoft's EvoDiff, an open-source initiative shaping the future of protein-generating AI. Join me for an insightful discussion on the implications and applications of this cutting-edge technology in the realm of scientific discovery. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community Learn About ChatGPT Learn About AI at Tesla
Guest Heather Meeker Panelist Richard Littauer Show Notes In this episode, host Richard Littauer welcomes renowned author and open source lawyer, Heather Meeker, in our first venture into video format. Heather discusses her journey from being a ‘big law' lawyer to focusing specifically on open source matters. She talks about her latest book, From Project to Profit: How to Succeed in Commercial Open Source, and the valuable insights it provides for entrepreneurs and developers looking to transform their open source projects into successful businesses. The conversation also delves into the significance of open source, economic analysis, and the mission of the Open Source Initiative. We end with Heather sharing her all-time favorite open source project, Audacity, and why she thinks it's a perfect example of an exquisite open source project. Press download to hear more! [00:01:49] Heather talks about her current practice and how she's focusing on open source matters after leaving big law firms, driven by pandemic-induced life choices, and she touches on her involvement with AI-related issues. [00:04:18] Richard asks about Heather's transition to writing for the public, and she details her journey of writing articles since the late '90s and the process of creating her books. [00:06:41] We hear about Heather's book, From Project to Profit, and its focus on the business potential of open source. She discusses the audience and motivation behind the book. [00:10:17] Heather describes the book's layout: case studies, economic analysis, business models, and a final checklist for starting an open source business. [00:11:31] We learn about the checklist and the thought process behind starting an open source business. [00:13:18] Heather acknowledges that there are suggestions beyond VC funding, relating it to family businesses, which may not grow large but can provide a living and enjoyment. She tells us the book discusses setting realistic goals for open source projects and understanding when it's appropriate to seek professional investment. [00:15:39] Richard talks about community projects that aim to be sustainable without necessarily seeking significant investments. Heather explains most small open source projects start as labors of love and discusses the motivations behind starting such projects, and she notes the commitment required to build a business. [00:19:16] Richard inquires about the fund that invests in open source projects. Heather describes OSS Capital, focusing on early-stage commercial open source software development, unique in its dedicated investment thesis. [00:21:15] Heather shares that the fund often approaches founders proactively, differing from traditional VC operations. [00:22:21] Richard is curious about equitable payment for contributors in open source projects, and Heather states they prefer to fund companies started by the projects' founders and describes the dynamic between contributors and the core team. [00:25:03] What was the toughest section of the book to write? Heather reveals the economic analysis was difficult as it required refreshing her knowledge and ensuring accuracy. She also didn't mention specific economists but focused on basic economic principles. [00:28:15] Richard asks about common pitfalls in open source projects. Heather points out that mistakes in start-ups are not unique to open source and expands on the issue of companies taking code private due to misaligned investor interests.
[00:31:15] Richard questions if misaligned investors are a by-product of capitalism, and Heather believes it's possible to sustainably create value with open source without prioritizing it. [00:32:08] Richard asks what "open source" means to OSS Capital, and Heather explains that for their fund, open source means the core product is under a recognized open source license. She discusses the challenge of defining open source for non-software fields like AI and data. [00:35:31] Find out where you can buy Heather's book and follow her online. Quotes [00:11:44] "One of the initial decisions that someone asked me is that they actually want to run a business around an open source project and that's a non-trivial decision." [00:31:24] "I do think it's possible to run a business sustainably, create a ton of value with the open source projects, and never take it private." Spotlight [00:37:33] Richard's spotlight is the book, Man's Search for Meaning. [00:38:10] Heather's spotlight is one of her favorite authors, Primo Levi, and some books he wrote, The Periodic Table and Survival in Auschwitz. Also, another book she read called, Games Mother Never Taught You, and the open source project, Audacity. Links SustainOSS (https://sustainoss.org/) SustainOSS X/Twitter (https://twitter.com/SustainOSS) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Mastodon (https://mastodon.social/@richlitt) Heather Meeker X/Twitter (https://twitter.com/HeatherMeeker4) Heather Meeker Website (https://heathermeeker.com/) Heather Meeker LinkedIn (https://www.linkedin.com/in/heathermeeker/) Heather Meeker YouTube (https://www.youtube.com/c/HeatherMeekerOpenSourceLicensing/videos) From Project to Profit: How to Build a Business Around Your Open Source Project by Heather Meeker (https://www.amazon.com/Project-Profit-Business-Around-Source/dp/B0CKMKMFH5) Sustain Podcast-Episode 46: Commercial Open Source with Joseph Jacks (https://podcast.sustainoss.org/guests/joseph-jacks) Man's Search for Meaning by Viktor E. Frankl (https://en.wikipedia.org/wiki/Man%27s_Search_for_Meaning) Primo Levi (https://en.wikipedia.org/wiki/Primo_Levi) The Periodic Table by Primo Levi (https://en.wikipedia.org/wiki/The_Periodic_Table_(short_story_collection)) Survival In Auschwitz by Primo Levi (https://www.amazon.com/Survival-Auschwitz-Primo-Levi/dp/1492942588) Games Mother Never Taught You by Betty Lehan Harragan (https://www.amazon.com/Games-Mother-Never-Taught-You/dp/0446357030) Audacity (https://www.audacityteam.org/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Heather Meeker.
Table of contents. Note: links take you to the corresponding section below; links to the original episode can be found there. * Laura Duffy solves housing, ethics, and more [00:01:16] * Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47] * Nathan Barnard on how financial regulation can inform AI regulation [00:17:16] * Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48] * Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10] * Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54] * Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00] * Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43] * Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56] * Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08] Intro [00:00:00] To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough. It's important to note that the clips I chose range in length significantly. This does not represent the quality or amount of interesting content in the episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. I promise I'm not just saying that. So without further ado, please enjoy. #1: Laura Duffy solves housing, ethics, and more [00:01:16] In this first segment, Laura Duffy and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory. AARON: Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could Aristotle it's all wrong. Didn't you say you got a lot out of Nicomachi or however you pronounce that? LAURA: Nicomachean ethics guide to how you should live your life. About ethics as applied to your life because you can't be perfect. Utilitarians. There's no way to be that. AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but like, there's like no other discipline in which we care. So people care so much about like, what people thought 2000 years ago because like the presumption, I think the justified presumption is that things have iterated and improved since then. And I think that's true. It's like not just a presumption. LAURA: Humans are still rather the same and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago. AARON: Yeah, descriptively that's probably true. But I don't know, it seems like all the whole body of philosophers have they've already done the work of, like, compressing the good stuff.
Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. And so the stuff that remains if you just take the state of I don't know if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year. At least. Unless something weird happened. And I don't know. I don't know if that makes sense. LAURA: I think we're talking about two different things, though. Okay. In terms of knowledge about logic or something or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how should we think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect and just because it's old. There's many virtue theorists now who are like, oh yeah, they were really on to something and we need to adapt it for the times in which we live and the kind of societies we live in now. But it's still like there was a huge kernel of truth in at least the way of thinking that Aristotle put forth in terms of balancing the different virtues that you care about and trying to find. I think this is true. Right? Like take one virtue of his humor. You don't want to be on one extreme where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a boor and so you want to find somewhere in the middle where it's like you have a good sense of humor, but you can still function and be respected by other people. AARON: Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there was like two main points of either confusion or disagreement. And like, the first one is that I definitely think, no, Aristotle shouldn't be discounted or like his ideas or virtue ethics or anything like that shouldn't be discounted because they were canonical texts or something were written a long time ago. I guess it's just like a presumption that I have a pretty strong presumption that conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts and then in fact, you probably shouldn't because the good stuff will be explained better and not in weird it looks like weird terms. The terms are used differently and they're like translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you. LAURA: Agree with this because we have certain assumptions about what words like purpose mean now that we're probably a bit richer in the old conception of them like telos or happiness. Right. Eudaimonia is a much better concept and to read the original text and see how those different concepts work together is actually quite enriching compared to how do people use these words now. And it would take like I don't know, I think there just is a lot of value of looking at how these were originally conceived because popularizers of the works now or people who are seriously doing philosophy using these concepts. You just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical text. AARON: Yeah, I think that would be true. If you are a native speaker. Do you know Greek?
If you know Greek, this is like dumb because then you're just right. LAURA: I did take a quarter of it. AARON: Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were like living in the temper shuttle, I would say. Yeah, that's true actually. That's a lot of nuanced, connotation and context that definitely gets lost with translation. But once you take the jump of reading English translations of the texts, not you may as well but there's nothing super special. You're not getting any privileged knowledge from saying the word eudaimonia as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation in the context via English, I guess, via the mind of literally the translators who have like. LAURA: Yeah, well see, I tried to learn virtue theory by any other route than reading Aristotle. AARON: Oh God. LAURA: I took a course specifically on Plato and Aristotle. AARON: Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person. LAURA: But keep going. Fair. But she had us read his physics before we read Nicomachi. AARON: Think he was wrong about all that. LAURA: Stuff, but it made you understand what he meant by his teleology theory so much better in a way that I could not get if I was reading some modern thing. AARON: I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you? Depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's like a different type of task. I don't think Aristotle the man should be all that privileged in that. LAURA: If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference. AARON: Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start. Is that in fact the state of the things in virtue theory? I don't even know. LAURA: I don't know either. #2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47] All right, next, Arjun Panickssery and I explore the effectiveness of reading books in retaining and incorporating knowledge, discussing the value of long form content and the impact of great literary works on understanding and shaping personal worldviews. ARJUN: Oh, you were in the book chat, though. The book rant group chat, right? AARON: Yeah, I think I might have just not read any of it. So do you want to fill me in on what I should have read? ARJUN: Yeah, it's a group chat of a bunch of people where we were arguing about a bunch of claims related to books. One of them is that most people don't remember pretty much anything from books that they read, right? They read a book and then, like, a few months later, if you ask them about it, they'll just say one page's worth of information or maybe like, a few paragraphs. The other is that what is it exactly? It's that if you read a lot of books, it could be that you just incorporate the information that's important into your existing models and then just forget the information. So it's actually fine.
Isn't this what you wrote in your blog post or whatever? I think that's why I added you to that. AARON: Oh, thank you. I'm sorry I'm such a bad group chat participant. Yeah, honestly, I wrote that a while ago. I don't fully remember exactly what it says, but at least one of the things that it said was and that I still basically stand by, is that it's basically just like it's increasing the salience of a set of ideas more so than just filling your brain with more facts. And I think this is probably true insofar as the facts support a set of common themes or ideas that are kind of like the intellectual core of it. It would be really hard. Okay, so this is not a book, but okay. I've talked about how much I love the 80,000 Hours podcast, and I've listened to, I don't think every episode, but at least 100 of the episodes. And no, you're just, like, not going to definitely I've forgotten most of the actual almost all of the actual propositional pieces of information said, but you're just not going to convince me that it's completely not affecting either model of the world or stuff that I know or whatever. I mean, there are facts that I could list. I think maybe I should try. ARJUN: Sure. AARON: Yeah. So what's your take on book other long form? ARJUN: Oh, I don't know. I'm still quite confused or I think the impetus for the group chat's creation was actually Hanania's post where he wrote the case against most books or most was in parentheses or something. I mean, there's a lot of things going on in that post. He just goes off against a bunch of different categories of books that are sort of not closely related. Like, he goes off against great. I mean, this is not the exact take he gives, but it's something like the books that are considered great are considered great literature for some sort of contingent reason, not because they're the best at getting you information that you want. AARON: This is, like, another topic. But I'm, like, anti great books. In fact, I'm anti great usually just means old and famous. So insofar as that's what we mean by I'm like, I think this is a bad thing, or, like, I don't know, Aristotle is basically wrong about everything and stuff like that. ARJUN: Right, yeah. Wait, we could return to this. I guess this could also be divided into its component categories. He spends more time, though, I think, attacking a certain kind of nonfiction book that he describes as the kind of book that somebody pitches to a publisher and basically expands a single essay's worth of content into with a bunch of anecdotes and stuff. He's like, most of these books are just not very useful to read, I guess. I agree with that. AARON: Yeah. Is there one that comes to mind as, like, an? Mean, I think of Malcolm Gladwell as, like, the kind of I haven't actually read any of his stuff in a while, but I did, I think, when I started reading nonfiction or with any sort of intent, I read. A bunch of his stuff or whatever and vaguely remember that this is basically what he like for better or. ARJUN: Um yeah, I guess so. But he's almost, like, trying to do it on purpose. This is the experience that you're getting by reading a Malcolm Gladwell book. It's like Taleb. Right? It's just him just ranting. I'm thinking, I guess, of books that are about something. So, like, if you have a book that's know negotiation or something, it'll be filled with a bunch of anecdotes that are of dubious usefulness. Or if you get a book that's just about some sort of topic, there'll be historical trivia that's irrelevant.
Maybe I can think of an example.AARON: Yeah. So the last thing I tried to read, maybe I am but haven't in a couple of weeks or whatever, is like, the Derek Parfit biography. And part of this is motivated because I don't even like biographies in general for some reason, I don't know. But I don't know. He's, like, an important guy. Some of the anecdotes that I heard were shockingly close to home for me, or not close to home, but close to my brain or something. So I was like, okay, maybe I'll see if this guy's like the smarter version of Aaron Bergman. And it's not totally true.ARJUN: Sure, I haven't read the book, but I saw tweet threads about it, as one does, and I saw things that are obviously false. Right. It's the claims that he read, like, a certain number of pages while brushing his teeth. That's, like, anatomically impossible or whatever. Did you get to that part? Or I assumed no, I also saw.AARON: That tweet and this is not something that I do, but I don't know if it's anatomically impossible. Yeah, it takes a little bit of effort to figure out how to do that, I guess. I don't think that's necessarily false or whatever, but this is probably not the most important.ARJUN: Maybe it takes long time to brush his teeth.#3: Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]In this next segment, Nathan Barnard and I dive into the complexities of AI regulation, including potential challenges and outcomes of governing AI in relation to economic growth and existential security. And we compare it to banking regulation as well.AARON: Yeah, I don't know. I just get gloomy for, I think justified reasons when people talk about, oh yeah, here's the nine step process that has to take place and then maybe there's like a 20% chance that we'll be able to regulate AI effectively. I'm being facetious or exaggerating, something like that, but not by a gigantic amount.NATHAN: I think this is pretty radically different to my mainline expectation.AARON: What's your mainline expectation?NATHAN: I suppose I expect like AI to come with an increasing importance past economy and to come up to really like a very large fraction of the economy before really crazy stuff starts happening and this world is going very anonymous. Anonymous, anonymous, anonymous. I know the word is it'd be very unusual if this extremely large sector economy which was impacted like a very large number of people's lives remains like broadly unregulated.AARON: It'll be regulated, but just maybe in a stupid way.NATHAN: Sure, yes, maybe in a stupid way. I suppose critically, do you expect the stupid way to be like too conservative or too like the specific question of AI accenture it's basically too conservative or too lenient or I just won't be able to interact with this.AARON: I guess generally too lenient, but also mostly on a different axis where just like I don't actually know enough. I don't feel like I've read learned about various governance proposals to have a good object level take on this. But my broad prior is that there are just a lot of ways to for anything. There's a lot of ways to regulate something poorly. And the reason insofar as anything isn't regulated poorly it's because of a lot of trial and error.NATHAN: Maybe.AARON: I mean, there's probably exceptions, right? I don't know. Tax Americana is like maybe we didn't just kept winning wars starting with World War II. I guess just like maybe like a counterexample or something like that.NATHAN: Yeah, I think I still mostly disagree with this. Oh, cool. 
Yeah. I suppose I see a much broader spectrum between bad regulation and good regulation. I agree the space of optimal regulation is very small. But I think we have to hit that space for regulation to be helpful. Especially if you consider that if you sort of buy the AI existential safety risk, then the downside is it's not this quite fine balancing act between too much consumer protection and stifling competition and innovation too much. It's trying to end this quite specific, very bad outcome which is maybe much worse than somewhat slowing economic growth, particularly if we think we're going to get very explosive rates of economic growth really quite soon. And the cost of slowing down economic growth, even by quite a large percentage, is very small compared to the cost of sort of an accidental catastrophe. I sort of think of slowing economic growth as the main cost of, the main way regulation goes wrong currently.AARON: I think in an actual sense that is correct. There's the question of like okay, Congress in the States, like, it's better than nothing. I'm glad it's not anarchy in terms of like, I'm glad we have a legislature.NATHAN: I'm also glad about the United States.AARON: How reasons-responsive is Congress? I don't think reasons-responsive enough to make it so that the first big law that gets passed, insofar as there is one or if there is one, is on the Pareto frontier trading off between economic growth and existential security. It's going to be way inside of that production frontier or whatever. It's going to suck on every axis, maybe not every axis but at least like some relevant axes.NATHAN: Yeah, that doesn't seem like obviously true to me. I think Dodd-Frank was quite a good law.AARON: That came after 2008, right?NATHAN: Yeah, correct. Yeah, there you go. No, I agree. I'm not especially confident about doing regulation before there's a quite bad warning shot, and yes, if we're in a world where we have no warning shots and we're just blindsided by everyone getting stripped to their atoms within 3 seconds, this is not good. But in worlds where we do have one of those shots, I think Glass-Steagall is good law. Not "good law" as a technical term. I think Glass-Steagall was a good piece of legislation. I think Dodd-Frank was a good piece of legislation. I think the 2008 stimulus bill was a good piece of legislation. I think the Troubled Asset Relief Program is a good piece of legislation.AARON: I recognize these terms and I know some of them and others I do not know the contents of.NATHAN: Yeah, so Glass-Steagall was the financial regulation passed in 1933 after the Great Depression. The Troubled Asset Relief Program was passed in I think 2008, maybe 2009, to help recapitalize banks. Dodd-Frank was the sort of landmark post-financial-crisis piece of legislation passed in 2011. I think these are all good pieces of legislation. I think financial regulation is probably unusually good amongst US legislation. This is like a quite weak take, I guess. It's unusually.AARON: So. I don't actually know the pre-Depression financial history at all, but I feel like the more relevant comparison to the 21st century era is what was the regulatory regime in 1925 or something? I just don't know.NATHAN: Yeah, I know a bit.
I haven't read this stuff especially deeply and so I don't want to be too overconfident here, but sort of the core pieces which were important for the Great Depression going very badly were, yeah, no distinction between commercial banks and investment banks. So such a bank could do much riskier things with customer deposits than they could from 1933 until the repeal of Glass-Steagall. And combine that with no deposit insurance, and if you have the combination of banks being able to do quite risky things with depositors' money and no deposit insurance, this is quite dangerous, as we know. And Glass-Steagall's repeal.AARON: I'm an expert in the sense that I have the Wikipedia page up. Well, yeah, there was a bunch of things. Basically. There's the First Bank of the United States. There's the Second Bank of the United States. There's the free banking era. There was the era of national banks. Yada, yada, yada. It looks like in 1907 there was some panic. I vaguely remember this from like, AP US History, like seven years ago or.NATHAN: Yes, I suppose in short, I sort of agree that the record of non-post-crisis legislation is, like, not very good, but I think the record of post-crisis legislation, at least in the financial sector, really is quite good. I'm sure lots of people disagree with this, but this is my take.#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]Up next, Winston Oswald-Drummond and I talk about the effectiveness and impact of donating to various research organizations, such as suffering-focused s-risk organizations. We discuss tractability, expected value, and essentially where we should give our money.AARON: Okay, nice. Yeah. Where to go from here? I feel like largely we're on the same page, I feel like.WINSTON: Yeah. Is your disagreement mostly tractability? Then maybe we should get into the disagreement.AARON: Yeah. I don't even know if I've specified, but insofar as I have one, yes, it's tractability. This is the reason why I haven't donated very much to anywhere, for money reasons. But insofar as I have, I have not donated to CLR or CRS because I don't see a theory of change that connects the research currently being done to actually reducing s-risks. And I feel like there must be something, because there's a lot of extremely smart people at both of these orgs or whatever, and clearly they thought about this, and maybe the answer is it's very general and the outcome is just so big in magnitude that anything kind.WINSTON: Of that is part of it, I think. Yeah, part of it is like an expected value thing and also it's just very neglected. So it's like you want some people working on this, I think, at least. Even if it's unlikely to work. Yeah, even that might be underselling it, though. I mean, I do think there's people at CRS and CLR, like, talking to people at AI labs and some people in politics and these types of things. And hopefully the research is a way to know what to try to get done at these places. You want to have some concrete recommendations and I think obviously people have to also be willing to listen to you, but I think there is some work being done on that and research is partially just like a community building thing as well. It's a credible signal that you were smart and have thought about this, and so it gives people reason to listen to you and maybe that mostly pays off later on in the future.AARON: Yeah, that all sounds like reasonable.
And I guess one thing is that I just don't there's definitely things I mean, first of all, I haven't really stayed up to date on what's going on, so I haven't even done I've done zero research for this podcast episode, for example. Very responsible and insofar as I've know things about these. Orgs. It's just based on what's on their website at some given time. So insofar as there's outreach going on, not like behind the scenes, but just not in a super public way, or I guess you could call that behind the scenes. I just don't have reason to, I guess, know about that. And I guess, yeah, I'm pretty comfortable. I don't even know if this is considered biting a bullet for the crowd that will be listening to this, if that's anybody but with just like yeah, saying a very small change for a very large magnitude, just, like, checks out. You can just do expected value reasoning and that's basically correct, like a correct way of thinking about ethics. But even I don't know how much you know specifically or, like, how much you're allowed want to reveal, but if there was a particular alignment agenda that I guess you in a broad sense, like the suffering focused research community thought was particularly promising and relative to other tractable, I guess, generic alignment recommendations. And you were doing research on that and trying to push that into the alignment mainstream, which is not very mainstream. And then with the hope that that jumps into the AI mainstream. Even if that's kind of a long chain of events. I think I would be a lot more enthusiastic about I don't know that type of agenda, because it feels like there's like a particular story you're telling where it cashes out in the end. You know what I mean?WINSTON: Yeah, I'm not the expert on this stuff, but I do think you just mean I think there's some things about influencing alignment and powerful AI for sure. Maybe not like a full on, like, this is our alignment proposal and it also handles Sris. But some things we could ask AI labs that are already building, like AGI, we could say, can you also implement these sort of, like, safeguards so if you failed alignment, you fail sort of gracefully and don't cause lots of suffering.AARON: Right?WINSTON: Yeah. Or maybe there are other things too, which also seem potentially more tractable. Even if you solve alignment in some sense, like aligning with whatever the human operator tells the AI to do, then you can also get the issue that malevolent actors can take control of the AI and then what they want also causes lots of suffering that type of alignment wouldn't. Yeah, and I guess I tend to be somewhat skeptical of coherent extrapolated volition and things like this, where the idea is sort of like it'll just figure out our values and do the right thing. So, yeah, there's some ways to push on this without having a full alignment plan, but I'm not sure if that counts as what you were saying.AARON: No, I guess it does. Yeah, it sounds like it does. And it could be that I'm just kind of mistaken about the degree to which that type of research and outreach is going on. That sounds like it's at least partially true.#5: Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]Up next, Nathan Barnard is back for his second episode. And we talked about the nature of general intelligence, its relationship with language and the implications of specialized brain functions on the understanding of human cognitive abilities.NATHAN: Yes. 
This, like, symbolic reasoning stuff. Yeah. So I think if I was, like, making the case for general intelligence being real, I wouldn't lead with symbolic reasoning, but I would have language stuff. I'd have this hierarchical structure thing, which.AARON: I would probably so I think of at least most uses of language, and central examples, as a type of symbolic reasoning, because words mean things. They're like, yeah, pointers to objects or something like that.NATHAN: Yeah, I think, like, I'm pretty confident this isn't a good enough description of general intelligence. So, for instance, there's a bit in your brain — I'm using the, like, checklist-so-I-don't-fuck-this-up vernacular, I'm not making this up — the ability to use words like pointers, as these arbitrary signs, happens mostly in this area of the brain called Wernicke's area. But very famously, you can have Wernicke's aphasics who lose the ability to do language comprehension and lose the ability to consistently use words as pointers, as signs to point to things, but still have perfectly good spatial reasoning abilities. And conversely, people with Broca's aphasia, who have the Broca's region of their brain fucked up, will not be able to form fluent sentences and have some problems, like, using syntax, but they'll still be able to have very good spatial reasoning. They could still, for instance, be, like, good engineers, and do many of the problems which, like, come up in engineering.AARON: Yeah, I totally buy that. I don't think language is the central thing. I think it's like an outgrowth of, like, I don't know, there's like a simplified model I could make, which is like it's like an outgrowth of whatever general intelligence really is. But whatever the best spatial or graphical model is, I don't think language is cognition.NATHAN: Yes, this is a really big debate in psycholinguistics as to whether language is like an outgrowth of other abilities the brain has, versus whether there's very specialized language modules. Yeah, this is just like a very live debate in psycholinguistics at the moment. I actually do lean towards — the reason I've been talking about this — I'm actually just going to explain this hierarchical structure thing, yeah, I keep talking about it. So one theory for how you can comprehend new sentences, like, the dominant theory in linguistics for how you can comprehend new sentences, um, is you break them up into, like, chunks, and you form these chunks together in this, like, tree structure. So something like, if you hear, like, a totally novel sentence like "the pit bull mastiff flopped around deliciously" or something, you can comprehend what the sentence means despite the fact you've never heard it. The theory behind this is that, yes, this can be broken up into this tree structure, where the different, like, bits of the sentence — so, like, the mastiff would be, like, one bit, and then you have, like, another bit, which is like, the mastiff, I can't remember, I said rolled around, so that'd be like, another bit, and then you'd have connectors between them.AARON: Okay.NATHAN: So the mastiff rolling around — one theory is that one of the sort of distinctive abilities that humans have is, like, this quite general ability to break things up into these tree structures.
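[Note: for readers who want the tree-structure idea made concrete, here's a minimal sketch in Python of the kind of constituency tree Nathan is describing. The bracketing of the example sentence is a toy one I made up for illustration, not the output of any real parser or a claim about the linguistically correct parse.]

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Node:
    label: str                           # phrase label, e.g. "S", "NP", "VP"
    children: List[Union["Node", str]]   # sub-phrases or individual words

# A toy, hand-written bracketing of Nathan's example sentence.
sentence = Node("S", [
    Node("NP", ["the", "pit", "bull", "mastiff"]),                      # one "bit"
    Node("VP", ["flopped", Node("AdvP", ["around", "deliciously"])]),   # another "bit"
])

def leaves(node: Node) -> List[str]:
    """Flatten the tree back into the left-to-right word sequence."""
    words: List[str] = []
    for child in node.children:
        words.extend(leaves(child) if isinstance(child, Node) else [child])
    return words

print(leaves(sentence))
# ['the', 'pit', 'bull', 'mastiff', 'flopped', 'around', 'deliciously']
```

The claim in the conversation is that comprehending a novel sentence plausibly involves building something shaped like this, chunk by chunk, even for word strings you've never encountered before.]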
This is controversial within psycholinguistics, but it's broadly an idea which I broadly buy, because we do see harms to other areas of intelligence. You get much worse at, like, Raven's Progressive Matrices, for instance, when you have, like, an injury to Broca's area, but, like, not worse at, like, tests of spatial reasoning, for instance.AARON: So what is, like, is there, like, a main alternative to, like, how humans.NATHAN: Understand language? As far as this specific question of how we parse completely novel sentences, as far as I'm aware this is just, like, the academic consensus. Okay.AARON: I mean, it sounds totally right? I don't know.NATHAN: Yeah. But yeah, I suppose going back to the question: how far is language an outgrowth of general intelligence versus having much more specialized language modules? Yeah, I lean towards the latter, although, yeah, I still don't want to give too strong of a personal opinion here because I'm not a linguist, and this is a podcast.AARON: You're allowed to give takes. No one's going to say this is like the academic — we want takes.NATHAN: We want takes. Well, what's gone through my head is.AARON: I.NATHAN: I think language is not an outgrowth of other abilities. I think the main justification for this, I think, is what we see with the loss of abilities when you have damage to Broca's area and Wernicke's area.AARON: Okay, cool. So I think we basically agree on that. And also, I guess one thing to highlight is I think outgrowth can mean a couple of different things. I definitely think it's plausible. I haven't read about this. I think I did at some point, but not in a while. But outgrowth could mean temporally or whatever. I think I'm kind of inclined to think it's not that straightforward. You could have coevolution where language per se encourages both its own development and the development of some general underlying trait or something.NATHAN: Yeah. Which seems likely.AARON: Okay, cool. So why don't humans have general intelligence?NATHAN: Right. Yeah. As I was sort of talking about previously.AARON: Okay.NATHAN: I think I'd like to go back to, like, a high-level argument, which is that there appear to be surprisingly much higher levels of functional specialization in brains than you'd expect. You can lose much more specific abilities than you'd expect to be able to lose. You can lose specifically the ability — a famous example is, like, face blindness, actually. You can probably lose the ability to specifically recognize things which you're, like, an expert in.AARON: Who does it, or who loses this ability?NATHAN: If you've damaged your fusiform face area, you'll lose the ability to recognize faces, but nothing else.AARON: Okay.NATHAN: And there's this general pattern that your brain is much more — you can lose much more specific abilities than you expect. So, for instance, if you sort of have damage to your ventromedial prefrontal cortex, you can say the reasoning for why you shouldn't compulsively gamble but still compulsively gamble.AARON: For instance, okay, I understand this — not gambling per se, but like executive function stuff — at a visceral level. Okay, keep going.NATHAN: Yeah. Some other nice examples of this. I think memory is quite intuitive. So there's like, a very famous patient called patient HM who had his hippocampus removed and so, as a result, lost all declarative memory. So all memory of specific facts and things which happened in his life.
He just couldn't remember any of these things, but was still perfectly functioning otherwise. I think at a really high level, this functional specialization is probably the strongest piece of evidence against the general intelligence hypothesis. I think fundamentally, the general intelligence hypothesis implies that, like, if you, like, harm a piece of your brain, if you have some brain injury, you would, like, generically get worse at, like, all the tasks that use general intelligence. But I think this is suggesting that people can lose specific abilities we'd include in general intelligence, like the ability to write, the ability to speak, maybe not speak, the ability to do math, you do have.AARON: This is just not as easy to analyze in a cog-sci paper with IQ or whatever. So there is something where if somebody has a particular cubic centimeter of their brain taken out, that's really excellent evidence about what that cubic centimeter does or whatever, but non-spatial modification is just harder to study and analyze. I guess we could give people drugs, right? Suppose that — set aside the psychometric stuff. But suppose that general intelligence is mostly a thing or whatever and you actually can ratchet it up and down. This is probably just true, right? You can probably give somebody different doses of, like, various drugs. I don't know, like laughing gas, like, yeah, like probably, probably weed. Like I don't know.NATHAN: So I think this just probably isn't true. Your working memory correlates quite strongly with g, and having better working memory generically can make you much better at lots of tasks if you have, like.AARON: Yeah.NATHAN: Sorry, but this is just like a specific ability. It's just specifically your working memory which is improved if you take working memory drugs — improved working memory. I think it's like a few things, like memory, attention, maybe something like decision making, which are all extremely useful abilities and improve how well other cognitive abilities work. But they're all separate things. If you improved your attention abilities, your working memory, but you sort of had some brain injury which sort of meant you had lost the ability to parse syntax, you would not get better at parsing syntax. And you can also lose these things separately. You can also improve attention and improve working memory separately, which — it's just not this one dial which you can turn up.AARON: There's good reason to expect that we can't turn it up, because evolution is already sort of like maximizing, given the relevant constraints. Right. So you would need to be looking at, like, injuries. Maybe there are studies where they try to increase people's — they try to add a cubic centimeter to someone's brain, but normally it's like the opposite. You start from some high baseline and then see what faculties you lose. Just to clarify, I guess.NATHAN: Yeah, sorry, I think I've lost the thread. You still think there probably is some general intelligence ability to turn up?AARON: Honestly, I think I haven't thought about this nearly as much as you. I kind of don't know what I think at some level. If I could just write down all of the different components, and there are like 74 of them, of what I think general intelligence consists of, does that make it — I guess in some sense, yeah, that does make it less of an ontologically legit thing or something.
I think the thing I want to get at, the motivating thing here, is that with humans, yeah, you can, like — we know humans range in IQ, and there's, like, setting aside a very tiny subset of people with severe brain injuries or developmental disorders or whatever, almost everybody has some sort of symbolic reasoning that they can do to some degree. Whereas the smartest — maybe I'm wrong about this, but as far as I know, the smartest squirrel is not going to be able to have something semantically represent something else. And that's what I intuitively want to appeal to, you know what I mean?NATHAN: Yeah, I know what you're getting at. So I think there's like two interesting things here. So I think one is, could a squirrel do this? I'm guessing a squirrel couldn't do this, but a dog can, or like a dog probably can. A chimpanzee definitely can.AARON: Do what?NATHAN: Chimpanzees can definitely learn to associate things in the world with arbitrary signs.AARON: Yes, but maybe I'm just adding on epicycles here, but I feel like — correct me if I'm wrong, maybe I'm just wrong about this — but I would assume that chimpanzees cannot use that sign in a domain that is qualitatively different from the ones they've been in. Right. So, like, a dog will know that a certain sign means sit or whatever, but maybe that's not a good.NATHAN: I don't know, I think this is basically not true.AARON: Okay.NATHAN: And we sort of know this from teaching.AARON: Teaching.NATHAN: There's, like, famously Koko the gorilla, and also a bonobo whose name I can't remember, who were taught sign language. And the thing they were consistently bad at was, like, putting together sentences. They could learn quite large vocabularies — learning to associate, by large, I mean in the hundreds of words, in the low hundreds of words — which they could consistently use correctly.AARON: What do you mean by, like, in what sense? What is the bonobo using?NATHAN: A very famous and quite controversial example is, like, Koko the gorilla saw a swan outside and signed "water bird." That's like, a controversial example. But the thing, I think, which is controversial here is, like, the syntax part of putting water and bird together is the controversial part, but it's not the controversial part that she could see a swan and call that a bird.AARON: Yeah, I mean, this is kind of just making me think, okay, maybe the threshold for g is just like at the chimp level or something. We are, like, or whatever, the most like that. Sure. If a species really can generate from a prefix and a suffix or whatever, a concept that they hadn't learned before.NATHAN: Yeah, this is a controversial — this is like a controversial example of that; the addition is the controversial part. Yeah, I suppose this maybe brings back why I think this matters, which is: will there be this threshold which AIs cross such that their reasoning after this is qualitatively different to their reasoning previously? And this is like two things. One, like a much faster increase in AI capabilities, and two, alignment techniques which worked on systems which didn't have g will no longer work on systems which do have g. That brings back why I think this actually matters. But I think if we're sort of accepting it — I think elephants probably also — if we're saying, like, g is, like, at the level of chimpanzees, chimpanzees just, like, don't look like that qualitatively different to, like, other animals.
Now, lots of other animals live in similar complex social groups. Lots of other animals use tools.AARON: Yeah, sure. For one thing, I don't think there's not going to be a discontinuity in the same way that there wasn't a discontinuity at any point between humans evolution from the first prokaryotic cells or whatever are eukaryotic one of those two or both, I guess. My train of thought. Yes, I know it's controversial, but let's just suppose that the sign language thing was legit with the waterbird and that's not like a random one off fluke or something. Then maybe this is just some sort of weird vestigial evolutionary accident that actually isn't very beneficial for chimpanzees and they just stumbled their way into and then it just enabled them to it enables evolution to bootstrap Shimp genomes into human genomes. Because at some the smartest or whatever actually, I don't know. Honestly, I don't have a great grasp of evolutionary biology or evolution at all. But, yeah, it could just be not that helpful for chimps and helpful for an extremely smart chimp that looks kind of different or something like that.NATHAN: Yeah. So I suppose just like the other thing she's going on here, I don't want to keep banging on about this, but you can lose the language. You can lose linguistic ability. And it's just, like, happens this happens in stroke victims, for instance. It's not that rare. Just, like, lose linguistic ability, but still have all the other abilities which we sort of think of as like, general intelligence, which I think would be including the general intelligence, like, hypothesis.AARON: I agree that's, like, evidence against it. I just don't think it's very strong evidence, partially because I think there is a real school of thought that says that language is fundamental. Like, language drives thought. Language is, like, primary to thought or something. And I don't buy that. If you did buy that, I think this would be, like, more damning evidence.#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54][Note: I forgot to record an intro segment here. Sorry!]AARON: Yeah. Yes. I'm also anti scam. Right, thank you. Okay, so I think that thing that we were talking about last time we talked, which is like the thing I think we actually both know stuff about instead of just like, repeating New York Times articles is my nuanced ethics takes and why you think about talk about that and then we can just also branch off from there.DANIEL: Yeah, we can talk about that.AARON: Maybe see where that did. I luckily I have a split screen up, so I can pull up things. Maybe this is kind of like egotistical or something to center my particular view, but you've definitely given me some of the better pushback or whatever that I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically my ethical position or the thing that I think is true is that which I think is not the default view. I think most people think this is wrong is that total utilitarianism does not imply that for some amount of suffering that could be created there exists some other extremely large arbitrarily, large amount of happiness that could also be created which would morally justify the former. Basically.DANIEL: So you think that even under total utilitarianism there can be big amounts of suffering such that there's no way to morally tip the calculus. 
However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.AARON: Yeah, and I'd highlight the word inflicted if something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you increasing, you creating suffering that wouldn't have otherwise been created. Yeah. It's also been a couple of months since I've thought about this in extreme detail, although I thought about it quite a bit. Yeah.DANIEL: Maybe I should say my contrary view, I guess, when you say that, I don't know, does total utilitarianism imply something or not? I'm like, well, presumably it depends on what we mean by total utilitarianism. Right. So setting that aside, I think that thesis is probably false. I think that yeah. You can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering.AARON: Okay. I do think that position is like the much more common and even, I'd say default view. Do you agree with that? It's sort of like the implicit position of people who are of self described total utilitarians who haven't thought a ton about this particular question.DANIEL: Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something. I think that in practice, when you're being a utilitarian, I don't know, normally, if you're trying to be a utilitarian and you see yourself inflicting a large amount of suffering, I don't know. I do think there's some instinct to be like, is there any way we can get around this?AARON: Yeah, for sure. And to be clear, I don't think this would look like a thought experiment. I think what it looks like in practice and also I will throw in caveats as I see necessary, but I think what it looks like in practice is like, spreading either wild animals or humans or even sentient digital life through the universe. That's in a non as risky way, but that's still just maybe like, say, making the earth, making multiple copies of humanity or something like that. That would be an example that's probably not like an example of what an example of creating suffering would be. For example, just creating another duplicate of earth. Okay.DANIEL: Anything that would be like so much suffering that we shouldn't even the pleasures of earth outweighs.AARON: Not necessarily, which is kind of a cop out. But my inclination is that if you include wild animals, the answer is yes, that creating another earth especially. Yeah, but I'm much more committed to some amount. It's like some amount than this particular time and place in human industry is like that or whatever.DANIEL: Okay, can I get a feel of some other concrete cases to see?AARON: Yeah.DANIEL: So one example that's on my mind is, like, the atomic bombing of Hiroshima and Nagasaki, right? So the standard case for this is, like, yeah, what? A hundred OD thousand people died? Like, quite terrible, quite awful. And a lot of them died, I guess a lot of them were sort of some people were sort of instantly vaporized, but a lot of people died in extremely painful ways. But the countercase is like, well, the alternative to that would have been like, an incredibly grueling land invasion of Japan, where many more people would have died or know regardless of what the actual alternatives were. 
If you think about the atomic bombings, do you think that's like the kind of infliction of suffering where there's just not an offsetting amount of pleasure that could make that okay?AARON: My intuition is no, that it is offsettable, but I would also emphasize that given the actual historical contingencies, the alternative, the implicit case for the bombing includes reducing suffering elsewhere rather than merely creating happiness. There can definitely be two bad choices that you have to make or something. And my claim doesn't really pertain to that, at least not directly.#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]Up next, Holly Elmore and I discuss the complexities and implications of AI development and open sourcing. We talk about protests and ethical considerations around her, um, uh, campaign to pause the development of frontier AI systems until, until we can tell that they're safe.AARON: So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.HOLLY: Well, right now I'm hopeful about the UK AI summit. Pause AI and I have planned a multi city protest on the 21 October to encourage the UK AI Safety Summit to focus on safety first and to have as a topic arranging a pause or that of negotiation. There's a lot of a little bit upsetting advertising for that thing that's like, we need to keep up capabilities too. And I just think that's really a secondary objective. And that's how I wanted to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary General has been talking about this and there have been meetings about this. It's happened so much faster at the beginning of this year. Nobody thought we could talk about nobody was thinking we'd be talking about this as a mainstream topic. And then actually governments have been very receptive anyway. So right now I'm focused on other than just influencing opinion, the targets I'm focused on, or things like encouraging these international like, I have a protest on Friday, my first protest that I'm leading and kind of nervous that's against Meta. It's at the Meta building in San Francisco about their sharing of model weights. They call it open source. It's like not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret and it conveys pretty clearly what we want very quickly. And you don't have a lot of bandwidth and advocacy. You write a lot of materials for a protest, but mostly what people see is the title.AARON: That's interesting because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, pause AI is simpler, but in some sense it's not nearly as obvious. At least maybe I'm more of a tech brain person or whatever. But why that is good, as opposed to don't give extremely powerful thing to the worst people in the world. That's like a longer everyone.HOLLY: Maybe I'm just weird. I've gotten the feedback from open source ML people is the number one thing is like, it's too late, there's already super powerful models. There's nothing you can do to stop us, which sounds so villainous, I don't know if that's what they mean. 
Well, actually the number one message is you're stupid, you're not an ML engineer. Which like, okay, number two is like, it's too late, there's nothing you can do. There's all of these other and Meta is not even the most powerful generator of models that it share of open source models. I was like, okay, fine. And I don't know, I don't think that protesting too much is really the best in these situations. I just mostly kind of let that lie. I could give my theory of change on this and why I'm focusing on Meta. Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco near where yeah, Meta is the biggest company that is doing this and I think there should be a norm against model weight sharing. I was hoping it would be something that other employees of other labs would be comfortable attending and that is a policy that is not shared across the labs. Obviously the biggest labs don't do it. So OpenAI is called OpenAI but very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it more clear than pause AI. Does that anybody's welcome something? I thought a one off issue like this that a lot of people could agree and form a coalition around would be good. A lot of people think that this is like a lot of the open source ML people think know this is like a secret. What I'm saying is secretly an argument for tyranny. I just want centralization of power. I just think that there are elites that are better qualified to run everything. It was even suggested I didn't mention China. It even suggested that I was racist because I didn't think that foreign people could make better AIS than Meta.AARON: I'm grimacing here. The intellectual disagreeableness, if that's an appropriate term or something like that. Good on you for standing up to some pretty bad arguments.HOLLY: Yeah, it's not like that worth it. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt. Right. For instance, I'm kind of like sure I'm on list somewhere because of the forums I was on just because I was interested and it is something that serves me well with my adversaries. I've enjoyed some conversations with people where I kind of like because my position on all this is that look, I need to be convinced and the public needs to be convinced that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being like, can you explain like I'm five. I still don't get it. How does this work?AARON: Yeah, no, I was thinking actually not long ago about open source. Like the phrase has such a positive connotation and in a lot of contexts it really is good. I don't know. I'm glad that random tech I don't know, things from 2004 or whatever, like the reddit source code is like all right, seems cool that it's open source. I don't actually know if that was how that right. But yeah, I feel like maybe even just breaking down what the positive connotation comes from and why it's in people's self. This is really what I was thinking about, is like, why is it in people's self interest to open source things that they made and that might break apart the allure or sort of ethical halo that it has around it? 
And I was thinking it probably has something to do with, oh, this is like how if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy. Definitely can be hired in the future. And if you're not wealthy yet I don't mean to put things in just materialist terms, but basically it could easily be just like in a yeah, I think I'll probably take that bit out because I didn't mean to put it in strictly like monetary terms, but basically it just seems like pretty plausibly in an arbitrary tech person's self interest, broadly construed to, in fact, open source their thing, which is totally fine and normal.HOLLY: I think that's like 99 it's like a way of showing magnanimity showing, but.AARON: I don't make this sound so like, I think 99.9% of human behavior is like this. I'm not saying it's like, oh, it's some secret, terrible self interested thing, but just making it more mechanistic. Okay, it's like it's like a status thing. It's like an advertising thing. It's like, okay, you're not really in need of direct economic rewards, or sort of makes sense to play the long game in some sense, and this is totally normal and fine, but at the end of the day, there's reasons why it makes sense, why it's in people's self interest to open source.HOLLY: Literally, the culture of open source has been able to bully people into, like, oh, it's immoral to keep it for yourself. You have to release those. So it's just, like, set the norms in a lot of ways, I'm not the bully. Sounds bad, but I mean, it's just like there is a lot of pressure. It looks bad if something is closed source.AARON: Yeah, it's kind of weird that Meta I don't know, does Meta really think it's in their I don't know. Most economic take on this would be like, oh, they somehow think it's in their shareholders interest to open source.HOLLY: There are a lot of speculations on why they're doing this. One is that? Yeah, their models aren't as good as the top labs, but if it's open source, then open source quote, unquote then people will integrate it llama Two into their apps. Or People Will Use It And Become I don't know, it's a little weird because I don't know why using llama Two commits you to using llama Three or something, but it just ways for their models to get in in places where if you just had to pay for their models too, people would go for better ones. That's one thing. Another is, yeah, I guess these are too speculative. I don't want to be seen repeating them since I'm about to do this purchase. But there's speculation that it's in best interests in various ways to do this. I think it's possible also that just like so what happened with the release of Llama One is they were going to allow approved people to download the weights, but then within four days somebody had leaked Llama One on four chan and then they just were like, well, whatever, we'll just release the weights. And then they released Llama Two with the weights from the beginning. And it's not like 100% clear that they intended to do full open source or what they call Open source. And I keep saying it's not open source because this is like a little bit of a tricky point to make. So I'm not emphasizing it too much. So they say that they're open source, but they're not. The algorithms are not open source. 
There are open source ML models that have everything open sourced and I don't think that that's good. I think that's worse. So I don't want to criticize them for that. But they're saying it's open source because there's all this goodwill associated with open source. But actually what they're doing is releasing the product for free or like trade secrets even you could say like things that should be trade secrets. And yeah, they're telling people how to make it themselves. So it's like a little bit of a they're intentionally using this label that has a lot of positive connotations but probably according to Open Source Initiative, which makes the open Source license, it should be called something else or there should just be like a new category for LLMs being but I don't want things to be more open. It could easily sound like a rebuke that it should be more open to make that point. But I also don't want to call it Open source because I think Open source software should probably does deserve a lot of its positive connotation, but they're not releasing the part, that the software part because that would cut into their business. I think it would be much worse. I think they shouldn't do it. But I also am not clear on this because the Open Source ML critics say that everyone does have access to the same data set as Llama Two. But I don't know. Llama Two had 7 billion tokens and that's more than GPT Four. And I don't understand all of the details here. It's possible that the tokenization process was different or something and that's why there were more. But Meta didn't say what was in the longitude data set and usually there's some description given of what's in the data set that led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be. It's not just like the common crawl backup of the Internet. Everybody's basing their training on that and then maybe some works of literature they're not supposed to. There's like a data set there that is in question, but metas is bigger than bigger than I think well, sorry, I don't have a list in front of me. I'm not going to get stuff wrong, but it's bigger than kind of similar models and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set. But yeah, the ML people would have or the open source ML people that I've been talking to would have believed that anybody who's decent can just access all of the training sets that they've all used.AARON: Aside, I tried to download in case I'm guessing, I don't know, it depends how many people listen to this. But in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I don't know. I knew a little bit of R, I think. I feel like I caught on the very last boat where I could know just barely enough programming to try to learn more, I guess. Coming out of college, I don't know, a couple of months ago, I tried to do the thing where you download Llama too, but I tried it all and now I just have like it didn't work. I have like a bunch of empty folders and I forget got some error message or whatever. Then I tried to train my own tried to train my own model on my MacBook. It just printed. That's like the only thing that a language model would do because that was like the most common token in the training set. 
So anyway, I'm just like, sorry, this is not important whatsoever.HOLLY: Yeah, I feel like torn about this because I used to be a genomicist and I used to do computational biology and it was not machine learning, but I used a highly parallel GPU cluster. And so I know some stuff about it and part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. It's always been people who are interested in from the beginning, it was people who are interested in singularity and then realized there was this problem. And so it's always been like people really interested in tech and wanting to be close to it. And I think we've been really influenced by our direction, has been really influenced by wanting to be where the action is with AI development. And I don't know that that was right.AARON: Not personal, but I guess individual level I'm not super worried about people like you and me losing the plot by learning more about ML on their personal.HOLLY: You know what I mean? But it does just feel sort of like I guess, yeah, this is maybe more of like a confession than, like a point. But it does feel a little bit like it's hard for me to enjoy in good conscience, like, the cool stuff.AARON: Okay. Yeah.HOLLY: I just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech because this is kind of their whole thing. And what would they do if we weren't working toward AI? This is a big fear that people express to me with they don't say it in so many words usually, but they say things like, well, I don't want AI to never get built about a pause. Which, by the way, just to clear up, my assumption is that a pause would be unless society ends for some other reason, that a pause would eventually be lifted. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so luddite in their insides that we would just never pick it up again. Or something like that. And, yeah, there's some identity stuff that's been expressed. Again, not in so many words to me about who will we be if we're just sort of like activists instead of working on.AARON: Maybe one thing that we might actually disagree on. It's kind of important is whether so I think we both agree that Aipause is better than the status quo, at least broadly, whatever. I know that can mean different things, but yeah, maybe I'm not super convinced, actually, that if I could just, like what am I trying to say? Maybe at least right now, if I could just imagine the world where open eye and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. And so I don't k
This episode features a panel discussion with Stefano Maffulli, Executive Director of the Open Source Initiative (OSI); and Stephen O'Grady, Co-founder of RedMonk. Stefano has decades of experience in open source advocacy. He co-founded the Italian chapter of Free Software Foundation Europe, built the developer community of the OpenStack Foundation, and led open source marketing teams at several international companies. Stephen has been an industry analyst for several decades and is author of the developer playbook, The New Kingmakers: How Developers Conquered the World.

In this episode, Sam, Stefano, and Stephen discuss the intersection of open source and AI, good data for everyone, and open data foundations.

-------------------

“Internet Archive, Wikipedia, they have that mission to accumulate data. The OpenStreetMap is another big one with a lot of interesting data. It's a fascinating space, though. There are so many facets of the word ‘data.' One of the reasons why open data is so hard to manage and hasn't had that same impact of open source is because, like Stephen, the stories that he was telling about the startups having a hard time assembling the mixing and matching, or modifying of data has a different connotation. It's completely different from being able to do the same with software.” – Stefano Maffulli

“It's also not clear how said foundation would get buy-in. Because, as far as a lot of the model holders themselves, they've been able to do most of what they want already. What's the foundation really going to offer them? They've done what they wanted. Not having any inside information here, but just judging by the fact that they are willing to indemnify their users, they feel very confident legally in their stance. Therefore, it at least takes one of the major cards off the table for them.” – Stephen O'Grady

-------------------

Episode Timestamps:
(01:44): What open source in the context of AI means to each guest
(16:21): Stefano explains OSI's opportunity to shine a light on models and teams
(21:22): The next step of open source AI according to Stephen
(25:38): Creating better definitions in order to modify software
(33:09): The case of funding an open data foundation
(42:31): The future of open source data
(51:54): Executive producer, Audra Montenegro's backstage takeaways

-------------------

Links:
LinkedIn - Connect with Stefano
Visit Open Source Initiative
LinkedIn - Connect with Stephen
Visit RedMonk
In this episode, Stefano Maffulli, Executive Director of the Open Source Initiative, discusses the need for a new definition as AI differs significantly from open source software. The complexity arises from the unique nature of AI, particularly large language models and transformers, which challenge traditional copyright frameworks. Maffulli emphasizes the urgency of establishing a definition for open source AI and discusses an ongoing effort to release a set of principles by the year's end.

The concept of "open" in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The recent upheaval at OpenAI, resulting in the removal of CEO Sam Altman, reflects a profound shift in the technology community, prompting a reconsideration of the definition of "open" in the realm of AI.

The conversation highlights the parallels between the current AI debate and the early days of software development, emphasizing the necessity for a cohesive approach to navigate the evolving landscape. Altman's ousting underscores a clash of belief systems within OpenAI, with a "safetyist" community advocating caution and transparency, while Altman leans towards experimentation. The historical significance of open source, with a focus on trust preservation over technical superiority, serves as a guide for defining "open" and "AI" in a rapidly changing environment.

Learn more from The New Stack about AI and Open Source:
Artificial Intelligence News, Analysis, and Resources
Open Source Development Threatened in Europe
The AI Engineer Foundation: Open Source for the Future of AI
Doc Searls and Simon Phipps talk with Luis Villa of Tidelift about how it helps code maintainers get paid, plus what's happening in AI, ML, regulation and more. Hosts: Doc Searls and Simon Phipps Guest: Luis Villa Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: kolide.com/floss
* Listen on Spotify or Apple Podcasts
* Be sure to check out and follow Holly's Substack and org Pause AI.

Blurb and summary from Clong

Blurb

Holly and Aaron had a wide-ranging discussion touching on effective altruism, AI alignment, genetic conflict, wild animal welfare, and the importance of public advocacy in the AI safety space. Holly spoke about her background in evolutionary biology and how she became involved in effective altruism. She discussed her reservations around wild animal welfare and her perspective on the challenges of AI alignment. They talked about the value of public opinion polls, the psychology of AI researchers, and whether certain AI labs like OpenAI might be net positive actors. Holly argued for the strategic importance of public advocacy and pushing the Overton window within EA on AI safety issues.

Detailed summary

* Holly's background - PhD in evolutionary biology, got into EA through New Atheism and looking for community with positive values, did EA organizing at Harvard
* Worked at Rethink Priorities on wild animal welfare but had reservations about imposing values on animals and whether we're at the right margin yet
* Got inspired by FLI letter to focus more on AI safety advocacy and importance of public opinion
* Discussed genetic conflict and challenges of alignment even with "closest" agents
* Talked about the value of public opinion polls and influencing politicians
* Discussed the psychology and motives of AI researchers
* Disagreed a bit on whether certain labs like OpenAI might be net positive actors
* Holly argued for importance of public advocacy in AI safety, thinks we have power to shift Overton window
* Talked about the dynamics between different AI researchers and competition for status
* Discussed how rationalists often dismiss advocacy and politics
* Holly thinks advocacy is neglected and can push the Overton window even within EA
* Also discussed Holly's evolutionary biology takes, memetic drive, gradient descent vs. natural selection

Full transcript (very imperfect)

AARON: You're an AI pause advocate. Can you remind me of your shtick before that? Did you have an EA career or something?

HOLLY: Yeah, before that I was an academic. I got into EA when I was doing my PhD in evolutionary biology, and I had been into New Atheism before that. I had done a lot of organizing for that in college. And while the enlightenment stuff and what I think is the truth about there not being a God was very important to me, I didn't like the lack of positive values. Half the people there were sort of people like me who were looking for community after leaving the religion that they grew up in. And sometimes as many as half of the people there were just looking for a way for it to be okay for them to upset people and take away stuff that was important to them. And I didn't love that. I didn't love organizing a space for that. And when I got to my first year at Harvard, Harvard Effective Altruism was advertising for its fellowship, which became the Elite Fellowship eventually. And I was like, wow, this is like, everything I want. And it has this positive organizing value around doing good. And so I was totally made for it. And pretty much immediately I did that fellowship, even though it was for undergrad. I did that fellowship, and I was immediately doing a lot of grad school organizing, and I did that for, like, six more years.
And yeah, by the time I got to the end of grad school, I realized I was very sick in my fifth year, and I realized the stuff I kept doing was EA organizing, and I did not want to keep doing work. And that was pretty clear. I thought, oh, because I'm really into my academic area, I'll do that, but I'll also have a component of doing good. I took giving what we can in the middle of grad school, and I thought, I actually just enjoy doing this more, so why would I do anything else? Then after grad school, I started applying for EA jobs, and pretty soon I got a job at Rethink Priorities, and they suggested that I work on wild animal welfare. And I have to say, from the beginning, it was a little bit like I don't know, I'd always had very mixed feelings about wild animal welfare as a cause area. How much do they assume the audience knows about EA?AARONA lot, I guess. I think as of right now, it's a pretty hardcore dozen people. Also. Wait, what year is any of this approximately?HOLLYSo I graduated in 2020.AARONOkay.HOLLYYeah. And then I was like, really?AARONOkay, this is not extremely distant history. Sometimes people are like, oh, yeah, like the OG days, like four or something. I'm like, oh, my God.HOLLYOh, yeah, no, I wish I had been in these circles then, but no, it wasn't until like, 2014 that I really got inducted. Yeah, which now feels old because everybody's so young. But yeah, in 2020, I finished my PhD, and I got this awesome remote job at Rethink Priorities during the Pandemic, which was great, but I was working on wild animal welfare, which I'd always had some. So wild animal welfare, just for anyone who's not familiar, is like looking at the state of the natural world and seeing if there's a way that usually the hedonic so, like, feeling pleasure, not pain sort of welfare of animals can be maximized. So that's in contrast to a lot of other ways of looking at the natural world, like conservation, which are more about preserving a state of the world the way preserving, maybe ecosystem balance, something like that. Preserving species diversity. The priority with wild animal welfare is the effect of welfare, like how it feels to be the animals. So it is very understudied, but I had a lot of reservations about it because I'm nervous about maximizing our values too hard onto animals or imposing them on other species.AARONOkay, that's interesting, just because we're so far away from the margin of I'm like a very pro wild animal animal welfare pilled person.HOLLYI'm definitely pro in theory.AARONHow many other people it's like you and formerly you and six other people or whatever seems like we're quite far away from the margin at which we're over optimizing in terms of giving heroin to all the sheep or I don't know, the bugs and stuff.HOLLYBut it's true the field is moving in more my direction and I think it's just because they're hiring more biologists and we tend to think this way or have more of this perspective. But I'm a big fan of Brian domestics work. But stuff like finding out which species have the most capacity for welfare I think is already sort of the wrong scale. I think a lot will just depend on how much. What are the conditions for that species?AARONYeah, no, there's like seven from the.HOLLYCoarseness and the abstraction, but also there's a lot of you don't want anybody to actually do stuff like that and it would be more possible to do the more simple sounding stuff. My work there just was consisted of being a huge downer. I respect that. 
I did do some work that I'm proud of. I have a whole sequence on EA forum about how we could reduce the use of rodenticide, which I think was the single most promising intervention that we came up with in the time that I was there. I mean, I didn't come up with it, but that we narrowed down. And even that just doesn't affect that many animals directly. It's really more about the impact is from what you think you'll get with moral circle expansion or setting precedents for the treatment of non human animals or wild animals, or semi wild animals, maybe like being able to be expanded into wild animals. And so it all felt not quite up to EA standards of impact. And I felt kind of uncomfortable trying to make this thing happen in EA when I wasn't sure that my tentative conclusion on wild animal welfare, after working on it and thinking about it a lot for three years, was that we're sort of waiting for transformative technology that's not here yet in order to be able to do the kinds of interventions that we want. And there are going to be other issues with the transformative technology that we have to deal with first.AARONYeah, no, I've been thinking not that seriously or in any formal way, just like once in a while I just have a thought like oh, I wonder how the field of, like, I guess wild animal sorry, not wild animal. Just like animal welfare in general and including wild animal welfare might make use of AI above and beyond. I feel like there's like a simple take which is probably mostly true, which is like, oh, I mean the phrase that everybody loves to say is make AI go well or whatever that but that's basically true. Probably you make aligned AI. I know that's like a very oversimplification and then you can have a bunch of wealth or whatever to do whatever you want. I feel like that's kind of like the standard line, but do you have any takes on, I don't know, maybe in the next couple of years or anything more specifically beyond just general purpose AI alignment, for lack of a better term, how animal welfare might put to use transformative AI.HOLLYMy last work at Rethink Priorities was like looking a sort of zoomed out look at the field and where it should go. And so we're apparently going to do a public version, but I don't know if that's going to happen. It's been a while now since I was expecting to get a call about it. But yeah, I'm trying to think of what can I scrape from that?AARONAs much as you can, don't reveal any classified information. But what was the general thing that this was about?HOLLYThere are things that I think so I sort of broke it down into a couple of categories. There's like things that we could do in a world where we don't get AGI for a long time, but we get just transformative AI. Short of that, it's just able to do a lot of parallel tasks. And I think we could do a lot we could get a lot of what we want for wild animals by doing a ton of surveillance and having the ability to make incredibly precise changes to the ecosystem. Having surveillance so we know when something is like, and the capacity to do really intense simulation of the ecosystem and know what's going to happen as a result of little things. We could do that all without AGI. You could just do that with just a lot of computational power. I think our ability to simulate the environment right now is not the best, but it's not because it's impossible. It's just like we just need a lot more observations and a lot more ability to simulate a comparison is meteorology. 
Meteorology used to be much more of an art, but it became more of a science once they started just literally taking for every block of air and they're getting smaller and smaller, the blocks. They just do Bernoulli's Law on it and figure out what's going to happen in that block. And then you just sort of add it all together and you get actually pretty good.AARONDo you know how big the blocks are?HOLLYThey get smaller all the time. That's the resolution increase, but I don't know how big the blocks are okay right now. And shockingly, that just works. That gives you a lot of the picture of what's going to happen with weather. And I think that modeling ecosystem dynamics is very similar to weather. You could say more players than ecosystems, and I think we could, with enough surveillance, get a lot better at monitoring the ecosystem and then actually have more of a chance of implementing the kinds of sweeping interventions we want. But the price would be just like never ending surveillance and having to be the stewards of the environment if we weren't automating. Depending on how much you want to automate and depending on how much you can automate without AGI or without handing it over to another intelligence.AARONYeah, I've heard this. Maybe I haven't thought enough. And for some reason, I'm just, like, intuitively. I feel like I'm more skeptical of this kind of thing relative to the actual. There's a lot of things that I feel like a person might be skeptical about superhuman AI. And I'm less skeptical of that or less skeptical of things that sound as weird as this. Maybe because it's not. One thing I'm just concerned about is I feel like there's a larger scale I can imagine, just like the choice of how much, like, ecosystem is like yeah, how much ecosystem is available for wild animals is like a pretty macro level choice that might be not at all deterministic. So you could imagine spreading or terraforming other planets and things like that, or basically continuing to remove the amount of available ecosystem and also at a much more practical level, clean meat development. I have no idea what the technical bottlenecks on that are right now, but seems kind of possible that I don't know, AI can help it in some capacity.HOLLYOh, I thought you're going to say that it would increase the amount of space available for wild animals. Is this like a big controversy within, I don't know, this part of the EA animal movement? If you advocate diet change and if you get people to be vegetarians, does that just free up more land for wild animals to suffer on? I thought this was like, guys, we just will never do anything if we don't choose sort of like a zone of influence and accomplish something there. It seemed like this could go on forever. It was like, literally, I rethink actually. A lot of discussions would end in like, okay, so this seems like really good for all of our target populations, but what about wild animals? I could just reverse everything. I don't know. The thoughts I came to on that were that it is worthwhile to try to figure out what are all of the actual direct effects, but I don't think we should let that guide our decision making. Only you have to have some kind of theory of change, of what is the direct effect going to lead to? And I just think that it's so illegible what you're trying to do. If you're, like, you should eat this kind of fish to save animals. It doesn't lead society to adopt, to understand and adopt your values. 
It's so predicated on a moment in time that might be convenient. Maybe I'm not looking hard enough at that problem, but the conclusion I ended up coming to was just like, look, I just think we have to have some idea of not just the direct impacts, but something about the indirect impacts and what's likely to facilitate other direct impacts that we want in the future.AARONYeah. I also share your I don't know. I'm not sure if we share the same or I also feel conflicted about this kind of thing. Yeah. And I don't know, at the very least, I have a very high bar for saying, actually the worst of factory farming is like, we should just like, yeah, we should be okay with that, because some particular model says that at this moment in time, it has some net positive effect on animal welfare.HOLLYWhat morality is that really compatible with? I mean, I understand our morality, but maybe but pretty much anyone else who hears that conclusion is going to think that that means that the suffering doesn't matter or something.AARONYeah, I don't know. I think maybe more than you, I'm willing to bite the bullet if somebody really could convince me that, yeah, chicken farming is actually just, in fact, good, even though it's counterintuitive, I'll be like, all right, fine.HOLLYSurely there are other ways of occupying.AARONYeah.HOLLYSame with sometimes I would get from very classical wild animal suffering people, like, comments on my rodenticide work saying, like, well, what if it's good to have more rats? I don't know. There are surely other vehicles for utility other than ones that humans are bent on destroying.AARONYeah, it's kind of neither here nor there, but I don't actually know if this is causally important, but at least psychologically. I remember seeing a mouse in a glue trap was very had an impact on me from maybe turning me, like, animal welfare pills or something. That's like, neither here nor there. It's like a random anecdote, but yeah, seems bad. All right, what came after rethink for you?HOLLYYeah. Well, after the publication of the FLI Letter and Eliezer's article in Time, I was super inspired by pause. A number of emotional changes happened to me about AI safety. Nothing intellectual changed, but just I'd always been confused at and kind of taken it as a sign that people weren't really serious about AI risk when they would say things like, I don't know, the only option is alignment. The only option is for us to do cool, nerd stuff that we love doing nothing else would. I bought the arguments, but I just wasn't there emotionally. And seeing Eliezer advocate political change because he wants to save everyone's lives and he thinks that's something that we can do. Just kind of I'm sure I didn't want to face it before because it was upsetting. Not that I haven't faced a lot of upsetting and depressing things like I worked in wild animal welfare, for God's sake, but there was something that didn't quite add up for me, or I hadn't quite grocked about AI safety until seeing Eliezer really show that his concern is about everyone dying. And he's consistent with that. He's not caught on only one way of doing it, and it just kind of got in my head and I kept wanting to talk about it at work and it sort of became clear like they weren't going to pursue that sort of intervention. But I kept thinking of all these parallels between animal advocacy stuff that I knew and what could be done in AI safety. 
And these polls kept coming out showing that there was really high support for Paws and I just thought, this is such a huge opportunity, I really would love to help out. Originally I was looking around for who was going to be leading campaigns that I could volunteer in, and then eventually I thought, it just doesn't seem like somebody else is going to do this in the Bay Area. So I just ended up quitting rethink and being an independent organizer. And that has been really I mean, honestly, it's like a tough subject. It's like a lot to deal with, but honestly, compared to wild animal welfare, it's not that bad. And I think I'm pretty used to dealing with tough and depressing low tractability causes, but I actually think this is really tractable. I've been shocked how quickly things have moved and I sort of had this sense that, okay, people are reluctant in EA and AI safety in particular, they're not used to advocacy. They kind of vaguely think that that's bad politics is a mind killer and it's a little bit of a threat to the stuff they really love doing. Maybe that's not going to be so ascendant anymore and it's just stuff they're not familiar with. But I have the feeling that if somebody just keeps making this case that people will take to it, that I could push the Oberson window with NEA and that's gone really well.AARONYeah.HOLLYAnd then of course, the public is just like pretty down. It's great.AARONYeah. I feel like it's kind of weird because being in DC and I've always been, I feel like I actually used to be more into politics, to be clear. I understand or correct me if I'm wrong, but advocacy doesn't just mean in the political system or two politicians or whatever, but I assume that's like a part of what you're thinking about or not really.HOLLYYeah. Early on was considering working on more political process type advocacy and I think that's really important. I totally would have done it. I just thought that it was more neglected in our community to do advocacy to the public and a lot of people had entanglements that prevented them from doing so. They work sort of with AI labs or it's important to their work that they not declare against AI labs or something like that or be perceived that way. And so they didn't want to do public advocacy that could threaten what else they're doing. But I didn't have anything like that. I've been around for a long time in EA and I've been keeping up on AI safety, but I've never really worked. That's not true. I did a PiBBs fellowship, but.AARONI've.HOLLYNever worked for anybody in like I was just more free than a lot of other people to do the public messaging and so I kind of felt that I should. Yeah, I'm also more willing to get into conflict than other EA's and so that seems valuable, no?AARONYeah, I respect that. Respect that a lot. Yeah. So like one thing I feel like I've seen a lot of people on Twitter, for example. Well, not for example. That's really just it, I guess, talking about polls that come out saying like, oh yeah, the public is super enthusiastic about X, Y or Z, I feel like these are almost meaningless and maybe you can convince me otherwise. It's not exactly to be clear, I'm not saying that. I guess it could always be worse, right? All things considered, like a poll showing X thing is being supported is better than the opposite result, but you can really get people to say anything. 
Maybe I'm just wondering about the degree to which the public how do you imagine the public and I'm doing air quotes to playing into policies either of, I guess, industry actors or government actors?HOLLYWell, this is something actually that I also felt that a lot of EA's were unfamiliar with. But it does matter to our representatives, like what the constituents think it matters a mean if you talk to somebody who's ever interned in a congressperson's office, one person calling and writing letters for something can have actually depending on how contested a policy is, can have a largeish impact. My ex husband was an intern for Jim Cooper and they had this whole system for scoring when calls came in versus letters. Was it a handwritten letter, a typed letter? All of those things went into how many points it got and that was something they really cared about. Politicians do pay attention to opinion polls and they pay attention to what their vocal constituents want and they pay attention to not going against what is the norm opinion. Even if nobody in particular is pushing them on it or seems to feel strongly about it. They really are trying to calibrate themselves to what is the norm. So those are always also sometimes politicians just get directly convinced by arguments of what a policy should be. So yeah, public opinion is, I think, underappreciated by ya's because it doesn't feel like mechanistic. They're looking more for what's this weird policy hack that's going to solve what's? This super clever policy that's going to solve things rather than just like what's acceptable discourse, like how far out of his comfort zone does this politician have to go to advocate for this thing? How unpopular is it going to be to say stuff that's against this thing that now has a lot of public support?AARONYeah, I guess mainly I'm like I guess I'm also I definitely could be wrong with this, but I would expect that a lot of the yeah, like for like when politicians like, get or congresspeople like, get letters and emails or whatever on a particular especially when it's relevant to a particular bill. And it's like, okay, this bill has already been filtered for the fact that it's going to get some yes votes and some no votes and it's close to or something like that. Hearing from an interested constituency is really, I don't know, I guess interesting evidence. On the other hand, I don't know, you can kind of just get Americans to say a lot of different things that I think are basically not extremely unlikely to be enacted into laws. You know what I mean? I don't know. You can just look at opinion. Sorry. No great example comes to mind right now. But I don't know, if you ask the public, should we do more safety research into, I don't know, anything. If it sounds good, then people will say yes, or am I mistaken about this?HOLLYI mean, on these polls, usually they ask the other way around as well. Do you think AI is really promising for its benefits and should be accelerated? They answer consistently. It's not just like, well now that sounds positive. Okay. I mean, a well done poll will correct for these things. Yeah. I've encountered a lot of skepticism about the polls. Most of the polls on this have been done by YouGov, which is pretty reputable. And then the ones that were replicated by rethink priorities, they found very consistent results and I very much trust Rethink priorities on polls. Yeah. 
I've had people say, well, these framings are I don't know, they object and wonder if it's like getting at the person's true beliefs. And I kind of think like, I don't know, basically this is like the kind of advocacy message that I would give and people are really receptive to it. So to me that's really promising. Whether or not if you educated them a lot more about the topic, they would think the same is I don't think the question but that's sometimes an objection that I get. Yeah, I think they're indicative. And then I also think politicians just care directly about these things. If they're able to cite that most of the public agrees with this policy, that sort of gives them a lot of what they want, regardless of whether there's some qualification to does the public really think this or are they thinking hard enough about it? And then polls are always newsworthy. Weirdly. Just any poll can be a news story and journalists love them and so it's a great chance to get exposure for the whatever thing. And politicians do care what's in the news. Actually, I think we just have more influence over the political process than EA's and less wrongers tend to believe it's true. I think a lot of people got burned in AI safety, like in the previous 20 years because it would be dismissed. It just wasn't in the overton window. But I think we have a lot of power now. Weirdly. People care what effective altruists think. People see us as having real expertise. The AI safety community does know the most about this. It's pretty wild now that's being recognized publicly and journalists and the people who influence politicians, not directly the people, but the Fourth Estate type, people pay attention to this and they influence policy. And there's many levels of I wrote if people want a more detailed explanation of this, but still high level and accessible, I hope I wrote a thing on EA forum called The Case for AI Safety Advocacy. And that kind of goes over this concept of outside versus inside game. So inside game is like working within a system to change it. Outside game is like working outside the system to put pressure on that system to change it. And I think there's many small versions of this. I think that it's helpful within EA and AI safety to be pushing the overton window of what I think that people have a wrong understanding of how hard it is to communicate this topic and how hard it is to influence governments. I want it to be more acceptable. I want it to feel more possible in EA and AI safety to go this route. And then there's the public public level of trying to make them more familiar with the issue, frame it in the way that I want, which is know, with Sam Altman's tour, the issue kind of got framed as like, well, AI is going to get built, but how are we going to do it safely? And then I would like to take that a step back and be like, should AI be built or should AGI be just if we tried, we could just not do that, or we could at least reduce the speed. And so, yeah, I want people to be exposed to that frame. I want people to not be taken in by other frames that don't include the full gamut of options. I think that's very possible. And then there's a lot of this is more of the classic thing that's been going on in AI safety for the last ten years is trying to influence AI development to be more safety conscious. And that's like another kind of dynamic. There, like trying to change sort of the general flavor, like, what's acceptable? Do we have to care about safety? What is safety? 
That's also kind of a window pushing exercise.AARONYeah. Cool. Luckily, okay, this is not actually directly responding to anything you just said, which is luck. So I pulled up this post. So I should have read that. Luckily, I did read the case for slowing down. It was like some other popular post as part of the, like, governance fundamentals series. I think this is by somebody, Zach wait, what was it called? Wait.HOLLYIs it by Zach or.AARONKatya, I think yeah, let's think about slowing down AI. That one. So that is fresh in my mind, but yours is not yet. So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.HOLLYWell, right now I'm hopeful about the UK AI summit. Pause AI and I have planned a multi city protest on the 21 October to encourage the UK AI Safety Summit to focus on safety first and to have as a topic arranging a pause or that of negotiation. There's a lot of a little bit upsetting advertising for that thing that's like, we need to keep up capabilities too. And I just think that's really a secondary objective. And that's how I wanted to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary General has been talking about this and there have been meetings about this. It's happened so much faster at the beginning of this year. Nobody thought we could talk about nobody was thinking we'd be talking about this as a mainstream topic. And then actually governments have been very receptive anyway. So right now I'm focused on other than just influencing opinion, the targets I'm focused on, or things like encouraging these international like, I have a protest on Friday, my first protest that I'm leading and kind of nervous that's against Meta. It's at the Meta building in San Francisco about their sharing of model weights. They call it open source. It's like not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret and it conveys pretty clearly what we want very quickly. And you don't have a lot of bandwidth and advocacy. You write a lot of materials for a protest, but mostly what people see is the title.AARONThat's interesting because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, pause AI is simpler, but in some sense it's not nearly as obvious. At least maybe I'm more of a tech brain person or whatever. But why that is good, as opposed to don't give extremely powerful thing to the worst people in the world. That's like a longer everyone.HOLLYMaybe I'm just weird. I've gotten the feedback from open source ML people is the number one thing is like, it's too late, there's already super powerful models. There's nothing you can do to stop us, which sounds so villainous, I don't know if that's what they mean. Well, actually the number one message is you're stupid, you're not an ML engineer. Which like, okay, number two is like, it's too late, there's nothing you can do. There's all of these other and Meta is not even the most powerful generator of models that it share of open source models. I was like, okay, fine. And I don't know, I don't think that protesting too much is really the best in these situations. I just mostly kind of let that lie. I could give my theory of change on this and why I'm focusing on Meta. 
Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco near where yeah, Meta is the biggest company that is doing this and I think there should be a norm against model weight sharing. I was hoping it would be something that other employees of other labs would be comfortable attending and that is a policy that is not shared across the labs. Obviously the biggest labs don't do it. So OpenAI is called OpenAI but very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it more clear than pause AI. Does that anybody's welcome something? I thought a one off issue like this that a lot of people could agree and form a coalition around would be good. A lot of people think that this is like a lot of the open source ML people think know this is like a secret. What I'm saying is secretly an argument for tyranny. I just want centralization of power. I just think that there are elites that are better qualified to run everything. It was even suggested I didn't mention China. It even suggested that I was racist because I didn't think that foreign people could make better AIS than Meta.AARONI'm grimacing here. The intellectual disagreeableness, if that's an appropriate term or something like that. Good on you for standing up to some pretty bad arguments.HOLLYYeah, it's not like that worth it. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt. Right. For instance, I'm kind of like sure I'm on list somewhere because of the forums I was on just because I was interested and it is something that serves me well with my adversaries. I've enjoyed some conversations with people where I kind of like because my position on all this is that look, I need to be convinced and the public needs to be convinced that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being like, can you explain like I'm five. I still don't get it. How does this work?AARONYeah, no, I was thinking actually not long ago about open source. Like the phrase has such a positive connotation and in a lot of contexts it really is good. I don't know. I'm glad that random tech I don't know, things from 2004 or whatever, like the reddit source code is like all right, seems cool that it's open source. I don't actually know if that was how that right. But yeah, I feel like maybe even just breaking down what the positive connotation comes from and why it's in people's self. This is really what I was thinking about, is like, why is it in people's self interest to open source things that they made and that might break apart the allure or sort of ethical halo that it has around it? And I was thinking it probably has something to do with, oh, this is like how if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy. Definitely can be hired in the future. 
And if you're not wealthy yet I don't mean to put things in just materialist terms, but basically it could easily be just like in a yeah, I think I'll probably take that bit out because I didn't mean to put it in strictly like monetary terms, but basically it just seems like pretty plausibly in an arbitrary tech person's self interest, broadly construed to, in fact, open source their thing, which is totally fine and normal.HOLLYI think that's like 99 it's like a way of showing magnanimity showing, but.AARONI don't make this sound so like, I think 99.9% of human behavior is like this. I'm not saying it's like, oh, it's some secret, terrible self interested thing, but just making it more mechanistic. Okay, it's like it's like a status thing. It's like an advertising thing. It's like, okay, you're not really in need of direct economic rewards, or sort of makes sense to play the long game in some sense, and this is totally normal and fine, but at the end of the day, there's reasons why it makes sense, why it's in people's self interest to open source.HOLLYLiterally, the culture of open source has been able to bully people into, like, oh, it's immoral to keep it for yourself. You have to release those. So it's just, like, set the norms in a lot of ways, I'm not the bully. Sounds bad, but I mean, it's just like there is a lot of pressure. It looks bad if something is closed source.AARONYeah, it's kind of weird that Meta I don't know, does Meta really think it's in their I don't know. Most economic take on this would be like, oh, they somehow think it's in their shareholders interest to open source.HOLLYThere are a lot of speculations on why they're doing this. One is that? Yeah, their models aren't as good as the top labs, but if it's open source, then open source quote, unquote then people will integrate it llama Two into their apps. Or People Will Use It And Become I don't know, it's a little weird because I don't know why using llama Two commits you to using llama Three or something, but it just ways for their models to get in in places where if you just had to pay for their models too, people would go for better ones. That's one thing. Another is, yeah, I guess these are too speculative. I don't want to be seen repeating them since I'm about to do this purchase. But there's speculation that it's in best interests in various ways to do this. I think it's possible also that just like so what happened with the release of Llama One is they were going to allow approved people to download the weights, but then within four days somebody had leaked Llama One on four chan and then they just were like, well, whatever, we'll just release the weights. And then they released Llama Two with the weights from the beginning. And it's not like 100% clear that they intended to do full open source or what they call Open source. And I keep saying it's not open source because this is like a little bit of a tricky point to make. So I'm not emphasizing it too much. So they say that they're open source, but they're not. The algorithms are not open source. There are open source ML models that have everything open sourced and I don't think that that's good. I think that's worse. So I don't want to criticize them for that. But they're saying it's open source because there's all this goodwill associated with open source. But actually what they're doing is releasing the product for free or like trade secrets even you could say like things that should be trade secrets. 
And yeah, they're telling people how to make it themselves. So it's like a little bit of a they're intentionally using this label that has a lot of positive connotations but probably according to Open Source Initiative, which makes the open Source license, it should be called something else or there should just be like a new category for LLMs being but I don't want things to be more open. It could easily sound like a rebuke that it should be more open to make that point. But I also don't want to call it Open source because I think Open source software should probably does deserve a lot of its positive connotation, but they're not releasing the part, that the software part because that would cut into their business. I think it would be much worse. I think they shouldn't do it. But I also am not clear on this because the Open Source ML critics say that everyone does have access to the same data set as Llama Two. But I don't know. Llama Two had 7 billion tokens and that's more than GPT Four. And I don't understand all of the details here. It's possible that the tokenization process was different or something and that's why there were more. But Meta didn't say what was in the longitude data set and usually there's some description given of what's in the data set that led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be. It's not just like the common crawl backup of the Internet. Everybody's basing their training on that and then maybe some works of literature they're not supposed to. There's like a data set there that is in question, but metas is bigger than bigger than I think well, sorry, I don't have a list in front of me. I'm not going to get stuff wrong, but it's bigger than kind of similar models and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set. But yeah, the ML people would have or the open source ML people that I've been talking to would have believed that anybody who's decent can just access all of the training sets that they've all used.AARONAside, I tried to download in case I'm guessing, I don't know, it depends how many people listen to this. But in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I don't know. I knew a little bit of R, I think. I feel like I caught on the very last boat where I could know just barely enough programming to try to learn more, I guess. Coming out of college, I don't know, a couple of months ago, I tried to do the thing where you download Llama too, but I tried it all and now I just have like it didn't work. I have like a bunch of empty folders and I forget got some error message or whatever. Then I tried to train my own tried to train my own model on my MacBook. It just printed. That's like the only thing that a language model would do because that was like the most common token in the training set. So anyway, I'm just like, sorry, this is not important whatsoever.HOLLYYeah, I feel like torn about this because I used to be a genomicist and I used to do computational biology and it was not machine learning, but I used a highly parallel GPU cluster. And so I know some stuff about it and part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. 
It's always been people who are interested in from the beginning, it was people who are interested in singularity and then realized there was this problem. And so it's always been like people really interested in tech and wanting to be close to it. And I think we've been really influenced by our direction, has been really influenced by wanting to be where the action is with AI development. And I don't know that that was right.AARONNot personal, but I guess individual level I'm not super worried about people like you and me losing the plot by learning more about ML on their personal.HOLLYYou know what I mean? But it does just feel sort of like I guess, yeah, this is maybe more of like a confession than, like a point. But it does feel a little bit like it's hard for me to enjoy in good conscience, like, the cool stuff.AARONOkay. Yeah.HOLLYI just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech because this is kind of their whole thing. And what would they do if we weren't working toward AI? This is a big fear that people express to me with they don't say it in so many words usually, but they say things like, well, I don't want AI to never get built about a pause. Which, by the way, just to clear up, my assumption is that a pause would be unless society ends for some other reason, that a pause would eventually be lifted. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so luddite in their insides that we would just never pick it up again. Or something like that. And, yeah, there's some identity stuff that's been expressed. Again, not in so many words to me about who will we be if we're just sort of like activists instead of working on.AARONMaybe one thing that we might actually disagree on. It's kind of important is whether so I think we both agree that Aipause is better than the status quo, at least broadly, whatever. I know that can mean different things, but yeah, maybe I'm not super convinced, actually, that if I could just, like what am I trying to say? Maybe at least right now, if I could just imagine the world where open eye and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. And so I don't know. I don't think that actually that's not an actual possibility. But, like, maybe, like, we have a different idea about, like, the degree to which, like, a problem is just, like, a million different not even a million, but, say, like, a thousand different actors, like, having increasingly powerful models versus, like, the actual, like like the actual, like, state of the art right now, being plausibly near a dangerous threshold or something. Does this make any sense to you?HOLLYBoth those things are yeah, and this is one thing I really like about the pause position is that unlike a lot of proposals that try to allow for alignment, it's not really close to a bad choice. It's just more safe. I mean, it might be foregoing some value if there is a way to get an aligned AI faster. But, yeah, I like the pause position because it's kind of robust to this. I can't claim to know more about alignment than OpenAI or anthropic staff. I think they know much more about it. 
But I have fundamental doubts about the concept of alignment that make me think I'm concerned about even if things go right, like, what perverse consequences go nominally right, like, what perverse consequences could follow from that. I have, I don't know, like a theory of psychology that's, like, not super compatible with alignment. Like, I think, like yeah, like humans in living in society together are aligned with each other, but the society is a big part of that. The people you're closest to are also my background in evolutionary biology has a lot to do with genetic conflict.AARONWhat is that?HOLLYGenetic conflict is so interesting. Okay, this is like the most fascinating topic in biology, but it's like, essentially that in a sexual species, you're related to your close family, you're related to your ken, but you're not the same as them. You have different interests. And mothers and fathers of the same children have largely overlapping interests, but they have slightly different interests in what happens with those children. The payoff to mom is different than the payoff to dad per child. One of the classic genetic conflict arenas and one that my advisor worked on was my advisor was David Haig, was pregnancy. So mom and dad both want an offspring that's healthy. But mom is thinking about all of her offspring into the future. When she thinks about how much.AARONWhen.HOLLYMom is giving resources to one baby, that is in some sense depleting her ability to have future children. But for dad, unless the species is.AARONPerfect, might be another father in the future.HOLLYYeah, it's in his interest to take a little more. And it's really interesting. Like the tissues that the placenta is an androgenetic tissue. This is all kind of complicated. I'm trying to gloss over some details, but it's like guided more by genes that are active in when they come from the father, which there's this thing called genomic imprinting that first, and then there's this back and forth. There's like this evolution between it's going to serve alleles that came from dad imprinted, from dad to ask for more nutrients, even if that's not good for the mother and not what the mother wants. So the mother's going to respond. And you can see sometimes alleles are pretty mismatched and you get like, mom's alleles want a pretty big baby and a small placenta. So sometimes you'll see that and then dad's alleles want a big placenta and like, a smaller baby. These are so cool, but they're so hellishly complicated to talk about because it involves a bunch of genetic concepts that nobody talks about for any other reason.AARONI'm happy to talk about that. Maybe part of that dips below or into the weeds threshold, which I've kind of lost it, but I'm super interested in this stuff.HOLLYYeah, anyway, so the basic idea is just that even the people that you're closest with and cooperate with the most, they tend to be clearly this is predicated on our genetic system. There's other and even though ML sort of evolves similarly to natural selection through gradient descent, it doesn't have the same there's no recombination, there's not genes, so there's a lot of dis analogies there. But the idea that being aligned to our psychology would just be like one thing. Our psychology is pretty conditional. I would agree that it could be one thing if we had a VNM utility function and you could give it to AGI, I would think, yes, that captures it. 
But even then, that utility function, it covers when you're in conflict with someone, it covers different scenarios. And so I just am like not when people say alignment. I think what they're imagining is like an omniscient. God, who knows what would be best? And that is different than what I think could be meant by just aligning values.AARONNo, I broadly very much agree, although I do think at least this is my perception, is that based on the right 95 to 2010 Miri corpus or whatever, alignment was like alignment meant something that was kind of not actually possible in the way that you're saying. But now that we have it seems like actually humans have been able to get ML models to understand basically human language pretty shockingly. Well, and so actually, just the concern about maybe I'm sort of losing my train of thought a little bit, but I guess maybe alignment and misalignment aren't as binary as they were initially foreseen to be or something. You can still get a language model, for example, that tries to well, I guess there's different types of misleading but be deceptive or tamper with its reward function or whatever. Or you can get one that's sort of like earnestly trying to do the thing that its user wants. And that's not an incoherent concept anymore.HOLLYNo, it's not. Yeah, so yes, there is like, I guess the point of bringing up the VNM utility function was that there was sort of in the past a way that you could mathematically I don't know, of course utility functions are still real, but that's not what we're thinking anymore. We're thinking more like training and getting the gist of what and then getting corrections when you're not doing the right thing according to our values. But yeah, sorry. So the last piece I should have said originally was that I think with humans we're already substantially unaligned, but a lot of how we work together is that we have roughly similar capabilities. And if the idea of making AGI is to have much greater capabilities than we have, that's the whole point. I just think when you scale up like that, the divisions in your psyche or are just going to be magnified as well. And this is like an informal view that I've been developing for a long time, but just that it's actually the low capabilities that allows alignment or similar capabilities that makes alignment possible. And then there are, of course, mathematical structures that could be aligned at different capabilities. So I guess I have more hope if you could find the utility function that would describe this. But if it's just a matter of acting in distribution, when you increase your capabilities, you're going to go out of distribution or you're going to go in different contexts, and then the magnitude of mismatch is going to be huge. I wish I had a more formal way of describing this, but that's like my fundamental skepticism right now that makes me just not want anyone to build it. I think that you could have very sophisticated ideas about alignment, but then still just with not when you increase capabilities enough, any little chink is going to be magnified and it could be yeah.AARONSeems largely right, I guess. You clearly have a better mechanistic understanding of ML.HOLLYI don't know. My PiBBs project was to compare natural selection and gradient descent and then compare gradient hacking to miotic drive, which is the most analogous biological this is a very cool thing, too. Meatic drive. So Meiosis, I'll start with that for everyone.AARONThat's one of the cell things.HOLLYYes. Right. 
So Mitosis is the one where cells just divide in your body to make more skin. But Meiosis is the special one where you go through two divisions to make gametes. So you go from like we normally have two sets of chromosomes in each cell, but the gametes, they recombine between the chromosomes. You get different combinations with new chromosomes and then they divide again to bring them down to one copy each. And then like that, those are your gametes. And the gametes eggs come together with sperm to make a zygote and the cycle goes on. But during Meiosis, the point of it is to I mean, I'm going to just assert some things that are not universally accepted, but I think this is by far the best explanation. But the point of it is to take this like, you have this huge collection of genes that might have individually different interests, and you recombine them so that they don't know which genes they're going to be with in the next generation. They know which genes they're going to be with, but which allele of those genes. So I'm going to maybe simplify some terminology because otherwise, what's to stop a bunch of genes from getting together and saying, like, hey, if we just hack the Meiosis system or like the division system to get into the gametes, we can get into the gametes at a higher rate than 50%. And it doesn't matter. We don't have to contribute to making this body. We can just work on that.AARONWhat is to stop that?HOLLYYeah, well, Meiosis is to stop that. Meiosis is like a government system for the genes. It makes it so that they can't plan to be with a little cabal in the next generation because they have some chance of getting separated. And so their best chance is to just focus on making a good organism. But you do see lots of examples in nature of where that cooperation is breaking down. So some group of genes has found an exploit and it is fucking up the species. Species do go extinct because of this. It's hard to witness this happening. But there are several species. There's this species of cedar that has a form of this which is, I think, maternal genome. It's maternal genome elimination. So when the zygote comes together, the maternal chromosomes are just thrown away and it's like terrible because that affects the way that the thing works and grows, that it's put them in a death spiral and they're probably going to be extinct. And they're trees, so they live a long time, but they're probably going to be extinct in the next century. There's lots of ways to hack meiosis to get temporary benefit for genes. This, by the way, I just think is like nail in the coffin. Obviously, gene centered view is the best evolutionarily. What is the best the gene centered view of evolution.AARONAs opposed to sort of standard, I guess, high school college thing would just be like organisms.HOLLYYeah, would be individuals. Not that there's not an accurate way to talk in terms of individuals or even in terms of groups, but to me, conceptually.AARONThey'Re all legit in some sense. Yeah, you could talk about any of them. Did anybody take like a quirk level? Probably not. That whatever comes below the level of a gene, like an individual.HOLLYWell, there is argument about what is a gene because there's multiple concepts of genes. You could look at what's the part that makes a protein or you can look at what is the unit that tends to stay together in recombination or something like over time.AARONI'm sorry, I feel like I cut you off. It's something interesting. 
There was meiosis.HOLLYMeiotic drive is like the process of hacking meiosis so that a handful of genes can be more represented in the next generation. So otherwise the only way to get more represented in the next generation is to just make a better organism, like to be naturally selected. But you can just cheat and be like, well, if I'm in 90% of the sperm, I will be next in the next generation. And essentially meiosis has to work for natural selection to work in large organisms with a large genome and then yeah, ingredient descent. We thought the analogy was going to be with gradient hacking, that there would possibly be some analogy. But I think that the recombination thing is really the key in Meadic Drive. And then there's really nothing like that in.AARONThere'S. No selection per se. I don't know, maybe that doesn't. Make a whole lot of sense.HOLLYWell, I mean, in gradient, there's no.AARONG in analog, right?HOLLYThere's no gene analog. Yeah, but there is, like I mean, it's a hill climbing algorithm, like natural selection. So this is especially, I think, easy to see if you're familiar with adaptive landscapes, which looks very similar to I mean, if you look at a schematic or like a model of an illustration of gradient descent, it looks very similar to adaptive landscapes. They're both, like, in dimensional spaces, and you're looking at vectors at any given point. So the adaptive landscape concept that's usually taught for evolution is, like, on one axis you have fitness, and on the other axis you have well, you can have a lot of things, but you have and you have fitness of a population, and then you have fitness on the other axis. And what it tells you is the shape of the curve there tells you which direction evolution is going to push or natural selection is going to push each generation. And so with gradient descent, there's, like, finding the gradient to get to the lowest value of the cost function, to get to a local minimum at every step. And you follow that. And so that part is very similar to natural selection, but the Miosis hacking just has a different mechanism than gradient hacking would. Gradient hacking probably has to be more about I kind of thought that there was a way for this to work. If fine tuning creates a different compartment that doesn't there's not full backpropagation, so there's like kind of two different compartments in the layers or something. But I don't know if that's right. My collaborator doesn't seem to think that that's very interesting. I don't know if they don't even.AARONKnow what backup that's like a term I've heard like a billion times.HOLLYIt's updating all the weights and all the layers based on that iteration.AARONAll right. I mean, I can hear those words. I'll have to look it up later.HOLLYYou don't have to full I think there are probably things I'm not understanding about the ML process very well, but I had thought that it was something like yeah, like in yeah, sorry, it's probably too tenuous. But anyway, yeah, I've been working on this a little bit for the last year, but I'm not super sharp on my arguments about that.AARONWell, I wouldn't notice. You can kind of say whatever, and I'll nod along.HOLLYI got to guard my reputation off the cuff anymore.AARONWe'll edit it so you're correct no matter what.HOLLYHave you ever edited the Oohs and UMS out of a podcast and just been like, wow, I sound so smart? 
AARON: I haven't, but actually, the 80,000 Hours After Hours podcast, the first episode of theirs, I interviewed Rob and his producer Keiran Harris, and they have actual professional sound editing. And so, yeah, I went from totally incoherent, well, not totally incoherent, but sarcastically totally incoherent, to sounding like a normal person because of that.
HOLLY: I used to use it to take my laughter out. I did a podcast when I was an organizer at Harvard, the Harvard Effective Altruism podcast, and I laughed a lot more then than I do now. And we even got comments about it. We got very few comments, but they were like, "girl host laughs too much." So I would take my laughter out myself, and I was like, wow, this suddenly sounds so much more serious.
AARON: Yeah, I don't know. I definitely say "like" and "um" too much. So maybe I will try to actually...
HOLLY: Realistically, that sounds like so much effort, it's not really worth it, and nobody else really notices. But I go through periods where I say "like" a lot, and when I hear myself back in interviews, that really bugs me.
AARON: Yeah.
HOLLY: God, it sounds so stupid.
AARON: No. Well, I'm definitely worse. Yeah. I'm sure there'll be a way to automate this. Well, not sure, but probably in the not-too-distant future.
HOLLY: People were sending around, like, transcripts of Trump to underscore how incoherent he is. I'm like, I sound like that sometimes.
AARON: Oh, yeah, same. I didn't actually realize that. This is especially bad when I get this transcribed. I don't know how people... this is a good example: the last 10 seconds, if I get it transcribed, it'll make no sense whatsoever. There's a free service called AssemblyAI Playground that does free speaker-based transcription, and that makes sense. But if we just get this transcribed without identifying who's speaking, it'll be even worse than that. Yeah, actually, this is a totally random thought, but I spent a non-zero amount of effort trying to figure out how to combine the highest quality transcription, like Whisper, with the slightly less good speaker-based transcriptions. You could infer who's speaking based on the lower quality one, but then replace incorrect words with correct words. And I never... I don't know.
HOLLY: I'm sure somebody... that'd be nice. I would do transcripts if it were that easy, but I just never have. It is annoying, because I do like to give people the chance to veto certain segments, and that can get tough, because even if I talk...
AARON: You have podcasts that I don't know about?
HOLLY: Well, I used to have the Harvard one, which was called The Turing Test. And then, yeah, I do have...
AARON: I probably listened to that and didn't know it was you.
HOLLY: Okay, maybe Alish was the other host.
AARON: I mean, it's been a little while since... yeah.
HOLLY: And then on mine I, like, publish audio stuff sometimes, but it's called "low effort," to underscore...
AARON: Oh, yeah, I didn't actually... Okay. Great minds think alike. Low effort podcasts are the future. In fact, this is super intelligent.
HOLLY: I just have them as a way to catch up with friends and stuff and talk about their lives in a way that... I mean, recorded conversations are just better. You're more "on," and you get to talk about stuff that's interesting but feels too much like, well, you already know this, if you're not recording it.
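The merging idea floated above, i.e. keep the accurate text from a strong transcription model and borrow the speaker labels from a rougher diarized transcript, can be approximated by matching segments on timestamp overlap. The sketch below is a rough illustration under assumed data shapes (plain lists of dicts with start, end, text, and speaker fields); it is not the API of Whisper, AssemblyAI, or any other service, and the example data is hypothetical.

```python
def overlap(a, b):
    """Seconds of overlap between two time-stamped segments."""
    return max(0.0, min(a["end"], b["end"]) - max(a["start"], b["start"]))

def label_segments(accurate, diarized):
    """Attach a speaker label to each accurate-text segment.

    accurate: segments with good text but no speaker labels
    diarized: segments with rough text but speaker labels
    Both are lists of dicts with "start" and "end" times in seconds.
    """
    labeled = []
    for seg in accurate:
        # Pick the diarized segment that overlaps this one the most in time.
        best = max(diarized, key=lambda d: overlap(seg, d), default=None)
        speaker = best["speaker"] if best and overlap(seg, best) > 0 else "UNKNOWN"
        labeled.append({**seg, "speaker": speaker})
    return labeled

# Hypothetical example data, just to show the shape of the inputs.
accurate = [
    {"start": 0.0, "end": 4.2, "text": "Have you ever edited the ums out of a podcast?"},
    {"start": 4.2, "end": 6.0, "text": "I haven't, actually."},
]
diarized = [
    {"start": 0.0, "end": 4.0, "speaker": "HOLLY", "text": "have you ever edit the um"},
    {"start": 4.0, "end": 6.5, "speaker": "AARON", "text": "i havent actually"},
]
for seg in label_segments(accurate, diarized):
    print(seg["speaker"], "-", seg["text"])
```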
AARON: Okay, well, I feel like there are a lot of people that I interact with casually who have these rich online profiles that I somehow don't know about, or something. I mean, I could know about it, but I just never clicked their Substack link for some reason. So I will be listening to your casual...
HOLLY: Actually, in the 15 minutes you gave us when we pushed back the podcast, I found something, like a practice talk I had given, and put it up there. So that's audio that I just... cool. But that's for paid subscribers. I like to give them a little something.
AARON: No, I saw that. I did two minutes of research or whatever. Cool.
HOLLY: Yeah. It's a little weird. I've always had that blog as very low effort, just whenever I feel like it, and that's why it's lasted so long. But I did start doing paid, and I do feel more responsibility to the paid subscribers now.
AARON: Yeah. Kind of the reason that I started this is because, whenever I... I don't know, it's very hard for me to write a low effort blog post. Even the lowest effort one still takes, at the end of the day, like several hours. I'll think, oh, I'm going to bang it out in half an hour, and no matter what, my brain doesn't let me do that.
HOLLY: That usually takes four hours. Yeah, I have, like, a four-hour kind and an eight-hour kind.
AARON: Wow. I feel like some people... apparently Scott Alexander said that, oh yeah, he just writes as fast as he talks and he just clicks send or whatever. It's like, oh, if I could do that...
HOLLY: I would have written... it's in those paragraphs, it's crazy. Yeah, you see that when you see him in person. I've never met him, I've never talked to him, but I've been to meetups where he was, and I'm at this conference, or not there right now, this week that he's supposed to be at.
AARON: Oh, Manifest?
HOLLY: Yeah.
AARON: Nice. Okay.
HOLLY: Cool. Lighthaven, they're calling it now. It looks amazing. It was the Rose Garden, and now...
AARON: I, like, vaguely noticed. I think I've been to Berkeley twice. Definitely once. This is weird.
HOLLY: Berkeley is awesome. Yeah.
AARON: I feel like I sort of decided consciously not to try to, or maybe not decided forever, but had a period of time where I was like, oh, I should move there, or we'll move there. But then I was like, I think being around other EAs and rationalists in high concentration activates my status brain or something, so it's less personally bad to be here. And DC is kind of sus, in that I was born here and also went to college here, but maybe it is also a good place to live. But I feel like maybe that's actually just true.
HOLLY: I think it's true. I mean, I always liked the DC EAs. I think they're very sane.
AARON: I think both clusters should be a little more like the other one.
HOLLY: I think so. I love Berkeley, and I think I'm really enjoying it because I'm older than you. I think if you have your own personality before coming to Berkeley, that's great, but you can easily get swept up. It's like Disneyland: all the people I knew on the internet, there's a physical version of them here, and it's all in walking distance. That's pretty cool. Especially since during the pandemic I was not around almost any friends, and now I see friends every day and I get to do cool stuff. And the culture is som...
Well, the apologies and the expected changes to Unity's pricing policies have arrived, and I couldn't not talk about them. The Open Source Initiative closes out the episode. #unity #gaming #gamedev #opensource #osi === Podcast Anchor - https://anchor.fm/edodusi Spotify - https://open.spotify.com/show/4B2I1RTHTS5YkbCYfLCveU Apple Podcasts - https://podcasts.apple.com/us/podcast/buongiorno-da-edo/id1641061765 Google Podcasts - https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy9iMWJmNDhhMC9wb2RjYXN0L3Jzcw Amazon Music - https://music.amazon.it/podcasts/5f724c1e-f318-4c40-9c1b-34abfe2c9911/buongiorno-da-edo = RSS - https://anchor.fm/s/b1bf48a0/podcast/rss --- Send in a voice message: https://podcasters.spotify.com/pod/show/edodusi/message
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protest against Meta's irreversible proliferation (Sept 29, San Francisco), published by Holly Elmore on September 20, 2023 on The Effective Altruism Forum. Meta's frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before it releases even more advanced models - which will have more dangerous capabilities - we call on Meta to take responsible release seriously and stop irreversible proliferation. Join us for a peaceful protest at Meta's office in San Francisco at 250 Howard St at 4pm PT. RSVP on Facebook or through this form. Let's send a message to Meta: Stop irreversible proliferation of model weights. Meta's models are not safe if anyone can remove the safety measures. Take AI risks seriously. Take responsibility for harms caused by your AIs. Stop free-riding on the goodwill of the open-source community. Llama models are not and have never been open source, says the Open Source Initiative. All you need to bring is yourself and a sign, if you want to make your own. I will lead a trip to SF from Berkeley, but anyone can join at the location. We will have a sign-making party before the demonstration -- stay tuned for details. We'll go out for drinks afterward. I like the irony. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Guests Nick Vidal | Masae Shida Panelist Richard Littauer Show Notes Hello and welcome to Sustain! On this episode, Richard is roaming the halls at FOSS Backstage 2023 this week, and you just never know who you're going to bump into. He grabs Nick Vidal, the new Community Manager for ClearlyDefined, which is an open-source project that aims to bring clarity to licensing information for open-source projects. Nick is trying to reach out to different communities to work together, such as OpenSSF and Open Research Toolkit. Nick and Richard discuss the licensing issues related to AI, particularly regarding chatbot models like ChatGPT. They talk about copyright issues related to gathering data, images, and texts from the internet and feeding them into proprietary models. His next guest is Masae Shida, a Senior Program Manager at VMware. Masae is in Berlin to talk about why Asian participation in open source is not as significant as it should be. She talks about how Diversity, Equity, and Inclusion (DEI) are important for companies as they lead to higher productivity and innovation. However, in open source, she has noticed that the number of Asian participants is much lower than expected, even though there are large populations in Asian countries like India and China. Masae aims to identify the barriers preventing Asian participation in open source and find ways to overcome them. Download this episode now! Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) Richard Littauer Twitter (https://twitter.com/richlitt?lang=en) FOSS Backstage 2023 (https://foss-backstage.de/) Nick Vidal Twitter (https://twitter.com/nickvidal?lang=en) ClearlyDefined (https://clearlydefined.io/about) Open Source Initiative (https://opensource.org/) Open Source Initiative Mastodon (https://mastodon.social/@osi@opensource.org) Masae Shida LinkedIn (https://uk.linkedin.com/in/masae-shida) VMware (https://www.vmware.com/) VMware Twitter (https://twitter.com/VMware?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guests: Masae Shida and Nick Vidal.
Guest Andy Piper | Ana Meta Dolinar | Gemma Penson Panelist Richard Littauer Show Notes Hello and welcome to Sustain! The podcast where we talk about sustaining open source for the long haul. Richard is at the State of Open Con 2023 UK in London, and he's excited to have his first ever in-person podcasts. Today, he has three guests joining him. His first guest is Andy Piper, who volunteered to come here and represent the Open Source Initiative. We'll hear more about how he's helping the OSI today, what changes he has seen with the OSI over the past decade, and his thoughts on the Cyber Resilience Act. His next two guests are Ana Meta Dolinar and Gemma Penson, who are both university students at Cambridge. They had a stall upstairs at the event for Women@CL, which is the initiative promoting inclusivity and community of women who do computer science, either as students or researchers at Cambridge. Today, we'll learn all about Women@CL, how they're helping to fix the huge gender imbalance when it comes to open source and computer science, and their thoughts on the “leaky pipeline” metaphor. Download this episode now to hear much more! [00:00:46] Andy tells us why he's at the State of Open Con helping the OSI. [00:04:01] We hear Andy's perspective on how you can benefit from the OSD by being an enthusiast and what it gives you by having the OSD there. [00:06:25] We learn what Andy is currently doing with open source and being a member of the Python Software Foundation. [00:09:44] Since Andy's been a member for over ten years, he tells us what he has seen that has changed significantly in the past decade with the OSI. [00:11:26] Andy shares his first experience at FOSDEM 2023. [00:12:59] What are Andy's thoughts on the Cyber Resilience Act? He also mentions a website and blog to check out by Simon Phipps. [00:15:41] Find out where you can follow Andy and the OSI on the web. [00:17:56] There is a huge gender imbalance when it comes to open source and computer science, and Ana and Gemma share the statistics with us as well as what activities they do to help fix that imbalance. [00:19:14] Ana explains more about the Oxford Women in Computing Society. She mentions lobbying and explains how it requires a lot of background work. [00:21:20] We hear more about the Oxbridge Women in Computer Science Conference that takes place April 2023. [00:24:45] Tech has a higher representation of neurodivergent participants, and Ana and Gemma talk about how visible this population is at universities and in computer science programs and how supportive the university is. [00:27:19] We hear Gemma and Ana's thoughts on the “leaky pipeline” metaphor and why it may or may not work. [00:32:00] The last question is on the topic of governance and how they plan to keep the program existing and onboard new women to this important cause. They tell us about the initiative at Cambridge, and a Big Sister, Little Sister program they have. [00:35:28] Ana and Gemma explain the mentorship from the graduate school, postgraduates, assistant lecturers, etc. [00:36:25] If you're a company that wants to sponsor Women@CL, find out where you can reach out to them and where to get in touch with Ana and Gemma on the web.
Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) Richard Littauer Twitter (https://twitter.com/richlitt?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Andy Piper Website (https://andypiper.me/) Andy Piper Mastodon (https://mastodon.social/@andypiper) Open Source Initiative (https://opensource.org/) Cyber Resilience Act (https://digital-strategy.ec.europa.eu/en/library/cyber-resilience-act) The ultimate list of reactions to the Cyber Resilience Act by Simon Phipps (Voices of Open Source) (https://blog.opensource.org/the-ultimate-list-of-reactions-to-the-cyber-resilience-act/) Ana Meta Dolinar email (mailto:amd219@cam.ac.uk) Gemma Penson email (mailto:gp500@cam.ac.uk) Women@CL-Department of Computer Science and Technology-University of Cambridge (https://www.cst.cam.ac.uk/women) Women@CL Twitter (https://twitter.com/womencl1?lang=en) Women@CL Facebook (https://www.facebook.com/womenatCL/) Women @CL Instagram (https://www.instagram.com/womenatcl.cambridge/) Oxford Women in Computing Society (https://www.oxwocs.com/) Oxbridge Women in Computer Science Conference (https://www.oxbridge2023.com/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guests: Ana Meta Dolinar, Andy Piper, and Gemma Penson.
In this podcast episode, we discuss the driving forces for organizations to increase their open-source software usage and the biggest open-source trends today. Here is our conversation with Javier Perez, chief Open Source evangelist and senior director of product management at Perforce Software, who talks about the findings from the 2023 State of Open Source report by OpenLogic by Perforce and the Open Source Initiative.
Watch on YouTube
About the show: Sponsored by Microsoft for Startups Founders Hub.
Connect with the hosts: Michael: @mkennedy@fosstodon.org | Brian: @brianokken@fosstodon.org (may be a minute or two late) | Show: @pythonbytes@fosstodon.org | Special guest: Pamela Fox - @pamelafox@fosstodon.org
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.
Michael #1: camply - A tool to find campsites at sold out campgrounds through sites like recreation.gov and Yellowstone. Finding reservations at sold out campgrounds can be tough. Searches the APIs of booking services like recreation.gov (which indexes thousands of campgrounds across the USA) to continuously check for cancellations and availabilities to pop up. Once a campsite becomes available, camply sends you a notification to book your spot! Want to camp in a tower in California? camply campgrounds --search "Fire Lookout Towers" --state CA
Brian #2: hatch-fancy-pypi-readme - Your ✨Fancy✨ Project Deserves a ✨Fancy✨ PyPI Readme!
Hello and welcome to CHAOSScast Community podcast, where we share use cases and experiences with measuring open source community health. Elevating conversations about metrics, analytics, and software from the Community Health Analytics Open Source Software, or CHAOSS Project for short, to wherever you like to listen. We are super excited to have joining us Maurice Hendriks, who works for the Municipality of Amsterdam as a policy maker, specifically on the topic of Open Source. He's here to share his journey into open source and to talk more about his views on open source. Download this episode now to find out much more, and don't forget to subscribe for free to this podcast on your favorite podcast app and share this podcast with your friends and colleagues! [00:01:56] Maurice shares his journey into open source and how he got into the field. [00:05:23] Sean wonders if misunderstandings affect the work that Maurice is trying to accomplish, and Maurice talks about the laws in the Netherlands and how open source is essential for the morality of the city. [00:09:36] From the government perspective Maurice talked about, he explains different perspectives on what a healthy open source project or community is. [00:12:24] Are there other governments, other municipalities in the Netherlands, that are using the open source software built in Amsterdam? [00:17:28] Maurice explains how policy would potentially influence this social system. [00:21:16] We find out the difference between open sourcing something and having something publicly available. [00:23:39] What bothers Maurice as a policy maker? [00:26:15] Sean brings up the point that if open sourcing software is a social good, there needs to be a way for it to be sustainable, so it's not just Maurice maintaining a particular project, and he wonders how Maurice balances that. [00:29:08] We hear the main lesson people should get from Maurice's vision. [00:30:05] Find out where you can follow Maurice and his work online. Value Adds (Picks) of the week: [00:32:53] Georg's pick is taking a family trip to Europe. [00:33:30] Sean's pick is witnessing the number of people in this country who are actively engaged in fixing the problems with the recent rulings by our Supreme Court. [00:34:05] Maurice's pick is his wife finishing her book, Akal: About life in the Dutch East Indies by Lilja Anna Perdijk. Panelists: Georg Link Sean Goggins Guest: Maurice Hendriks Sponsor: SustainOSS (https://sustainoss.org/) Quotes: [00:04:27] “If there is no power, there is no software.” [00:07:11] “My mission is to use open source software to get transparency into Government information and technology.” [00:29:08] “The main lesson from my vision: Community built software is the cherry on the cake.
You first need to get layers and components in place or you don't get a cake at all.” Links: CHAOSS (https://chaoss.community/) CHAOSS Project Twitter (https://twitter.com/chaossproj?lang=en) CHAOSScast Podcast (https://podcast.chaoss.community/) podcast@chaoss.community (mailto:podcast@chaoss.community) Ford Foundation (https://www.fordfoundation.org/) Georg Link Twitter (https://twitter.com/georglink) Sean Goggins Twitter (https://twitter.com/sociallycompute) The Universal Permissive License (UPL), Version 1.0 (Open Source Initiative) (https://opensource.org/licenses/UPL) European Union Public License, Version 1.2 (EUPL-1.2) (https://opensource.org/licenses/EUPL-1.2) Akal-Overleven in Nederlands-Indië (Dutch) (https://lilja.nl/) Akal-About life in the Dutch East Indies (English) (https://lilja.nl/) OpenNMT (https://opennmt.net/) Special Guest: Maurice Hendriks.
Guest Tracy Hinds Panelists Richard Littauer | Ben Nickolls Show Notes Hello and welcome to Sustain! The podcast where we talk about sustaining open source for the long haul. We are very excited about our guest today, Tracy Hinds, who's currently the CEO and Founder of Crow & Pitcher and serves as CFO and board director at the Open Source Initiative. She's also a long-time open source practitioner, maker, creator, and a powerful woman of glory, and has founded tons of different communities. Tracy is also a non-profit leader, a career transitioner, and a forever conflict manager. Today, we'll learn more about Crow & Pitcher and the Community Committee (CommComm) in the Node.js Foundation. Also, we'll hear Tracy's thoughts on what she thinks the role is for Product Managers, Program Managers, and Project Managers in open source. Go ahead and download this episode to learn more! [00:02:05] Tracy tells us more about her journey to becoming the Founder of Crow & Pitcher. [00:04:25] Since Tracy was instrumental in setting up the Community Committee (CommComm) in the Node.js community, she tells us more about it. [00:09:25] Tracy mentions how having an understanding board is essential to the health of the organization. [00:12:51] We hear Tracy's thoughts on how she feels about the role for Product Managers, Program Managers, and Project Managers in open source. [00:16:19] Ben wonders if there was any work within CommComm to try and create that separation, and whether that is something Tracy thinks is more of a challenge within open source. Tracy explains the criticism about core contributors not being open to input. [00:19:58] We hear Tracy's thoughts on what she thinks is the best way to talk to someone to let them know you want to be in a Project Manager role or Product Management role. [00:23:56] Ben wonders what people who are working in a code-centric open source project can do to make themselves and their work more open and amenable to people who come in a more product management or project management capacity. [00:27:24] Find out the difference between a Product Manager, Project Manager, and Program Manager. [00:30:47] Tracy tells us where you can follow her online. Quotes [00:08:12] “Everyone gets broken down by the amount of work and ambition in open source.” [00:11:07] “I kind of love when things get deprecated because one, it means people are paying attention enough to notice you don't need these things anymore, and it means that things are still changing, and I think that's an important sign in a project.” [00:14:06] “I think it's really interesting to think of many open source projects as products.” [00:18:30] “Every project needs documentation and people being compensated for documentation.” [00:20:40] (On how to get a role as PM in OSS): “It helps to clarify a problem.” [00:21:13] “They need to build trust.” [00:25:02] “A lot of people have open source code projects, but not open collaboration.” [00:28:58] “You're the goalie.” Spotlight [00:31:42] Ben's spotlight is making a Swamp Cooler. [00:32:12] Richard's spotlight is Bryan Hughes. [00:33:00] Tracy's spotlight is how her exposure to JSConfs brought her to where she is today.
Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) Richard Littauer Twitter (https://twitter.com/richlitt?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Ben Nickolls Twitter (https://twitter.com/BenJam?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Tracy Hinds LinkedIn (https://www.linkedin.com/in/tracyhinds) Tracy Hinds Twitter (https://twitter.com/hackygolucky?lang=en) Crow & Pitcher (https://crow-and-pitcher.com/crow-%26-pitcher) Crow & Pitcher Twitter (https://twitter.com/crowandpitcher) Node.js Community Committee (CommComm) (https://nodejs.org/en/about/community/) Sustain Open Source Design Podcast (https://sosdesign.sustainoss.org/) Let's Talk Docs Podcast (https://ltd-podcast.sustainoss.org/) Swamp Cooler (https://www.nytimes.com/wirecutter/blog/do-swamp-coolers-work/) Bryan Hughes Twitter (https://twitter.com/nebrius?lang=en) JSConf (https://jsconf.com/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Tracy Hinds.
What are the copyright implications for AI? Can artwork created by a machine be registered for copyright? These are some of the questions we answer in this episode of Deep Dive: AI, a podcast series from the Open Source Initiative that explores how Artificial Intelligence impacts the world around us. Here to help us unravel the complexities of today's topic is Pamela Chestek, an Open Source lawyer, Chair of the OSI License Committee, and OSI Board member. She is an accomplished business attorney with vast experience in free and open source software, trademark law, and copyright law, as well as in advertising, marketing, licensing, and commercial contracting. Pamela is also the author of various scholarly articles and writes a blog focused on analyzing existing intellectual property case law. She is a respected authority on the subject and has given talks concerning Open Source software, copyright, and trademark matters. In today's conversation, we learn the basics of copyright law and delve into its complexities regarding open source material. We also talk about the line between human and machine creations, whether machine learning software can be registered for copyright, how companies monetize Open Source software, the concern of copyright infringement for machine learning datasets, and why understanding copyright is essential for businesses. We also learn about some amazing AI technology that is causing a stir in the design world and hear some real-world examples of copyright law in the technology space. Tune in today to get insider knowledge with expert Pamela Chestek! Full transcript. Key Points From This Episode: Introduction and a brief background about today's guest, Pamela Chestek. Complexities regarding copyright for materials created by machines. Interesting examples of copyright rejection for non-human created materials. An outline of the standards required to register material for copyright. Hear a statement still used as a standard today made by the US copyright office in 1966. The fine line between what a human being is doing versus what the machine is doing. Learn about some remarkable technology creating beautiful artwork. She explains the complexities of copyright for art created by software or machines. We find out if machine learning software like SpamAssassin can be registered for copyright. Reasons why working hard, time, and resources do not meet copyright requirements. A discussion around the complexities of copyright concerning Open Source software. Pamela untangles the nuance of copyright when using datasets for machine learning. Common issues experienced by her clients who are using machine learning. Whether AI will be a force to drive positive or negative change in the future. A rundown of some real-world applications of AI. Why understanding copyright law is essential to a company's business model. How companies make money by creating Open Source software. The move by big social media companies to make their algorithm Open Source. A final takeaway message that Pamela has for listeners. Links Mentioned in Today's Episode: Pamela Chestek on LinkedIn Pamela Chestek on Twitter Chestek Legal Pamela Chestek: Property intangible Blog Debian DALL·E Hacker News SpamAssassin European Pirate Party Open Source Definition Link Free Software Foundation Red Hat Jason Shaw, Audionautix Credits Special thanks to the volunteer producer, Nicole Martinelli.
JavaOne 2022 Speaker Preview
In this conversation Oracle's Jim Grisanzio talks with Java developer and JavaOne 2022 speaker Bruno Souza from Brazil. Bruno is a Java Champion, he's been a board member of the Open Source Initiative, he's on the Executive Committee of the Java Community Process, and he leads the SouJava community in Brazil. Bruno has been building Java communities for decades, and in recent years he's been helping Java developers build their careers. That's the topic of this podcast and also Bruno's session at JavaOne in October in Las Vegas.
JavaOne 2022 from October 17-20 in Las Vegas | JavaOne 2022: Registration and Sessions | JavaOne Update 1 | JavaOne Update 2 | Bruno Souza, Brazilian JavaMan @brjavaman
Java Development and Community: OpenJDK | Inside Java | Dev.Java | @java on Twitter | Java on YouTube
Duke's Corner Podcast Host: Jim Grisanzio, Oracle Java Developer Relations, @jimgris
Welcome to Deep Dive: AI, an online event from the Open Source Initiative. We'll be exploring how Artificial Intelligence impacts open source software, from developers to businesses to the rest of us. Episode notes: An introduction to Deep Dive: AI, an event in three parts organized by the Open Source Initiative. With AI systems being so complex, concepts like “program” or “source code” in the Open Source Definition are challenged in new and surprising ways. The topic of AI is huge. For Open Source Initiative's Deep Dive, we'll be looking at how AI could affect the future of Open Source. This trailer episode is produced by the Open Source Initiative with the help of Nicole Martinelli. Music by Jason Shaw on Audionautix.com, Creative Commons BY 4.0 International license. Deep Dive: AI is made possible by the generous support of OSI individual members and sponsors. Donate or become a member of the OSI today.
AUSTIN, TEX. — In one of the most compelling keynote addresses at The Linux Foundation's Open Source Summit North America, held here in June, Aeva Black, a veteran of the open source community, said that a friend of theirs recently commented, “I feel like all the trans women I know on Twitter are software developers.” There's a reason for that, Black said. It's called “survivor bias”: The transgender software developers the friend knows on Twitter are only a small sample of the trans kids who survived into adulthood, or didn't get pushed out of mainstream society. “It's a pretty common trope, at least on the internet: transwomen are all software developers, we all have high-paying jobs, we're on TikTok or on Twitter. And that's really a sampling bias, the transgender people who have the privilege to be loud,” said Black, in this On the Road episode of The New Stack Makers podcast. Black, whose keynote alerted the conference attendees to how the rights of transgender individuals are under attack around the United States, and the role tech can play, currently works in Microsoft Azure's Office of the Chief Technology Officer and holds seats on the board of the Open Source Initiative and on the OpenSSF's Technical Advisory Council. In this episode of Makers, they unpacked the keynote's themes with Heather Joslyn, TNS features editor. Citing Pew Research Center data released in June, Black noted that 5% of Americans under 30 identify as transgender or nonbinary, roughly the same percentage that have red hair. The Pew study, and the latest "Stack Overflow Developer Survey," reveal that younger people are more likely than their elders to claim a transgender or nonbinary identity. Failure to accept these people, Black said, could have an impact on open source work, and tech work more generally. “If you're managing a project, and you want to attract younger developers who could then pick it up and carry on the work over time, you need to make sure that you're welcoming of all younger developers,” they said.
Rethinking Codes of Conduct
Codes of Conduct, must-haves for meetups, conferences and open source projects over the past few years, are too often thought of as tools for punishment, Black said in their keynote. For Makers, they advocated for thinking of those codes as tools for community stewardship. As a former member of the Kubernetes Code of Conduct committee, Black pointed out that “80% of what we did … while I served wasn't punishing people. It was stepping in when there was conflict, when people, you know, stepped on someone else's toe, accidentally offended somebody. Like, ‘OK, hang on, let's sort this out.' So it was much more stewardship, incident response mediation.” LGBT people are currently the targets of new legislation in several U.S. states. The tech world and its community leaders should protect community members who may be vulnerable in this new political climate, Black said. “The culture of a community is determined by the worst behavior its leaders tolerate. We have to understand, and it's often difficult to do so, how our actions impact those who have less privilege than us, the most marginalized in our community,” they said. For example, “When thinking of where to host a conference, think about the people in one's community, even those who may be new contributors. Will they be safe in that location?” Listen to the episode to hear more of The New Stack's conversation with Black.
Guest Stefano Maffulli Panelists Richard Littauer | Justin Dorfman Show Notes Hello and welcome to Sustain! The podcast where we talk about sustaining open source for the long haul. Today, we have joining us Stefano Maffulli, who's the new Executive Director for the Open Source Initiative (OSI). Our conversation centers around Stefano taking us through what OSI can do, and we learn more about how it's changing. He also tells us about the biggest debate that's happening in the community, a podcast series they are releasing called Deep Dive: AI, and some things he's most excited about happening in the next few months with the OSI. Go ahead and download this episode now to find out much more! [00:02:03] Stefano fills us in on his background and how he got into his role at the OSI. [00:04:49] When coming into the ED role, Stefano explains what he was most excited about doing. [00:07:21] Stefano shares his ideas and what he's started since being at the OSI. [00:09:13] We hear Stefano's thoughts on dual licensing being a part of the open source ecosystem that isn't negative, on ethical source licenses and big tent open source, and on how he sees the OSD changing. [00:11:27] What are the biggest debates that are happening in the community? [00:17:35] A podcast series is mentioned by Stefano, and Justin wonders if this is a new way to diversify the revenue that's coming in and if there are any other initiatives Stefano has that are going to increase that. [00:22:33] Richard wonders how Stefano expects to mitigate corporate interest ruling OSI's agenda. [00:29:33] We learn how Stefano is hoping to involve people from affiliates who don't have time to read all the legal stuff in his mailing list. [00:31:42] Stefano tells us what he's most excited about happening in the next few months with the OSI. [00:34:07] Find out where you can follow Stefano on the web and become a member of the OSI. Quotes [00:09:26] "I do think that technology is not neutral." [00:09:53] "We do need to think about how the software that we've created impacts the lives of people. And there's no easy answer." [00:15:28] "Artificial Intelligence is a new thing. It's changing the boundary between data and software." Spotlight [00:34:58] Justin's spotlight is No Secrets! [00:35:18] Richard's spotlight is Deb Nicholson. [00:36:12] Stefano's spotlight is Bruce Perens and IndieWeb.
Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) Richard Littauer Twitter (https://twitter.com/richlitt) Justin Dorfman Twitter (https://twitter.com/jdorfman) Stefano Maffulli Twitter (https://twitter.com/smaffulli) Stefano Maffulli LinkedIn (https://www.linkedin.com/in/maffulli) Stefano Maffulli Blog (http://maffulli.net/) Open Source Initiative (https://opensource.org/) OpenAI (https://openai.com/) Sustain Podcast-Episode 75: Deb Nicholson on the OSI, the future of open source and SeaGL (https://podcast.sustainoss.org/75) Sustain Podcast-Episode 37: AN Open Source History Lesson & More with Patrick Masson (https://podcast.sustainoss.org/37) Sustain Podcast-Episode 23: Why Companies Should Invest Money in Open Source with Josh Simmons (https://podcast.sustainoss.org/23) Sustain Podcast-Episode 110: Impactful Open Source: Teaching Open Source Technology Managers at Brandeis, with Ken Udas and Georg Link (https://podcast.sustainoss.org/110) Become an OSI Affiliate (https://opensource.org/affiliates/about) Open Source Initiative - Sign Up as a Member (https://opensource.org/donate) No Secrets! (https://sourcegraph-community.github.io/no-secrets/) Deb Nicholson Twitter (https://twitter.com/baconandcoconut) Bruce Perens Twitter (https://twitter.com/BrucePerens) IndieWeb (https://indieweb.org/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Stefano Maffulli.