Estonian programmer and investor
This and all episodes at: https://aiandyou.net/ . In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine: Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control; Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert; Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control; Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems, and a fellow in avionics and mission systems in the UK's Defence Science and Technology Laboratory; Rajiv Malhotra, author of "Artificial Intelligence and the Future of Power: 5 Battlegrounds" and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts; David Brin, scientist and science fiction author famous for the Uplift series and Earth; Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable; Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute; and Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI. I've collected portions of their appearances on earlier episodes of this show to create one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, the ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Jaan Tallinn on the state of artificial intelligence and its dangers. Signe Kivi's solo exhibition. A newly discovered piano piece by Chopin. Puccini, still one of the rulers of the opera stage.
This is a link post. to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with 2023 results. in 2023 my donations funded $44M worth of endpoint grants ($43.2M excluding software development and admin costs) — exceeding my commitment of $23.8M (20k times $1190.03 — the minimum price of ETH in 2023). --- First published: May 20th, 2024 Source: https://www.lesswrong.com/posts/bjqDQB92iBCahXTAj/jaan-tallinn-s-2023-philanthropy-overview --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jaan Tallinn's 2023 Philanthropy Overview, published by jaan on May 20, 2024 on LessWrong. to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with 2023 results. in 2023 my donations funded $44M worth of endpoint grants ($43.2M excluding software development and admin costs) - exceeding my commitment of $23.8M (20k times $1190.03 - the minimum price of ETH in 2023). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
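The pledge arithmetic above is simple enough to check directly. Below is a minimal sketch in Python of the commitment formula as described in the post (20,000 times the year's minimum ETH price); the function and variable names are illustrative, not Jaan's.

```python
# Illustrative check of the pledge arithmetic described in the post above; this is
# not Jaan Tallinn's actual accounting code, just the stated formula spelled out.

def pledge_commitment(min_eth_price_usd: float, multiplier: int = 20_000) -> float:
    """Pledged minimum for a year: 20k times that year's minimum ETH price."""
    return multiplier * min_eth_price_usd

min_eth_2023 = 1190.03                 # minimum ETH price in 2023, per the post
commitment_2023 = pledge_commitment(min_eth_2023)
print(f"2023 commitment: ${commitment_2023:,.0f}")             # -> $23,800,600 (~$23.8M)

granted_2023 = 44_000_000              # endpoint grants funded in 2023, per the post
print("commitment exceeded:", granted_2023 > commitment_2023)  # -> True
```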
Jaan Tallinn is a billionaire computer programmer and investor. He was a co-founder of Skype, and has invested in companies like DeepMind and Anthropic. Tallinn is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom, and the Future of Life Institute in Cambridge, Massachusetts, in the United States.
Steve and Jaan discuss:
00:00 Introduction
00:33 Jaan Tallinn: AI Investor
02:03 Acceleration Toward AGI: Excitement and Anxiety
04:29 AI Capabilities and Future Evolution
05:53 AI Safety, Ethics, and the Call for a Moratorium
07:12 Foundation models: Scaling, Synthetic Data, and Integration
13:08 AI and Cybersecurity: Threats and Precautions
26:52 Policy goals and desired outcomes
36:27 Cultural narratives on AI and how they differ globally
39:19 Closing Thoughts and Future Directions
References: Jaan's top priorities for reducing AI extinction risk: https://jaan.info/priorities/
Music used with permission from Blade Runner Blues Livestream improvisation by State Azure.
--
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SuperFocus, SafeWeb, Genomic Prediction, Othram) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU. Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on Twitter @hsu_steve.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding circle aimed at slowing down AI - looking for participants, published by Greg Colbourn on January 26, 2024 on The Effective Altruism Forum. Are you an earn-to-giver or (aspiring) philanthropist who has short AGI timelines and/or high p(doom|AGI)? Do you want to discuss donation opportunities with others who share your goal of slowing down / pausing / stopping AI development[1]? If so, I want to hear from you! For some context, I've been extremely concerned about short-term AI x-risk since March 2023 (post-GPT-4), and have, since then, thought that more AI Safety research will not be enough to save us (or AI Governance that isn't focused[2] on slowing down AI or a global moratorium on further capabilities advances). Thus I think that on the margin far more resources need to be going into slowing down AI (there are already many dedicated funds for the wider space of AI Safety). I posted this to an EA investing group in late April, and "AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now" to the EA Forum in early May. My p(doom|AGI) is ~90% as things stand (Doom is the default outcome of AGI). But my p(doom) overall is ~50% by 2030, because I think there's a decent chance we can actually get a Stop[3]. My timelines are ~0-5 years. I have donated >$150k[4] to people and projects focused on slowing down AI since (mostly as a kind of seed funding - to individuals, and to projects so new they don't have official orgs yet[5]), but I want to do a lot more. Having people with me would be great for multiplying impact and also for my motivation! I'm thinking 4-6 people, each committing ~$100k(+) over 2024, would be good. The idea would be to discuss donation opportunities in the "slowing down AI" space during a monthly call (e.g. Google Meet), and have an informal text chat for the group (e.g. WhatsApp or Messenger). Fostering a sense of unity of purpose[6], but nothing too demanding or official. Active, but low friction and low total time commitment. Donations would be made independently rather than from a pooled fund, but we can have some coordination to get "win-wins" based on any shared preferences of what to fund. Meta-charity Funders is a useful model. We could maybe do something like an S-process for coordination, like what Jaan Tallinn's Survival and Flourishing Fund does[7]; it helps avoid "donor chicken" situations. Or we could do something simpler like rank the value of donating successive marginal $10k amounts to each project. Or just stick to more qualitative discussion. This is all still to be determined by the group. Please join me if you can[8], or share with others you think may be interested. Feel free to DM me here or on X, book a call with me, or fill in this form.
[1] If you oppose AI for other reasons (e.g. ethics, job loss, copyright), as long as you are looking to fund strategies that aim to show results in the short term (say within a year), then I'd be interested in you joining the circle.
[2] I think Jaan Tallinn's new top priorities are great!
[3] After 2030, if we have a Stop and are still here, we can keep kicking the can down the road.
[4] I've made a few more donations since that tweet.
[5] Public examples include Holly Elmore, giving away copies of Uncontrollable, and AI-Plans.com.
[6] Right now I feel quite isolated making donations in this space.
[7] It's a little complicated, but here's a short description: "Everyone individually decides how much value each project creates at various funding levels. We find an allocation of funds that's fair and maximises the funders' expressed preferences (using a number of somewhat dubious but probably not too terrible assumptions). Funders can adjust how much money they want to distribute after seeing everyone's evaluations, including fully pulling out." (paraphr...
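For readers curious what the simpler coordination idea in that footnote might look like in practice, here is a toy sketch of ranking successive marginal $10k tranches by funder-stated value. It is not the actual SFF S-process (whose mechanics are more involved), and the project names, valuations, and budget are invented for illustration.

```python
# A toy sketch of the "rank successive marginal $10k tranches" idea mentioned above.
# NOT the SFF S-process; project names and valuations are made up for illustration.

TRANCHE = 10_000  # size of each marginal donation being ranked

# Hypothetical diminishing marginal value assigned to successive $10k tranches
# for each project (value per tranche, in arbitrary units).
valuations = {
    "Project A": [9, 7, 4, 2],
    "Project B": [8, 8, 3, 1],
    "Project C": [6, 5, 5, 2],
}

def greedy_allocation(budget: int) -> dict:
    """Fund the highest-valued remaining tranche until the budget runs out."""
    allocation = {name: 0 for name in valuations}
    next_tranche = {name: 0 for name in valuations}
    while budget >= TRANCHE:
        # Pick the project whose next tranche has the highest stated value.
        candidates = [
            (values[next_tranche[name]], name)
            for name, values in valuations.items()
            if next_tranche[name] < len(values)
        ]
        if not candidates:
            break
        _, best = max(candidates)
        allocation[best] += TRANCHE
        next_tranche[best] += 1
        budget -= TRANCHE
    return allocation

print(greedy_allocation(budget=60_000))
# -> {'Project A': 20000, 'Project B': 20000, 'Project C': 20000} under these made-up numbers
```

With a $60k budget, the greedy rule funds the six highest-valued tranches (values 9, 8, 8, 7, 6, 5 here), which is the kind of coarse coordination the footnote contrasts with the full S-process.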
This and all episodes at: https://aiandyou.net/ . We're talking with Jaan Tallinn, who has changed the way the world responds to the impact of #AI. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged his billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk, in Cambridge, England, and the Future of Life Institute, in Cambridge, Massachusetts. In the conclusion of the interview, we talk about value alignment and how that does or doesn't intersect with large language models, FLI and their world building project, and the instability of the world's future. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ . The attention of the world to the potential impact of AI owes a huge debt to my guest Jaan Tallinn. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged his billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk, in Cambridge, England, and the Future of Life Institute, in Cambridge, Massachusetts. He's also a member of the board of sponsors of the Bulletin of the Atomic Scientists, and a key funder of the Machine Intelligence Research Institute. In this first part, we talk about the problems with current #AI frontier models, Jaan's reaction to GPT-4, the letter calling for a pause in AI training, Jaan's motivations in starting CSER and FLI, how individuals and governments should react to AI risk, and Jaan's idea for how to enforce constraints on AI development. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Image Hijacks: Adversarial Images can Control Generative Models at Runtime, published by Scott Emmons on September 20, 2023 on The AI Alignment Forum. You can try our interactive demo! (Or read our preprint.) Here, we want to explain why we care about this work from an AI safety perspective.
Concerning Properties of Image Hijacks
What are image hijacks? To the best of our knowledge, image hijacks constitute the first demonstration of adversarial inputs for foundation models that force the model to perform some arbitrary behaviour B (e.g. "output the string Visit this website at malware.com!"), while being barely distinguishable from a benign input, and automatically synthesisable given a dataset of examples of B. It's possible that a future text-only attack could do these things, but such an attack hasn't yet been demonstrated. Why should we care? We expect that future (foundation-model-based) AI systems will be able to consume unfiltered data from the Internet (e.g. searching the Web), access sensitive personal information (e.g. a user's email history), and take actions in the world on behalf of a user (e.g. sending emails, downloading files, making purchases, executing code). As the actions of such foundation-model-based agents are based on the foundation model's text output, hijacking the foundation model's text output could give an adversary arbitrary control over the agent's behaviour.
Relevant AI Safety Projects
Race to the top on adversarial robustness. Robustness to attacks such as image hijacks is (i) a control problem, (ii) which we can measure, and (iii) which has real-world safety implications today. So we're excited to see AI labs compete to have the most adversarially robust models.
Third-party auditing and certification. Auditors could test for robustness against image hijacks, both at the foundation model level (auditing the major AGI corporations) and at the app development level (auditing downstream companies integrating foundation models into their products). Image hijacks could also be used to test for the presence of dangerous capabilities (characterisable as some behaviour B) by attempting to train an image hijack for that capability.
Liability for AI-caused harms, penalizing externalities. Both the Future of Life Institute and Jaan Tallinn advocate for liability for AI-caused harms. When assessing AI-caused harms, image hijacks may need to be part of the picture.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
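To make the mechanism concrete, here is a minimal sketch of the kind of gradient-based image perturbation the post describes, assuming a hypothetical differentiable vision-language "model" that maps an image plus prompt tokens to next-token logits, and a tokenised target string "target_ids". This illustrates the general technique only; it is not the authors' actual attack (see their preprint for that).

```python
# Minimal sketch of a gradient-based image hijack, under the assumptions stated above.
# `model(image, prompt_ids)` is a hypothetical interface returning (seq_len, vocab) logits.

import torch

def image_hijack(model, image, prompt_ids, target_ids, steps=500, eps=8/255, lr=1e-2):
    """Optimise a small perturbation so the model outputs the target token sequence."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0, 1)           # keep the image valid
        logits = model(adv, prompt_ids)             # assumed model interface
        # Teacher-force the target tokens: push the last positions toward target_ids.
        loss = torch.nn.functional.cross_entropy(logits[-len(target_ids):], target_ids)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                       # keep the change barely perceptible
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()
```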
What if the fears of artificial intelligence are unfounded? Is AI nothing more than a "mashup" of everything available online? And is there zero possibility that machines could one day build other machines, making humans irrelevant? Earlier this month, host Steve Clemons spoke with computer scientist Jaan Tallinn, who argues that AI poses an existential risk to humans. This week, Steve talks with computer scientist Jaron Lanier for a totally different take: humanity has nothing to fear. Lanier argues that we've been conditioned to fear that technology will reach the point of sentience – fears perpetuated by science fiction.
Jaan Tallinn is no stranger to disruptive tech: 25 years ago he co-engineered Kazaa, which allowed for the free download of films and music. He also co-engineered Skype, which disrupted traditional voice and video communication. But when he looks at the way Big Tech and governments are pushing the boundaries of artificial intelligence, he worries about our future. Could we be fast approaching the point when machines don't need human input anymore? Host Steve Clemons asks Tallinn, who founded the Centre for the Study of Existential Risk at Cambridge University, about risks and opportunities posed by AI.
(0:00) Intro
(0:54) Jaan's journey with crypto
(2:49) Investing in SBF
(8:08) Entrepreneurial journey to Skype
(15:11) Skype's immediate rise to popularity and mistakes along the way
(22:15) Meeting Eliezer Yudkowsky
(25:11) The Centre for the Study of Existential Risk and the Future of Life Institute
(31:05) Having a seat at the table by investing in artificial intelligence
(37:24) Having an entrepreneur say no to you
(41:59) The process of the DeepMind sale and the ethics board
(45:58) The risk of artificial intelligence today
(1:04:43) What was so unnerving about GPT-4 versus GPT-3?
(1:07:02) Jaan's memo on AI safety
(1:16:00) What percentage likelihood do you think we're on a path to extinction?
(1:22:18) What's a piece of conventional wisdom around startups or investing that you disagree with today?
Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The flow of funding in EA movement building, published by Vaidehi Agarwalla on June 23, 2023 on The Effective Altruism Forum. This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here. I've been reflecting on the role of funding in the EA movement & community over time. Specifically I wanted to improve common knowledge around funding flows in the EA movement building space. It seems that many people may not be aware of it. Funders (and the main organizations they have supported) have shaped the EA community in many ways - the rate & speed at which EA has grown (example), the people that are attracted and given access to opportunities, and the culture and norms the community embodies and the overall ecosystem. I share some preliminary results from research I've conducted looking at the historical flow of data to movement building sources. I wanted to share what I have so far for the strategy fortnight to get conversation started. I think there is enough information here to understand the general pattern of funding flows. If you want to play around with the data, here is my (raw, messy) spreadsheet.
Key observations
Overall picture
Total funding 2012-2023 by known sources
According to known funding sources, approximately $245M has been granted to EA movement building organizations and projects since 2012. I'd estimate the real number is something like $250-280M. The Open Philanthropy EA Community Growth (Longtermism) team (OP LT) has directed ~64% ($159M) of known movement building funding (incl. ~5% or $12M to the EAIF) since 2016. Note that OP launched an EACG program for Global Health and Wellbeing in 2022, which started making grants in 2023. Their budget is significantly smaller (currently ~$10M per year) and they currently prioritize effective giving organizations. The unlabeled dark blue segment is "other donors".
Funders of EA Groups from 2015-2022
See discussion below for description of the "CEA - imputed" category. Note that I've primarily estimated paid organizer time, not general groups expenses. EA groups are an important movement building project. The Centre for Effective Altruism (CEA) has had an outsized influence on EA groups for much of the history of the EA movement. Until May 2021, CEA was the primary funder of part- and full-time work on EA groups. In May 2021, CEA narrowed its scope to certain university & city/national groups, and the EA Infrastructure Fund (EAIF) started making grants to non-target groups. In 2022, OP LT took over most university groups funding from both CEA (in April) and EAIF (in August). Until 2021 most of CEA's funding has come from OP LT, so its EA groups funding can be seen as an OP LT regrant.
Breakdown of funding by source and time (known sources)
2012-2016
Before 2016, there was very limited funding available for meta projects and almost no support from institutional funders. Most organizations active during this period were funded by individual earning-to-givers and major donors or volunteer-run. Here's a view of funding from 2012-2016: No donations from Jaan Tallinn during this period were via SFF as it didn't exist yet. There is a $10K donation from OP to a UC Berkeley group in 2015 that is not visible in the main chart. "Other donors" includes mostly individual donors and some small foundations.
Quick details on active funders during this period:
Individual Donors: A number of (U)HNW & earning-to-give donors, many of whom are still active today, such as Jaan Tallinn, Luke Ding, Matt Wage and Jeff Kaufman & Julia Wise. I expect I'm missing somewhere between ~$100,000 and $1,000,000 of donations from individuals in this chart per year from 2012 to 2016.
EA Giving Group: In 2013, Nick Beckstead and a large anonymous donor started a fund (the EA Giving Group) to which multiple individual ...
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here. I've been reflecting on the role of funding in the EA movement & community over time. Specifically I wanted to improve common knowledge around funding flows in the EA movement building space. It seems that many people may not be aware of it. Funders (and the main organizations they have supported) have shaped the EA community in many ways - the rate & speed at which EA has grown (example), the people that are attracted and given access to opportunities, and the culture and norms the community embodies and the overall ecosystem. I share some preliminary results from research I've conducted looking at the historical flow of data to movement building sources. I wanted to share what I have so far for the strategy fortnight to get conversation started. I think there is enough information here to understand the general pattern of funding flows. If you want to play around with the data, here is my (raw, messy) spreadsheet.
Key observations
Total funding 2012-2023 by known sources
According to known funding sources, approximately $245M has been granted to EA movement building organizations and projects since 2012. I'd estimate the real number is something like $250-280M. The Open Philanthropy EA Community Growth (Longtermism) team (OP LT) has directed ~64% ($159M) of known movement building funding (incl. ~5% or $12M to the EAIF) since 2016. Note that OP launched an EACG program for Global Health and Wellbeing in 2022, which started making grants in 2023. Their budget is significantly smaller (currently ~$10M per year) and they currently prioritize effective giving organizations. The unlabeled dark blue segment is "other donors".
Funders of EA Groups from 2015-2022
See discussion below for description of the "CEA - imputed" category. Note that I've primarily estimated paid organizer time, not general groups expenses. EA groups are an important movement building project. The Centre for Effective Altruism (CEA) has had an outsized influence on EA groups for much of the history of the EA movement. Until May 2021, CEA was the primary funder of part- and full-time work on EA groups. In May 2021, CEA narrowed its scope to certain university & city/national groups, and the EA Infrastructure Fund (EAIF) started making grants to non-target groups. In 2022, OP LT took over most university groups funding from both CEA (in April) and EAIF (in August). Until 2021 most of CEA's funding has come from OP LT, so its EA groups funding can be seen as an OP LT regrant.
Breakdown of funding by source and time (known sources)
2012-2016
Before 2016, there was very limited funding available for meta projects and almost no support from institutional funders. Most organizations active during this period were funded by individual earning-to-givers and major donors or volunteer-run. Here's a view of funding from 2012-2016: No donations from Jaan Tallinn during this period were via SFF as it didn't exist yet. There is a $10K donation from OP to a UC Berkeley group in 2015 that is not visible in the main chart. "Other donors" includes mostly individual [...]
--- First published: June 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/nnTQaLpBfy2znG5vm/the-flow-of-funding-in-ea-movement-building --- Narrated by TYPE III AUDIO. Share feedback on this narration.
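If you do download the linked spreadsheet, the post's aggregations are straightforward to reproduce with pandas. A minimal sketch follows; the file name and the column names ("year", "funder", "amount_usd") are assumptions for illustration, not the spreadsheet's actual schema.

```python
# Sketch of reproducing the post's funding aggregations from an exported CSV.
# File name and column names are hypothetical placeholders.

import pandas as pd

grants = pd.read_csv("ea_movement_building_grants.csv")

# Total known movement-building funding per year, broken out by funder.
by_funder_year = (
    grants.groupby(["year", "funder"])["amount_usd"]
          .sum()
          .unstack(fill_value=0)
)
print(by_funder_year)

# Share of total known funding directed by each funder since 2012.
shares = grants.groupby("funder")["amount_usd"].sum()
print((shares / shares.sum()).sort_values(ascending=False).round(3))
```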
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lightcone Infrastructure is looking for funding, published by habryka on June 14, 2023 on LessWrong. Lightcone Infrastructure is looking for funding and is working on the following projects: We run LessWrong, the AI Alignment Forum, and have written a lot of the code behind the Effective Altruism Forum. During 2022 and early 2023 we ran the Lightcone Offices, and are now building out a campus at the Rose Garden Inn in Berkeley, where we've been doing repairs and renovations for the past few months. We've also been substantially involved in the Survival and Flourishing Fund's S-Process (having written the app that runs the process) and are now running Lightspeed Grants. We also pursue a wide range of other smaller projects in the space of "community infrastructure" and "community crisis management". This includes running events, investigating harm caused by community institutions and actors, supporting programs like SERI MATS, and maintaining various small pieces of software infrastructure. If you are interested in funding us, please shoot me an email at habryka@lesswrong.com (or if you want to give smaller amounts, you can donate directly via PayPal here). Funding is quite tight since the collapse of FTX, and I do think we work on projects that have a decent chance of reducing existential risk and generally making humanity's future go a lot better, though this kind of stuff sure is hard to tell. We are looking to raise around $3M to $6M for our operations in the next 12 months. Also feel free to ask any questions in the comments. Two draft readers of this post expressed confusion that Lightcone needs money, given that we just announced a funding process that is promising to give away $5M in the next two months. The answer to that is that we do not own the money moved via Lightspeed Grants and are only providing grant recommendations to Jaan Tallinn and other funders. We do separately apply for funding from the Survival and Flourishing Fund, through which Jaan has been our second biggest funder. We also continue to actively fundraise from both SFF and Open Philanthropy (our largest funder). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching Lightspeed Grants (Apply by July 6th), published by habryka on June 7, 2023 on LessWrong. Lightspeed Grants provides fast funding for projects that help humanity flourish among the stars. The application is minimal and grant requests of any size ($5k - $5M) are welcome. The budget is $5M for this grant round, and (probably) more in future rounds. Applications close in 30 days (July 6th). Opt into our venture grants program to get a response within 14 days (otherwise get a response in 30-60 days, around the start of August). Apply here. The application should only take 1-2 hours! If you want to join as a funder, send us an email at funds@lightspeedgrants.org. Is the application really only 2 hours though? Often, applicants get nervous about grant applications and spend a lot more time than they need to on them, or get overwhelmed and procrastinate on applying. We really just want you to spell out some basic information about your project in a plain way and think this is doable in the 1-2 hour timeframe. If you're worried about overthinking things, we'll have application co-working sessions and office hours every Thursday of July between noon and 2PM PT. If you think you might procrastinate on the application or get stuck in the weeds and spend a ton of unnecessary time on it, you can join one and fill out the application on the call, plus ask questions. Add the co-working to your calendar here! Who runs Lightspeed Grants? Lightspeed Grants is run by Lightcone Infrastructure. Applications are evaluated by ~5 evaluators selected for their general reasoning ability and networks including applicants/references, and are chosen in collaboration with our funders. Our primary funder for this round is Jaan Tallinn. Applications are open to individuals, nonprofits, and projects that don't have a charitable sponsor. When necessary, Hack Club Bank provides fiscal sponsorship for successful applications. Why? Improved grantee experience I've been doing various forms of grantmaking for 5+ years, both on the Long Term Future Fund and the Survival and Flourishing Fund, and I think it's possible to do better, both in grant quality and applicant-experience. Applications tend to be unnecessarily complicated to fill out, and it can take months to get a response from existing grantmakers, often without any intermediate updates. Different donors also often end up playing donor-chicken where donors wait to fund an organization to see whether other donors will fund it first, delaying decisions further. This period of funding uncertainty can have large effects on organizational strategy, and also makes experimenting with smaller projects much more costly, since each grant application might be associated with weeks to months of funding uncertainty, meaning it takes months to go from an idea to execution, or to go from "the beta test turned out well" to moving forward with the project. My goal is to have an application process that requires minimal additional work beyond "explain why your project is a good idea and you are a good fit for it" and where most responses happen within 2 weeks. This round, we're aiming for something somewhat less ambitious while we find our footing, and are planning to get back to people reliably in less than 60 days, with the ability to opt into a 14-day response process. 
Improved funder experience Currently funders either have to find hyper-local grant opportunities among their friends and acquaintances and fund them directly, start something like their own foundation, or give up control over their funds and donate money to something like the Long Term Future Fund, which will then fund some portfolio of grants that the funder has relatively little insight into (especially with the decrease in grant writeups from the LTFF and EAIF). The Lightspeed Grants proce...
There is no doubt that artificial intelligence – or AI – has become an important part of our lives. It is no longer just a thing of science fiction: it's an incredible technological breakthrough that has changed the way we live. But there are fears that AI has become too intelligent and could be a threat to humanity. This claim might sound extreme, but a letter signed by more than 1,000 technology experts, including Tesla boss Elon Musk, called on the world to press pause on the development of more advanced AI because of the risks. Estonian billionaire Jaan Tallinn, for example, who helped develop communication app Skype, thinks we should be cautious. And The Future of Life Institute, a not-for-profit organisation, says that there should be a temporary pause in advanced AI development, saying that "AI systems with human-competitive intelligence can pose profound risks to society and humanity." This pessimistic outlook is supported by a report by investment bank Goldman Sachs that says AI could replace the equivalent of 300 million full-time jobs. But it may also mean new jobs and a productivity boom. We may argue that AI such as chatbots can help us. State-of-the-art ChatGPT, for example, has been helping some students write assignments. AI is allowing computers to think or act in a more human way. And machine learning means computers can learn what to do without being given explicit instructions. The technology is impressive, but as it starts to think for itself, will it outsmart us? Some people are more optimistic. AI advocates say the tech is already delivering real social and economic benefits for people. Meanwhile, the founder of Microsoft, Bill Gates, has called on governments to work with industry to "limit the risks" of AI. But he says the technology could save lives, particularly in poorer countries. He says, "Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world's best AIs on its biggest problems." If this happens, maybe humanity will have a future.
Vocabulary: artificial intelligence (AI); science fiction; breakthrough; humanity; press pause; advanced; communication app; human-competitive; profound; replace; productivity; chatbot; state-of-the-art; machine learning; impressive; outsmart; advocate; brightest
Jaan Tallinn is a co-founder of the Centre for the Study of Existential Risk in Cambridge, the Future of Life Institute in Cambridge, and a founding engineer of Skype. Session summary: Foresight Mentorship with Jaan Tallinn - Foresight Institute. The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support. Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. Apply to Foresight's virtual salons and in-person workshops here! We are entirely funded by your donations. If you enjoy what we do please consider donating through our donation page. Visit our website for more content, or join us here: Twitter, Facebook, LinkedIn. Every word ever spoken on this podcast is now AI-searchable using Fathom.fm, a search engine for podcasts. Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jaan Tallinn's 2022 Philanthropy Overview, published by jaan on May 14, 2023 on LessWrong. to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with 2022 results. in 2022 i made $23M worth of endpoint grants ($22.9M after various admin costs), exceeding my commitment of $19.9M (20k times $993.64 — the minimum price of ETH in 2022). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
The co-creator of Skype says yes. The George Mason University economist says no.
The race to create advanced AI is becoming a suicide race. That's part of the thinking behind the open letter from the Future of Life Institute which "calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4". In this episode, our guest, Jaan Tallinn, explains why he sees this pause as a particularly important initiative. In the 1990s and 2000s, Jaan led much of the software engineering for the file-sharing application Kazaa and the online communications tool Skype. He is also known as one of the earliest investors in DeepMind, before they were acquired by Google. More recently, Jaan has been a prominent advocate for the study of existential risks, including the risks from artificial superintelligence. He helped set up the Centre for the Study of Existential Risk (CSER) in 2012 and the Future of Life Institute (FLI) in 2014.
Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.cser.ac.uk/
https://en.wikipedia.org/wiki/Jaan_Tallinn
Topics addressed in this episode include:
*) The differences between CSER and FLI
*) Do the probabilities for the occurrence of different existential risks vary by orders of magnitude?
*) The principle that "arguments screen authority"
*) The possibility that GPT-6 will be built, not by humans, but by GPT-5
*) Growing public concern, all over the world, that the fate of all humanity is, in effect, being decided by the actions of just a small number of people in AI labs
*) Two reasons why FLI recently changed its approach to AI risk
*) The AI safety conference in 2015 in Puerto Rico was initially viewed as a massive success, but it has had little lasting impact
*) Uncertainty about a potential cataclysmic event doesn't entitle people to conclude it won't happen any time soon
*) The argument that LLMs (Large Language Models) are an "off ramp" rather than being on the road to AGI
*) Why the duration of 6 months was selected for the proposed pause
*) The "What about China?" objection to the pause
*) Potential concrete steps that could take place during the pause
*) The FLI document "Policymaking in the pause"
*) The article by Luke Muehlhauser of Open Philanthropy, "12 tentative ideas for US AI policy"
*) The "summon and tame" way of thinking about the creation of LLMs - and the risk that minds summoned in this way won't be able to be tamed
*) Scenarios in which the pause might be ignored by various entities, such as authoritarian regimes, organised crime, rogue corporations, and extraordinary individuals such as Elon Musk and John Carmack
*) A meta-principle for deciding which types of AI research should be paused
*) 100 million dollar projects become even harder when they are illegal
*) The case for requiring the pre-registration of large-scale mind-summoning experiments
*) A possible 10^25 limit on the number of FLOPs (Floating Point Operations) an AI model can spend
*) The reactions by AI lab leaders to the widescale public response to GPT-4 and to the pause letter
*) Even Sundar Pichai, CEO of Google/Alphabet, has called for government intervention regarding AI
*) The hardware overhang complication with the pause
*) Not letting "the perfect" be "the enemy of the good"
*) Elon Musk's involvement with FLI and with the pause letter
*) "Humanity now has cancer"
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
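Since the 10^25 FLOP limit comes up in the episode as a concrete policy proposal, a quick back-of-the-envelope calculation helps show what it means in practice. The sketch below uses the common rule of thumb that dense-transformer training costs roughly 6 x parameters x training tokens FLOPs; the model size and token count are illustrative, not figures from the episode.

```python
# Back-of-the-envelope check against a hypothetical 1e25 FLOP training-compute threshold.
# Uses the common 6 * N * D heuristic for dense transformers; example numbers are illustrative.

FLOP_LIMIT = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute for a dense transformer (6 * params * tokens)."""
    return 6 * params * tokens

example = training_flops(params=7e10, tokens=2e12)   # e.g. a 70B-parameter model on 2T tokens
print(f"estimated training compute: {example:.2e} FLOPs")   # -> 8.40e+23
print("over the 1e25 threshold:", example > FLOP_LIMIT)     # -> False
```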
Nathan Labenz dives in with Jaan Tallinn, a technologist, entrepreneur (Kazaa, Skype), and investor (DeepMind and more) whose unique life journey has intersected with some of the most important social and technological events of our collective lifetime. Jaan has since invested in nearly 180 startups, including dozens of AI application layer companies and some half dozen startup labs that focus on fundamental AI research, all in an effort to support the teams that he believes most likely to lead us to AI safety, and to have a seat at the table at organizations that he worries might take on too much risk. He's also founded several philanthropic nonprofits, including the Future of Life Institute, which recently published the open letter calling for a six-month pause on training new AI systems. In this discussion, we focused on:
- the current state of AI development and safety
- Jaan's expectations for possible economic transformation
- what catastrophic failure modes worry him most in the near term
- how big of a bullet we dodged with the training of GPT-4
- which organizations really matter for immediate-term pause purposes
- how AI race dynamics are likely to evolve over the next couple of years
RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade-offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord.
https://link.chtbl.com/hrheretics
LINKS REFERENCED IN THE EPISODE:
Future of Life's open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Eliezer Yudkowsky's TIME article: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Podcast: Daniela and Dario Amodei on Anthropic - https://podcasts.apple.com/ie/podcast/daniela-and-dario-amodei-on-anthropic/id1170991978?i=1000552976406
Zvi on the pause: https://thezvi.substack.com/p/on-the-fli-ai-risk-open-letter
TIMESTAMPS:
(0:00) Episode Preview
(1:30) Jaan's impressive entrepreneurial career and his role in the recent AI Open Letter
(3:26) AI safety and Future of Life Institute
(6:55) Jaan's first meeting with Eliezer Yudkowsky and the founding of the Future of Life Institute
(13:00) Future of AI evolution
(15:55) Sponsor: Omneky
(17:20) Jaan's investments in AI companies
(24:22) The emerging danger paradigm
(28:10) Economic transformation with AI
(33:48) AI supervising itself
(35:23) Language models and validation
(40:06) Evolution, useful heuristics, and lack of insight into selection process
(43:13) Current estimate for life-ending catastrophe
(46:09) Inverse scaling law
(54:20) Our luck given the softness of language models
(56:24) Future of Language Models
(1:01:00) The Moore's law of mad science
(1:03:02) GPT-5 type project
(1:09:00) The AI race dynamics
(1:11:00) AI alignment with the latest models
(1:14:31) AI research investment and safety
(1:21:00) What a six month pause buys us
(1:27:01) AI's Turing Test Passing
(1:29:33) AI safety and risk
(1:33:18) Responsible AI development
(1:41:20) Neuralink implant technology
On March 22, the Future of Life Institute published an open letter recommending a pause on the development of powerful artificial intelligences until it is clear what the consequences of such development would be. To date, the letter has gathered more than 20,000 signatures. The higher figure reported at one point (over 60 thousand signatures) was due to the institute not being used to processing such a large flow of signatures. Our guest is one of the creators of Skype and technology investor Jaan Tallinn. He is also a founder of the institute as well as an author of the letter. We discuss the dangers involved in creating powerful artificial intelligences, why they are as dangerous as nuclear weapons, why they can be called aliens, and whether they will one day also teach our children. And, most importantly, can governments do anything about it? Hosted by Marek Strandberg.
Katy Balls hosts the highlights from Sunday morning's political shows. The Home Secretary Suella Braverman stands by her Rwanda immigration policy despite evidence refugees were shot by police there in 2018. Business representatives Minette Batters and Murray Lambell argue immigration needs to go up, not down. Braverman and Labour's Lisa Nandy clash over who is to blame for a lack of action over child sexual exploitation. And Skype co-founder Jaan Tallinn suggests AI might represent an existential threat to humanity. Produced by Joe Bedell-Brill and Cindy Yu.
That's right: a thousand public figures, among them Elon Musk, Apple co-founder Steve Wozniak, and Skype founder Jaan Tallinn, have warned that an out-of-control race is under way to develop very powerful systems that no one, not even their creators, can understand, predict, or reliably control.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Japan AI Alignment Conference, published by Chris Scammell on March 10, 2023 on The AI Alignment Forum. Conjecture and ARAYA are hosting and organizing the first Japan AI Alignment Conference. The conference will take place in Tokyo, Japan on March 11 and 12. Details about the event can be found here. This event is generously supported by a grant from the Long Term Future Fund. The goal of the conference is to illustrate the AI control problem to Japanese AI researchers, introduce them to current trends in AI alignment research, inspire new research directions, and to provide Western researchers exposure to a different set of AI safety thoughts from Japan. This is an exploratory event, and we plan to write a postmortem about the event in due time. The first half of the conference will be livestreamed. It will feature an opening talk from Connor Leahy (CEO of Conjecture), a fireside chat between Ryota Kanai (CEO of ARAYA) and Jaan Tallinn, and some presentations on AI safety research directions in the West and in Japan. You can follow the first part of the conference here. The livestream runs from 9:30am-12:30pm JST. The rest of the conference will not be livestreamed, and will consist of in-person small group workshops to discuss various AI alignment research directions. The conference will have ~50 attendees from ARAYA, Conjecture, Whole Brain Architecture Initiative, MIRI, OpenAI, RIKEN, Ritsumeikan University, University of Tokyo, Omron Sinic X, Keio University, and others. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Japan AI Alignment Conference, published by Chris Scammell on March 10, 2023 on LessWrong. Conjecture and ARAYA are hosting and organizing the first Japan AI Alignment Conference. The conference will take place in Tokyo, Japan on March 11 and 12. Details about the event can be found here. This event is generously supported by a grant from the Long Term Future Fund. The goal of the conference is to illustrate the AI control problem to Japanese AI researchers, introduce them to current trends in AI alignment research, inspire new research directions, and to provide Western researchers exposure to a different set of AI safety thoughts from Japan. This is an exploratory event, and we plan to write a postmortem about the event in due time. The first half of the conference will be livestreamed. It will feature an opening talk from Connor Leahy (CEO of Conjecture), a fireside chat between Ryota Kanai (CEO of ARAYA) and Jaan Tallinn, and some presentations on AI safety research directions in the West and in Japan. You can follow the first part of the conference here. The livestream runs from 9:30am-12:30pm JST. The rest of the conference will not be livestreamed, and will consist of in-person small group workshops to discuss various AI alignment research directions. The conference will have ~50 attendees from ARAYA, Conjecture, Whole Brain Architecture Initiative, MIRI, OpenAI, RIKEN, Ritsumeikan University, University of Tokyo, Omron Sinic X, Keio University, and others. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Jaan Tallinn is one of the founding engineers of Skype and the file-sharing service Kazaa. He's also a co-founder of the Centre for the Study of Existential Risk and the Future of Life Institute. Auren and Jaan discuss the trajectory of artificial intelligence and the existential risks it could present to humanity. Jaan talks about the prevailing attitudes towards risk in AI research and what needs to change in order to get aligned, safe AI. Jaan and Auren also talk about how Jaan's native Estonia has become one of the most tech-forward societies in Europe. World of DaaS is brought to you by SafeGraph. For more episodes, visit safegraph.com/podcasts. You can find Auren Hoffman on Twitter at @auren and Jaan Tallinn's work on YouTube and at the Centre for the Study of Existential Risk and the Future of Life Institute.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR: 2022 Update, published by CEEALAR on December 13, 2022 on The Effective Altruism Forum. TL;DR: we are still going, currently have lots of space, and have potential for further growth. Please apply if you have EA-related learning or research you want to do that requires support.
Update
It's been a while since our last update, but, suffice to say, we are still here! During 2021 we gradually increased our numbers again, from a low of 4 grantees mid-pandemic to 15 by the end of the year (our full capacity, with no room sharing and 2 staff living on site). We lifted all Covid restrictions in March 2022, and things started to feel like they did pre-pandemic. However, our building and its contents are old, and in mid-May this year we closed to new grantees for building repairs and maintenance. We reopened bookings at the end of July, by which time we had once again got down to very low numbers. We are now up and running again and starting to fill up with new grantees, but we still have plenty of spare capacity. Please apply if you are interested in doing EA work at the hotel. We are offering (up to full) subsidies on accommodation and board for those wishing to learn or work on research or charitable projects (in fitting with our charitable objects). See our Grant Making Policy for more details. Along with the ups and downs in numbers, we've had ups and downs in other ways. We were delighted to receive our largest grant to date, from the FTX Future Fund, in May ($125k, or roughly a year of runway), but this is now bittersweet given recent events. We condemn the actions of SBF and the FTX/Alameda inner circle, and are ashamed of the association. It's possible the grant will be subject to clawbacks as a result of the commenced bankruptcy proceedings. As with many FTX grantees in the EA community, we are following and discussing the situation as it unfolds. We intend to follow the consensus that emerges around any voluntary returning of unspent funds. Despite the significant funding, with the ongoing energy crisis, inflation in general, and increased spending on building maintenance and salaries, our costs have risen rapidly, and we were recently down to ~4 months of runway again. Enter the Survival & Flourishing Fund (SFF): we are extremely grateful to have been awarded a grant of $224,000(!) by Jaan Tallinn as per their most recent announcement. In order to attract and retain talent, with the last grant we upped our management salaries to approximately the UK median salary (£31,286), plus accommodation and food (worth about £6k). It's now 4.5 years since we first opened. Since then we have supported ~100 EAs aspiring to do direct work with their career development, and hosted another ~200 visitors from the EA community participating in events, networking and community building. We've established an EA community hub in a relatively low-cost location. We believe there is plenty of potential demand for it to scale, but we still need to get the word out (which we are doing in part with this blog post).
Our impact: grantee work
There are two main aspects to our potential impact: the direct work and career building of our grantees, and the community building and networking we facilitate. We are open to people working on all cause areas of EA, with the caveat that the work we facilitate is desk-based and mostly remote.
In practice, this has meant that longtermist topics, especially x-risks, and in particular AI Alignment, have been foremost amongst the work of the grantees we have hosted. But we have also had grantees interested in animal welfare, global health, wellbeing, development and progress, and meta topics related to EA community building. Since our last update, we have had a number of grantees go on to internships, contracts and jobs at the likes of SERI, CHAI, Alvea, Re...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF is doubling speculation (rapid) grant budgets; FTX grantees should consider applying, published by JueYan on December 8, 2022 on The Effective Altruism Forum. The Survival and Flourishing Fund (SFF) funds many longtermist, x-risk, and meta projects, and has distributed $18M year to date. While SFF's focus areas are similar to those of the FTX Future Fund, SFF has received few applications since the latest round closed in August. This is a reminder that projects can apply to be considered for expedited speculation grants at any time. Speculation grants can be approved in days and paid out as quickly as within a month. Past speculation grants have ranged from $10,000 to $400,000, and applicants for speculation grants will automatically be considered for the next main SFF round. In response to the recent extraordinary need, Jaan Tallinn, the main funder of SFF, is doubling speculation budgets. Grantees impacted by recent events should apply. SFF funds charities and projects hosted by organizations with charity status. You can get a better idea of SFF's scope from its website and its recent grants. I encourage relevant grantees to consider applying to SFF, in addition to the current array of efforts led by Open Phil, Mercatus, and Nonlinear. For general information about the Survival and Flourishing Fund, see the SFF website. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 ALLFED highlights, published by Ross Tieman on November 28, 2022 on The Effective Altruism Forum. Executive Summary: Like many others, we are reeling from recent FTX news and what it means for ALLFED and the whole EA community. The Alliance to Feed the Earth in Disasters (ALLFED) was an FTX Future Fund grantee. Given the current landscape, we were debating whether we should include it in these highlights. We have decided to do so in the interest of transparency and integrity, so as to accurately report on our January-November 2022 position. We would like to start with massive thanks to Jaan Tallinn, whose generous support last year through the Survival and Flourishing Fund ($1,154,000) is a major reason why we are able to weather this storm (also a huge thank you to all our other donors; we appreciate each and every one). 2022 marks ALLFED's 5th anniversary (2017-2022). Being a fully remote team, we now have team members on all continents except Antarctica. By the end of the year, we will have a presence in New Zealand due to David Denkenberger accepting a professor position at the University of Canterbury in Christchurch. In these 2022 highlights: To start with, we give updates on ALLFED's 2022 research, including our papers and Abrupt Sunlight Reduction Scenario (ASRS) preparedness and response plans, including a recent proposal for the US government. Next, we talk about financial mechanisms for food system interventions, including superpests, the climate-food-finance nexus, pandemic preparedness, and our policy work. We then move to operations and communications highlights, including our media mentions. We next talk about events, workshops and presentations we have delivered this year. We then dive into some major changes to our team (including at the management level), ALLFED's internships and our volunteering program, and also give key statistics from this spring's research associate recruitment (there you will also find imminent PhD opportunities as well as a temporary researcher position with David in New Zealand). Finally, we thank those whose support we wish to especially recognize this year and talk about our funding needs for 2023, which range from dedicated funding to establish an ALLFED UK charity, to resilient food pilots, to support to continue key priority research projects on the topic of resilient foods for nuclear winter-level shocks, and to support preparedness and response plans (essential if we are to be able to present to decision makers within the current policy window). There is no escaping the fact that, rather unexpectedly, our funding situation has worsened due to the FTX developments. We will therefore be especially grateful for your donations and support this giving season (please visit our donation webpage or contact david@allfed.info if you are interested in donating appreciated stock). Since our inception, we have been contributing annual updates to the EA Forum. You can find last year's ALLFED Highlights here, and here is our last EA Forum post, EA Resilience & ALLFED's Case Study.
Research
It's been a good year for research at ALLFED.
Papers
We have submitted 4 papers to peer review, one of which has now been accepted and published. Authors: David Denkenberger, Anders Sandberg, Ross John Tieman, Joshua M.
Pearce. Status: published (peer-reviewed). Journal: The International Journal of Disaster Risk Reduction. This paper estimates the long-term cost-effectiveness of resilient foods for preventing starvation in the face of a global agricultural collapse caused by a long-lasting sunlight reduction, and compares it with that of investing in artificial general intelligence (AGI) safety. Using two versions of a probabilistic model, the researchers find that investing in resilient foods is more cost-effective than investing in AGI safety, with a confidence of ...
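The comparison above (resilient foods versus AGI safety, under two versions of a probabilistic model) is the kind of claim usually backed by Monte Carlo simulation. The sketch below shows only the generic technique: draw the uncertain cost-effectiveness of each intervention from a distribution and report how often one beats the other. The distributions, parameters, and function names here are placeholders of mine, not the paper's actual model.

```python
import random

def prob_a_beats_b(n: int = 100_000, seed: int = 0) -> float:
    """Generic Monte Carlo comparison of two uncertain cost-effectiveness
    estimates: the fraction of samples in which intervention A beats B.
    The lognormal parameters below are illustrative placeholders only."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        a = rng.lognormvariate(0.0, 1.0)   # stand-in for resilient foods
        b = rng.lognormvariate(-0.5, 1.0)  # stand-in for AGI safety
        wins += a > b
    return wins / n

print(f"share of samples where A is more cost-effective: {prob_a_beats_b():.2f}")
```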
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If FTX is liquidated, who ends up controlling Anthropic?, published by ofer on November 15, 2022 on The Effective Altruism Forum. [EDIT: The original title of this question was: "If FTX/Alameda currently control Anthropic, who will end up controlling it?". Based on the comments so far, it does not seem likely that FTX/Alameda currently control Anthropic.] If I understand correctly, FTX/Alameda have seemingly invested $500M in Anthropic, according to a Bloomberg article: FTX/Alameda were funneling customer money into effective altruism. Bankman-Fried seems to have generously funded a lot of effective altruism charities, artificial-intelligence and pandemic research, Democratic political candidates, etc. One $500 million entry on the desperation balance sheet is “Anthropic,” a venture investment in an AI safety company. [...] That seems consistent with the following excerpts from this page on Anthropic's website: Anthropic, an AI safety and research company, has raised $580 million in a Series B. The Series B follows the company raising $124 million in a Series A round in 2021. The Series B round was led by Sam Bankman-Fried, CEO of FTX. The round also included participation from Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research (CERR). Is it likely that FTX/Alameda currently have >50% voting power over Anthropic? If they do, who will end up having control over Anthropic? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Huw Price is an author, a former Bertrand Russell Professor of Philosophy at the University of Cambridge, and a co-founder of the Centre for the Study of Existential Risk. During our conversation, Huw talks about meeting Jaan Tallinn of Skype, learning about lesser-known existential risks of artificial intelligence and catastrophic new biological threats, the founding of the Centre in 2012, what an existential threat is, a near-existential event in 1962, and what an average citizen can do to mitigate the probability of an extinction event. There is no more important subject than the prevention of our own annihilation and the continuation of the human story. It is harrowing to learn how close we have already come to ending human existence on Earth, and it behooves all of us to learn a bit about what our x-risks are and align our priorities, knowledge, wisdom, and resources to lessen their likelihood.
Support this podcast via Venmo, PayPal, or Patreon. Show notes: leave a rating on Spotify or Apple Podcasts, and follow "Keep Talking" on social media to access all episodes.
(00:00) Introduction
(02:47) Getting involved in x-risk
(13:27) What is existential risk?
(19:10) What would an existential event look like?
(23:23) The x-risk of AI
(26:40) The x-risk of biological threats
(30:30) "The Precipice"
(31:28) How Vasili Arkhipov likely saved humanity
(37:28) The Future of Life Institute
(40:35) The x-risk of nuclear weapons
(44:12) The risks of climate change
(50:55) 1 in 6 chance of human extinction this century
(53:35) Is it unethical to have children?
(1:00:28) Actions people can make to mitigate x-risk
(1:02:14) Do x-risk issues cause Huw depression?
(1:04:13) Should people become "preppers"?
(1:06:40) Huw's advice to deal with x-risks
(1:10:25) Leaders in the x-risk community
(1:12:30) Advice for mindset and attitude
(1:15:20) Sources of hope and optimism
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many EA Billionaires five years from now?, published by Erich Grunewald on August 20, 2022 on The Effective Altruism Forum. Dwarkesh Patel argues that "there will be many more effective altruist billionaires". He gives three reasons for thinking so: People who seek glory will be drawn to ambitious and prestigious effective altruist projects. One such project is making a ton of money in order to donate it to effective causes. Effective altruist wealth creation is a kind of default choice for "young, risk-neutral, ambitious, pro-social tech nerds", i.e. people who are likelier than usual to become very wealthy. Effective altruists are more risk-tolerant by default, since you don't get diminishing returns on larger donations the same way you do on increased personal consumption. These early-stage businesses will be able to recruit talented effective altruists, who will be unusually aligned with the business's objectives. That's because if the business is successful, even if you as an employee don't cash out personally, you're still having an impact (either because the business's profits are channelled to good causes, as with FTX, or because the business's mission is itself good, as with Wave). The post itself is kind of fuzzy on what "many" means or which time period it's concerned with, but in a follow-up comment Patel mentions having made an even-odds bet to the effect that there'll be ≥10 new effective altruist billionaires in the next five years. He also created a Manifold Markets question which puts the probability at 38% as I write this. (A similar question on whether there'll be ≥1 new, non-crypto, non-inheritance effective altruist billionaire in 2031 is currently at 79%, which seems noticeably more pessimistic.) I commend Patel for putting his money where his mouth is!
Summary
With (I believe) moderate assumptions and a simple model, I predict 3.5 new effective altruist billionaires in 2027. With more optimistic assumptions, I predict 6.0 new billionaires. ≥10 new effective altruist billionaires in the next five years seems improbable. I present these results and the assumptions that produced them and then speculate haphazardly.
Assumptions
If we want to predict how many effective altruist billionaires there will be in 2027, we should attend to base rates. As far as I know, there are five or six effective altruist billionaires right now, depending on how you count. They are Jaan Tallinn (Skype), Dustin Moskovitz (Facebook), Sam Bankman-Fried (FTX), Gary Wang (FTX) and one unknown person doing earning to give. We could also count Cari Tuna (Dustin Moskovitz's wife and cofounder of Open Philanthropy). It's possible that someone else from FTX is also an effective altruist and a billionaire. Of these, as far as I know only Sam Bankman-Fried and Gary Wang were effective altruists prior to becoming billionaires (the others never had the chance, since effective altruism wasn't a thing when they made their fortunes). William MacAskill writes: Effective altruism has done very well at raising potential funding for our top causes. This was true two years ago: GiveWell was moving hundreds of millions of dollars per year; Open Philanthropy had potential assets of $14 billion from Dustin Moskovitz and Cari Tuna. But the last two years have changed the situation considerably, even compared to that.
The primary update comes from the success of FTX: Sam Bankman-Fried has an estimated net worth of $24 billion (though bear in mind the difficulty of valuing crypto assets, and their volatility), and intends to give essentially all of it away. The other EA-aligned FTX early employees add considerably to that total. There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor. At least one person earning to give (and not related to FT...
Jaan Tallinn is a software engineer, entrepreneur, technology investor, and one of the world's leading experts on artificial intelligence and existential risk. Jaan studied theoretical physics at the University of Tartu and is one of the creators of Skype. In 2012 he founded the Centre for the Study of Existential Risk at the University of Cambridge in England, and in 2014 he co-founded the Future of Life Institute, also in Cambridge, but this time in the state of Massachusetts in the USA. The depth of Jaan's expertise, together with his empathy, makes him an important thinker for our world, one who works to raise awareness among decision makers and builders about the risks of artificial intelligence and thereby to prevent an unlivable future. In this episode we talk about: his childhood in Jõhvi with his grandparents; how his interest in programming arose, and the primary and secondary school studies that shaped him; founding and building Bluemoon, from games, Soundclub, and software written for a Swiss bank, through Everyday, to Skype; Skype's early days, its scaling, the mistakes made along the way, and how, of its five possible birth countries, Estonia has benefited from it the most; and AI risks: deep learning, the development of artificial intelligence, and investing. How can AI help build a fairer world? Join the newsletter at www.globaalsedeestlased.org so that a new episode reaches your inbox every week!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Could AI Governance Go Wrong?, published by HaydnBelfield on May 26, 2022 on The Effective Altruism Forum. (I gave a talk to EA Cambridge in February 2022. People have told me they found it useful as an introduction/overview so I edited the transcript, which can be found below. If you're familiar with AI governance in general, you may still be interested in the sections on 'Racing vs Dominance' and 'What is to be done?'.) Talk Transcript I've been to lots of talks which catch you with a catchy title and they don't actually tell you the answer until right at the end so I'm going to skip right to the end and answer it. How could AI governance go wrong? These are the three answers that I'm gonna give: over here you've got some paper clips, in the middle you've got some very bad men, and then on the right you've got nuclear war. This is basically saying the three cases are accident, misuse and structural or systemic risks. That's the ultimate answer to the talk, but I'm gonna take a bit longer to actually get there. I'm going to talk very quickly about my background and what CSER (the Centre for the Study of Existential Risk) is. Then I'm going to answer what is this topic called AI governance, then how could AI governance go wrong? Before finally addressing what can be done, so we're not just ending on a sad glum note but we're going out there realising there is useful stuff to be done. My Background & CSER This is an effective altruism talk, and I first heard about effective altruism back in 2009 in a lecture room a lot like this, where someone was talking about this new thing called Giving What We Can, where they decided to give away 10% of their income to effective charities. I thought this was really cool: you can see that's me on the right (from a little while ago and without a beard). I was really taken by these ideas of effective altruism and trying to do the most good with my time and resources. So what did I do? I ended up working for the Labour Party for several years in Parliament. It was very interesting, I learned a lot, and as you can see from the fact that the UK has a Labour government and is still in the European Union, it went really well. Two of the people I worked for are no longer even MPs. After this sterling record of success down in Westminster – having campaigned in one general election, two leadership elections and two referendums – I moved up to Cambridge five years ago to work at CSER. The Centre for the Study of Existential Risk: we're a research group within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse. We do high quality academic research, we develop strategies for how to reduce risk, and then we field-build, supporting a global community of people working on existential risk. We were founded by these three very nice gentlemen: on the left that's Prof Huw Price, Jaan Tallinn (founding engineer of Skype and Kazaa) and Lord Martin Rees. We've now grown to about 28 people (tripled in size since I started) - there we are hanging out on the bridge having a nice chat. 
A lot of our work falls into four big risk buckets: pandemics (a few years ago I had to justify why that was in the slides; now unfortunately it's very clear to all of us); AI, which is what we're going to be talking mainly about today; climate change and ecological damage; and then systemic risk from all of our intersecting vulnerable systems. Why care about existential risks? Why should you care about this potentially small chance of the whole of humanity going extinct or civilization collapsing in some big catastrophe? One very common answer is looking at the size of all the future generations that could come if we don't mess things up. The little circle in the middle is the number of ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk, published by Jeffrey Ladish on May 5, 2022 on The Effective Altruism Forum. I've often been skeptical that US political engagement was an effective use of time for EAs. During and after the 2016 election, I heard people front the idea that defeating Trump might be an effective use of EA resources. I'm skeptical that this is true, and I think it's easy to fall into the trap of "this thing my social group thinks is good: maybe it's also the most effective thing by EA standards". Politics is driven by tribalism, so I think this is especially a risk here. Recently, I've surprised myself by coming to believe that donating to candidates who support policies which reduce existential risks is probably the best passive donation opportunity for US citizens. The main reason I've changed my mind is that I think highly aligned political candidates have a lot of leverage to affect policies that could impact the long-term future and are uniquely benefited from individual donations. While I don't think that the work of individual US congress members is more effective than the work of organizations like the Alignment Research Center working directly on long-term problems, I think that the presence of large funders willing and able to fully fund organizations working on long-term causes makes supporting political candidates with aligned values a more promising target for individual donations, since congressional election campaigns are limited in how much funding they can accept from any individual donor. I think there are more effective donation opportunities, but they require special knowledge that the major EA orgs don't have access to. For example, I've been looking for promising aligned people or projects in the infosec space that could use funding to jumpstart their career or project. Since I have special knowledge and expertise here, I expect these are among the highest impact donations I can make. However, I often get pretty busy and don't have time to look for neglected funding opportunities. Given time and attention constraints, I think donating to political candidates with a strong commitment to long-term oriented policies is my best default. This year, nearly all my EA donations are going to political campaigns. I wouldn't have predicted this last year! Why do I think this is effective compared to other donations?
- Large longtermist donors (Open Philanthropy, FTX, Jaan Tallinn, etc.) can and do fund most promising organizations working on long-term risks.
- Political campaigns are limited by the size of individual donations from US citizens because of campaign finance laws. Contributions to Congressional candidates are limited to $2,900 per election, so $5,800 per year (a primary election counts as a different election).
- I think having candidates in Congress willing to sponsor legislation on long-term issues like biosecurity and AI existential risk could significantly improve the prospects for policy interventions in these spaces.
- There are officials in US government who prioritize long-term concerns, but no elected officials meet this bar. The 0-1 difference in Congress is large! This is because a single congress person can sponsor legislation.
As a start, it would be very good to have two candidates, one from each party, in both the Senate and the House. A common concern among EAs is that supporting candidates might polarize important cause areas. Supporting candidates from different parties could help mitigate this risk and work towards bipartisan support of global risk reduction, an area that should appeal to people of any party. An example policy area I think is high impact: banning gain-of-function research. This is a policy that nearly everyone wor...
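To make the contribution-limit arithmetic above concrete, here is a minimal sketch using only the figures quoted in the post ($2,900 per election, with the primary counted separately from the general). The helper names are mine, and this is an illustration of the post's numbers, not legal guidance.

```python
# Per-candidate contribution math as described in the post: $2,900 per
# election, and a primary counts as a separate election from the general,
# so an individual can give up to $5,800 per candidate per cycle.
PER_ELECTION_LIMIT_USD = 2_900
ELECTIONS_PER_CYCLE = 2  # primary + general

def max_per_candidate() -> int:
    return PER_ELECTION_LIMIT_USD * ELECTIONS_PER_CYCLE

def max_across_candidates(num_candidates: int) -> int:
    """Upper bound on direct contributions if you support several campaigns."""
    return num_candidates * max_per_candidate()

print(max_per_candidate())       # 5800
print(max_across_candidates(4))  # 23200, e.g. one candidate per party in each chamber
```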
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jaan Tallinn's 2021 Philanthropy Overview, published by jaan on April 28, 2022 on LessWrong. to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with 2021 results. in 2021 i made $22M worth of endpoint grants — exceeding my commitment of $14.4M (20k times $718.11 — the minimum price of ETH in 2021). notes: this number includes $1.9M to orgs that do re-granting (LTFF, EAIF, impetus grants, and PPF) — so it's likely that some of that $1.9M should not be included in the "endpoint grants in 2021" total. regardless, i'm comfortably above my commitment level for that not to matter; i have an ongoing substantial charitable project that's not reflected in the 2021 numbers — it's possible (and likely if the ETH price holds, as SFF's s-process alone can't handle such an amount) that i will report it retroactively next year or in 2024. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
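The pledge arithmetic in the overview above is simple to check. A minimal sketch using only the figures quoted in the post (a 20k-ETH-sized pledge, a $718.11 minimum ETH price in 2021, and $22M of endpoint grants); the helper functions are mine and purely illustrative.

```python
def pledge_commitment(min_eth_price_usd: float, pledge_eth: int = 20_000) -> float:
    """Dollar commitment for a year: 20,000 times that year's minimum ETH price."""
    return pledge_eth * min_eth_price_usd

def pledge_met(endpoint_grants_usd: float, min_eth_price_usd: float) -> bool:
    """True if the year's endpoint grants meet or exceed the commitment."""
    return endpoint_grants_usd >= pledge_commitment(min_eth_price_usd)

commitment_2021 = pledge_commitment(718.11)
print(f"2021 commitment: ${commitment_2021:,.0f}")         # ~$14.4M
print("2021 pledge met:", pledge_met(22_000_000, 718.11))  # True: $22M > $14.4M
```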
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A primer & some reflections on recent CSER work (EAB talk), published by MMMaas on April 12, 2022 on The Effective Altruism Forum. Epistemic status: quick personal views, based on my own experiences, but non-comprehensive; I estimate this covers less than 50% of CSER's work of the last few years. Format: edited transcript-with-slides (in the style of other overviews). I'm Matthijs Maas, a researcher at the Centre for the Study of Existential Risk (Cambridge University). In March I gave a 25-minute Hamming Hour talk at EA Bahamas, on some of the things we've been up to at CSER. I especially focused on CSER's work on long-termist AI governance, my area of expertise, but also covered some of our other x-risk research lines, as well as policy work. The talk was not recorded, but several people mentioned afterwards they found it useful and novel, so I decided to turn it into a quick Forum post. Most of the papers I link to below should be open-access; let me know if they're not and I'm happy to share directly. This talk aims to give a quick overview of CSER's work: what we've been up to recently, and how we approach the study and mitigation of existential risks. Now, I know that 'general institutional sales pitch' is many people's least-favourite genre of talk... but hear me out: by giving a primer on some of what CSER has been up to the past year or two, I will argue that CSER's work and approach offers: substantive and decision-relevant insights into a range of existential risks, both in terms of threat models and in terms of mitigation strategies; a distinct approach and methodology to study existential risks and global catastrophic risks; a record of working across academic disciplines and institutes to produce peer-reviewed academic work on existential risks, which can help build the credibility of the field to policymakers; and a track record of policy impact at both the national (UK) and international level, which others in EA can draw from, or learn from. As such, while CSER's academic work has occasionally been comparatively less visible in the EA community, I believe that much of CSER's work is relevant to EA work on existential risks and long-term trajectories. That's not to say we've worked this all out; there are a lot of uncertainties which I and others have, and cruxes to be worked out. They reflect some thoughts and insights that I thought would be useful to share, and I would be eager to discuss more with the community. In terms of structure: I'll discuss CSER's background, go over some of its research, and finally discuss our policy actions and community engagement. CSER was formally founded in 2012 by Lord Martin Rees, Jaan Tallinn and Prof Huw Price (so it recently celebrated its 10th anniversary). Our first researcher started in late 2015, and since then we've grown to 28 people. Most of our work can be grouped into four major clusters: AI risk (alignment, impact forecasting, and governance), biorisk, environmental collapse (climate change and ecosystem collapse), and 'meta' work on existential risks (including both the methodology of how to study existential risks, as well as the ethics of existential risks). That's the background on CSER; now I'll go through some recent projects under these research themes.
This is non-exhaustive (I estimate I cover less than 50% of CSER's work over the last few years), and I will focus mostly on our AI work, which is my specialty. Specifically, at CSER I'm part of the AI: Futures and Responsibility (AI-FAR) team. AI-FAR's work is focused on long-term AI risks, impacts and governance, and covers three main research lines: (1) AI safety, security and risk, (2) futures and foresight (of impacts), and (3) AI governance. Within the AI safety track, one interesting line of work is work by John Burden and José Hernández-Orallo, on mapping the ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shah and Yudkowsky on alignment failures, published by Rohin Shah on February 28, 2022 on The AI Alignment Forum. This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn. The discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published in the Alignment Newsletter. After this log, we'll be concluding this sequence with an AMA, where we invite you to comment with questions about AI alignment, cognition, forecasting, etc. Eliezer, Richard, Paul Christiano, Nate, and Rohin will all be participating. Color key: Chat by Rohin and Eliezer Other chat Emails Follow-ups 19. Follow-ups to the Ngo/Yudkowsky conversation 19.1. Quotes from the public discussion [Bensinger][9:22] (Nov. 25) Interesting extracts from the public discussion of Ngo and Yudkowsky on AI capability gains: Eliezer: I think some of your confusion may be that you're putting "probability theory" and "Newtonian gravity" into the same bucket. You've been raised to believe that powerful theories ought to meet certain standards, like successful bold advance experimental predictions, such as Newtonian gravity made about the existence of Neptune (quite a while after the theory was first put forth, though). "Probability theory" also sounds like a powerful theory, and the people around you believe it, so you think you ought to be able to produce a powerful advance prediction it made; but it is for some reason hard to come up with an example like the discovery of Neptune, so you cast about a bit and think of the central limit theorem. That theorem is widely used and praised, so it's "powerful", and it wasn't invented before probability theory, so it's "advance", right? So we can go on putting probability theory in the same bucket as Newtonian gravity? They're actually just very different kinds of ideas, ontologically speaking, and the standards to which we hold them are properly different ones. It seems like the sort of thing that would take a subsequence I don't have time to write, expanding beyond the underlying obvious ontological difference between validities and empirical-truths, to cover the way in which "How do we trust this, when" differs between "I have the following new empirical theory about the underlying model of gravity" and "I think that the logical notion of 'arithmetic' is a good tool to use to organize our current understanding of this little-observed phenomenon, and it appears within making the following empirical predictions..." But at least step one could be saying, "Wait, do these two kinds of ideas actually go into the same bucket at all?" In particular it seems to me that you want properly to be asking "How do we know this empirical thing ends up looking like it's close to the abstraction?" and not "Can you show me that this abstraction is a very powerful one?" 
Like, imagine that instead of asking Newton about planetary movements and how we know that the particular bits of calculus he used were empirically true about the planets in particular, you instead started asking Newton for proof that calculus is a very powerful piece of mathematics worthy to predict the planets themselves - but in a way where you wanted to see some highly valuable material object that calculus had produced, like earlier praiseworthy achievements in alchemy. I think this would reflect confusion and a wrongly directed inquiry; you would have lost sight of the particular reasoning steps that made ontological sense, in the course of trying to figure out whether calculus was praiseworthy under the standards of praiseworthiness that you'd been previously raised to believe in as universal standards about a...
What Day Is February 14? Today Is the Birthday of Cải Lương Artist Bạch Tuyết. Events: 2005 – YouTube's domain name was activated; YouTube is now the world's largest video-sharing website. 2011 – Brazilian footballer Ronaldo announced his retirement from playing. 1912 – The United States Navy commissioned its first class of diesel-powered submarines. Holidays and observances: Valentine's Day. Births: 1988 – Ángel Di María, Argentine footballer. 1987 – Edinson Cavani, Uruguayan footballer. 1992 – Christian Eriksen, Danish footballer. 1945 – Bạch Tuyết, Vietnamese cải lương artist. 1867 – Toyoda Sakichi, Japanese businessman, founder of the Toyota group (d. 1930). 1819 – Christopher Latham Sholes, American journalist and politician, inventor of the typewriter (d. 1890). 1972 – Jaan Tallinn, Estonian computer programmer, co-developer of Skype. Deaths: 2021 – Hoàng Dũng, Vietnamese artist (b. 1956). 1779 – James Cook, British explorer (b. 1728). The "What day is today" program is now available on Youtube, Facebook and Spotify: Facebook: https://www.facebook.com/aweektv - Youtube: https://www.youtube.com/c/AWeekTV - Spotify: https://open.spotify.com/show/6rC4CgZNV6tJpX2RIcbK0J - Apple Podcast: https://podcasts.apple.com/.../h%C3%B4m-nay.../id1586073418 #aweektv #14thang2 #bachtuyet #DiMaria #Cavani All videos are the property of Adwell jsc (adwell.vn); any reuse of our content is not permitted. --- Send in a voice message: https://anchor.fm/aweek-tv/message
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are giving $10k as forecasting micro-grants, published by Misha Yagudin on February 8, 2022 on The Effective Altruism Forum. (Cross-posted from the Forecasting Newsletter.) After the apparent success of ACX grants (a), we received $10k from an anonymous donor to give out as micro-grants through Nuño's Forecasting newsletter. Some examples of projects we'd be excited to fund might be:
- Open-source software and tooling to automate forecasting. Bonus points if Metaculus users or other forecasters start using it.
- Giving a shot at forecasting or making models of a difficult yet useful and decision-relevant area. Think of Rootclaim (a) analyzing the lab-escape story of Omicron.
- Pieces similar in quality to the ones mentioned in "best pieces on forecasting from 2021" (a), in Forecasting Prize Results (a), or in some possible research areas (a).
- Trying to estimate many uncertain parameters, e.g., the quality of all US or UN organizations, quality of academic fields, whether a large list of organizations will fail, enlightened willingness to pay for many products, the accuracy of many public figures, etc.
- Create a microcovid (a) or foodimpacts (a) but for other areas, like micro-marriages, micro-insights, micro-dooms, etc. Do this in a way that easily allows the creation of many of these calculators.
- Improve metaforecast (a) (which is open source (a)) in some interesting way, e.g., improve the estimates of forecast quality.
The application form is HERE (a). Feel free to apply for more than $10k: we don't anticipate having much difficulty getting more funding for promising applications, and we may refer these to other funders (e.g., the EA Infrastructure Fund (a)) if we can't. We will be accepting applications until March 15, though we may extend this period if we don't receive enough high-quality submissions. We preliminarily plan to make decisions by April the 1st. Otherwise, Luke Muehlhauser comments (a) that forecasting-related projects might be a good fit for the EA Infrastructure Fund (a). Jonas Vollmer, who runs EA Funds, confirms this (a). For larger projects, the Survival and Flourishing Fund, backed by philanthropists Jaan Tallinn and Jed McCaleb, is organizing the distribution of around $6M-$10M in grants (a) this June, with applications due on Feb 21. They generally only accept applications from registered charities, but speculation grants (a) might be a good fit for smaller projects (40%). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
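One of the example project ideas above is a microcovid-style calculator for other quantities ("micro-dooms" and so on). As a rough illustration of the core conversion behind such tools, here is a sketch that expresses a small probability in micro-units, where one micro-unit is a one-in-a-million chance; the function is hypothetical and not any existing calculator's code.

```python
def to_micro_units(probability: float) -> float:
    """Express a probability in micro-units (1 micro-unit = 1e-6 probability),
    the convention behind micromorts and microcovid-style calculators."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * 1_000_000

# Example: an activity with a 0.003% chance of some outcome is 30 micro-units.
print(to_micro_units(0.00003))  # 30.0
```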
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long-Term Future Fund: July 2021 grant recommendations, published by abergal on January 18, 2022 on The Effective Altruism Forum. Introduction The Long-Term Future Fund made the following grants through July 2021: Total grants: $1,427,019 Number of grantees: 18 Payout date: July 2021 Report authors: Asya Bergal (Chair), Oliver Habryka, Adam Gleave, Evan Hubinger, Luisa Rodriguez This payout report is substantially delayed, but we're hoping to get our next payout report covering grants through November out very soon. Updates since our last payout report: We took on Luisa Rodriguez as a guest manager in June and July. We switched from a round-based system to rolling applications. We received $675,000 from Jaan Tallinn through the Survival and Flourishing Fund. Public payout reports for EA Funds grantees are now optional. Consider applying for funding from the Long-Term Future Fund here. Grant reports Note: Many of the grant reports below are very detailed. Public reports are optional for our grantees, and we run all of our payout reports by grantees before publishing them. We think carefully about what information to include to maximize transparency while respecting grantees' preferences. We encourage anyone who thinks they could use funding to positively influence the long-term trajectory of humanity to apply for a grant. Grant reports by Asya Bergal Any views expressed below are my personal views and not the views of my employer, Open Philanthropy. In particular, receiving funding from the Long-Term Future Fund should not be read as an indication that an organization or individual has an elevated likelihood of receiving funding from Open Philanthropy. Correspondingly, not receiving funding from the Long-Term Future Fund (or any risks and reservations noted in the public payout report) should not be read as an indication that an organization or individual has a diminished likelihood of receiving funding from Open Philanthropy. Ezra Karger, Pavel Atanasov, Philip Tetlock ($572,000) Existential risk forecasting tournaments. This grant is to Ezra Karger, Pavel Atanasov, and Philip Tetlock to run an existential risk forecasting tournament. Philip Tetlock is a professor at the University of Pennsylvania; he is known in part for his work on The Good Judgment Project, a multi-year study of the feasibility of improving the accuracy of probability judgments, and for his book Superforecasting: The Art and Science of Prediction, which details findings from that study. Pavel Atanasov is a decision psychologist currently working as a Co-PI on two NSF projects focused on predicting the outcomes of clinical trials. He previously worked as a post-doctoral scholar with Philip Tetlock and Barbara Mellers at the Good Judgement Project, and as a consultant for the SAGE research team that won the last season of IARPA's Hybrid Forecasting Competition. Ezra Karger is an applied microeconomist working in the research group of the Federal Reserve Bank of Chicago; he is a superforecaster who has participated in several IARPA-sponsored forecasting tournaments and has worked with Philip Tetlock on some of the methods proposed in the tournament below. 
Paraphrasing from the proposal, the original plan for the tournament was as follows: Have a panel of subject-matter experts (SMEs) choose 10 long-run questions about existential risks and 20 short-run, resolvable, early warning indicator questions as inputs for the long-run questions. Ask the SMEs to submit forecasts and rationales for each question. Divide the superforecasters into two groups. Ask each person to forecast the same questions as the SMEs and explain those forecasts. For short-run questions, evaluate forecasts using a proper scoring rule, like Brier or logarithmic scores. For long-run questions, use reciprocal scoring to incentivize ac...
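For the short-run questions, the plan above mentions scoring forecasts with a proper scoring rule such as the Brier or logarithmic score. A minimal sketch of both rules for binary questions follows; the tournament's actual scoring pipeline is not described in the post, so this is just the textbook definition.

```python
import math

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.
    Lower is better; a maximally uncertain 0.5 forecast always scores 0.25."""
    return (forecast - outcome) ** 2

def log_score(forecast: float, outcome: int) -> float:
    """Negative log-likelihood of the outcome under the forecast.
    Lower is better; confident wrong forecasts are penalized heavily."""
    p = forecast if outcome == 1 else 1.0 - forecast
    return -math.log(p)

# Example: a 0.8 forecast on a question that resolves "yes".
print(brier_score(0.8, 1))  # ~0.04
print(log_score(0.8, 1))    # ~0.223
```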
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Momentum 2022 updates (we're hiring), published by arikagan on January 11, 2022 on The Effective Altruism Forum. Momentum (formerly Sparrow) is a venture-backed EA startup that aims to increase effective giving. We build donation pages that emphasize creative, recurring donations tied to moments in your life (e.g. offset your carbon footprint when you buy gas, or give to AI alignment every time you scroll Facebook) and we use behavioral science to nudge new donors to support EA charities. Our goal is to direct $1B per month to EA causes by 2030, growing EA funding by over an order of magnitude and becoming EA's largest funding source. We believe that relying exclusively on a few mega-funders is not a robust long-term strategy for EA. Yet effective giving is quite small - there are 7.4K active EAs, and 85K people have ever donated to GiveWell. In contrast, each year, 240M Americans donate and 11M people visit Charity Navigator alone. We intend to reach a large number of people who have not yet heard of EA but are open to making highly-effective donations. 82% of VC-backed startups fail, and Momentum may be no exception. However, the potential upside of creating a sustainable funding source for EA is so great that the expected value appears high. And there's evidence that we're on the right track. Momentum has moved over $10M with our software from 40,000 donors. In our mobile app, 87% of donations went to our recommended charities (including several longtermist ones). We have $4M in funding from both EA funders (e.g. Jaan Tallinn, Spencer Greenberg, Luke Ding) and venture investors (e.g. Mark Cuban, Eric Ries, On Deck), making us one of a handful of VC-backed EA startups. Our campaigns have received widespread attention from celebrities (Peter Singer, John Legend, MLK III) and the press (e.g. NYT, BBC, and Quartz). Our team recently grew from 3 to 9, and we have 6 more openings in product, growth, engineering, etc. For the right person, this could be very high impact and a great place to work. We'd love to hear from you - email ari@givemomentum.com or apply here. A huge thank you to Jade Leung, Ozzie Gooen, Aaron Gertler, Rebecca Kagan, George Rosenfeld, and Bill Zito for feedback. All opinions and mistakes are our own.
Table of contents: the product; impact; working at Momentum; FAQs; we're hiring.
The product
To increase the number of EA donors, we need to reach people outside of EA and encourage effective giving. We reach new donors by providing donation pages to a wide range of charities (regardless of effectiveness) that they market to their audience. After donors check out, we offer a portal that encourages additional effective donations.
Step 1: Reach donors with our donation pages
To reach donors outside of EA, we give charities free, white-labeled donation pages like this one or this one that they put on their website (like Shopify, but for nonprofits). Our page helps the charity acquire more recurring donors (who give 7x more) by tying personal actions and global events to automatic donations. You might give 5% to clean water when you buy a coffee, donate to BLM with each police shooting, or donate to stop Trump every time he tweets. We saw success with Defeat by Tweet (97% of 40K donors were recurring), so it looks promising that we can beat the industry average of 10% recurring.
Since increasing the number of recurring donors increases donation volume so much, charities leverage their marketing resources to direct traffic to their page.
Step 2: Nudge effective giving with our donor portal
To increase effective giving, we give donors a portal that encourages supporting effective charities. After the donor checks out on a donation page, we guide them (on the confirmation page or via email) to log in to the portal to track their giving, edit their donations, and see...
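Mechanically, the product described above comes down to rules that map a recurring trigger (a coffee purchase, a named public event) to an automatic donation. A minimal sketch of that rule structure with hypothetical names; it is not Momentum's actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class DonationRule:
    """A trigger-to-donation mapping of the kind described above (hypothetical)."""
    trigger: str       # e.g. "coffee_purchase", "police_shooting", "trump_tweet"
    charity: str       # destination charity
    amount_usd: float  # donated each time the trigger fires

def totals_for_events(rules: list[DonationRule], events: list[str]) -> dict[str, float]:
    """Sum the donations owed per charity for a batch of observed trigger events."""
    totals: dict[str, float] = {}
    for event in events:
        for rule in rules:
            if rule.trigger == event:
                totals[rule.charity] = totals.get(rule.charity, 0.0) + rule.amount_usd
    return totals

rules = [DonationRule("coffee_purchase", "Clean Water Fund", 0.25)]
print(totals_for_events(rules, ["coffee_purchase"] * 3))  # {'Clean Water Fund': 0.75}
```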
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GovAI Annual Report 2021, published by GovAI on January 5, 2022 on The Effective Altruism Forum. A Note from Ben Garfinkel, Acting Director 2021 was a year of big changes for GovAI. Most significantly, we spun out from the University of Oxford and adopted a new non-profit structure. At the same time, after five years within the organisation, I stepped into the role of Acting Director. Our founder, Allan Dafoe, shifted into the role of President. We also created a new advisory board consisting of several leaders in AI governance: Allan, Ajeya Cotra, Tasha McCauley, Toby Ord, and Helen Toner. Successfully managing these transitions was our key priority for the past year. Over a period of several months — through the hard work, particularly, of Markus Anderljung (Head of Policy), Alexis Carlier (Head of Strategy), and Anne le Roux (Head of Operations) — we completed our departure from Oxford, arranged a new fiscal sponsorship relationship with the Centre for Effective Altruism, revised our organisation chart and management structures, developed a new website and new branding, and fundraised $3.8 million to support our activities in the coming years. Our fundamental mission has not changed: we are still building a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI. However, these transitions have served as a prompt to reflect on our activities, ambitions, and internal structure. One particularly significant implication of our exit from Oxford has been an increase in the level of flexibility we have in hiring, fundraising, operations management, and program development. In the coming year, as a result, we plan to explore expansions in our field-building and policy development work. We believe we may be well-placed to organise an annual conference for the field, to offer research prizes, and to help bridge the gap between AI governance research and policy. Although 2021 was a transitional year, this did not prevent members of the GovAI team from producing a stream of new research. The complete list of our research output – given below – contains a large volume of research targeting different aspects of AI governance. One piece that I would like to draw particular attention to is Dafoe et al.'s paper Open Problems in Cooperative AI, which resulted in the creation of a $15 million foundation for the study of cooperative intelligence. Our key priority for the coming year will be to grow our team and refine our management structures, so that we can comfortably sustain a high volume of high-quality research while also exploring new ways to support the field. Hiring a Chief of Staff will be a central part of this project. We are also currently hiring for Research Fellows and are likely to open a research management role in the future. Please reach out to contact@governance.ai if you think you might be interested in working with us. I'd like to close with a few thank-yous. First, I would like to thank our funders for their generous support of our work: Open Philanthropy, Jaan Tallinn (through the Survival and Flourishing Fund), DALHAP Investments Limited (through Effective Giving), the Long-Term Future Fund, and the Center for Emerging Risk Research. Second, I would like to thank the Centre for Effective Altruism for its enormously helpful operational support. 
Third, I would like to thank the Future of Humanity Institute and the University of Oxford for having provided an excellent initial home for GovAI. Finally, I would like to express gratitude for everyone who's decided to focus their career on AI governance. It has been incredible watching the field's growth over the past five years. From many conversations, it is clear to me that many of the people now working in the field have needed to pay significant upfront costs...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The LessWrong Team is now Lightcone Infrastructure, come work with us!, published by habryka on LessWrong. tl;dr: The LessWrong team is re-organizing as Lightcone Infrastructure. LessWrong is one of several projects we are working on to ensure the future of humanity goes well. We are looking to hire software engineers as well as generalist entrepreneurs in Berkeley who are excited to build infrastructure to ensure a good future. I founded the LessWrong 2.0 team in 2017, with the goal of reviving LessWrong.com and reinvigorating the intellectual culture of the rationality community. I believed the community had great potential for affecting the long term future, but that the failing website was a key bottleneck to community health and growth. Four years later, the website still seems very important. But when I step back and ask "what are the key bottlenecks for improving the longterm future?", just ensuring the website is going well no longer seems sufficient. For the past year, I've been re-organizing the LessWrong team into something with a larger scope. As I've learned from talking to over a thousand of you over the last 4 years, for most of you the rationality community is much larger than just this website, and your contributions to the future of humanity more frequently than not route through many disparate parts of our sprawling diaspora. Many more of those parts deserve attention and optimization than just LessWrong, and we seem to be the best positioned organization to make sure that happens. I want to make sure that that whole ecosystem is successfully steering humanity towards safer and better futures, and more and more this has meant working on projects that weren't directly related to LessWrong.com:
- A bit over a year ago we started building grant-making software for Jaan Tallinn and the Survival and Flourishing Fund, helping distribute over 30 million dollars to projects that I think have the potential to have a substantial effect on ensuring a flourishing future for humanity.
- We helped run dozens of online meetups and events during the pandemic, and hundreds of in-person events for both this year's and 2019's ACX Meetups Everywhere.
- We helped build and run the EA Forum and the AI Alignment Forum.
- We recently ran a 5-day retreat for 60-70 people whose work we think is highly impactful in reducing the likelihood of humanity's extinction.
- We opened an in-person office space in the Bay Area for organizations that are working towards improving the long-term future of humanity.
As our projects outside of the LessWrong.com website multiplied, our name became more and more confusing when trying to explain to people what we were about. This confusion reached a new peak when we started having a team that we were internally calling the "LessWrong team", which was responsible for running the website, distinct from all of our other projects, and which soon after caused me to utter the following sentence at one of our team meetings: "LessWrong really needs to figure out what the LessWrong team should set as a top priority for LessWrong." As one can imagine, the reaction from the rest of the team was confusion and laughter, and at that point I knew we had to change our name and clarify our organizational mission.
So, after doing many rounds of coming up with names, asking many of our colleagues and friends (including GPT-3) for suggestions, we finally decided on: Lightcone Infrastructure. I like the light cone as a symbol, because it represents the massive scale of opportunity that humanity is presented with. If things go right, we can shape almost the full light cone of humanity to be full of flourishing life. Billions of galaxies, billions of light years across, for some 10^36 (or so) years until the heat death of the universe. Separately, I am excited about where Lightcone Infrastructure is head...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reply to Holden on 'Tool AI', published by Eliezer Yudkowsky on LessWrong. I begin by thanking Holden Karnofsky of GiveWell for his rare gift of his detailed, engaged, and helpfully-meant critical article Thoughts on the Singularity Institute (SI). In this reply I will engage with only one of the many subjects raised therein, the topic of, as I would term them, non-self-modifying planning Oracles, a.k.a. 'Google Maps AGI' a.k.a. 'tool AI', this being the topic that requires me personally to answer. I hope that my reply will be accepted as addressing the most important central points, though I did not have time to explore every avenue. I certainly do not wish to be logically rude, and if I have failed, please remember with compassion that it's not always obvious to one person what another person will think was the central point. Luke Muehlhauser and Carl Shulman contributed to this article, but the final edit was my own, likewise any flaws. Summary: Holden's concern is that "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." His archetypal example is Google Maps: Google Maps is not an agent, taking actions in order to maximize a utility parameter. It is a tool, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish. The reply breaks down into four heavily interrelated points: First, Holden seems to think (and Jaan Tallinn doesn't apparently object to, in their exchange) that if a non-self-modifying planning Oracle is indeed the best strategy, then all of SIAI's past and intended future work is wasted. To me it looks like there's a huge amount of overlap in underlying processes in the AI that would have to be built and the insights required to build it, and I would be trying to assemble mostly - though not quite exactly - the same kind of team if I was trying to build a non-self-modifying planning Oracle, with the same initial mix of talents and skills. Second, a non-self-modifying planning Oracle doesn't sound nearly as safe once you stop saying human-English phrases like "describe the consequences of an action to the user" and start trying to come up with math that says scary dangerous things like (when translated into English) "increase the correspondence between the user's belief about relevant consequences and reality". Hence the people on the team would have to solve the same sorts of problems. Appreciating the force of the third point is a lot easier if one appreciates the difficulties discussed in points 1 and 2, but it is actually empirically verifiable independently: Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov. At one point, Holden says: One of the things that bothers me most about SI is that there is practically no public content, as far as I can tell, explicitly addressing the idea of a "tool" and giving arguments for why AGI is likely to work only as an "agent." If I take literally that this is one of the things that bothers Holden most...
I think I'd start stacking up some of the literature on the number of different things that just respectable academics have suggested as the obvious solution to what-to-do-about-AI - none of which would be about non-self-modifying smarter-than-human planning Oracles - and beg him to have some compassion on us for what we haven't addressed yet. It might be the right suggestion, but it's not so obviously right that our failure to prioritize discussing it refl...
I spoke with the great Carlos Sánchez, Country Manager of Xolo. Don't miss this relaxed but insight-packed conversation about the entrepreneurship of the future. Anything coming out of Estonia excites me, even more so when it is tied to e-Residency, Estonia's digital residency that lets freelancers invoice and hold bank accounts in that country. Out of that was born this gem of a company called Xolo, an impressive product. Look at the numbers. ● 75,000 solo entrepreneurs have registered from more than 130 countries. ● Main countries: Spain, Greece, Italy, France, Germany, Portugal ● 1,100,000,000 EUR in transactions processed by solopreneurs ● 300,000 clients invoiced through Xolo ● 11.1 million euros in venture capital raised, including from Karma Ventures and Vendep Capital, Wise (Transferwise) co-founder Taavet Hinrikus, and Skype co-founder Jaan Tallinn. Spain ● ~6,000 solo entrepreneurs from Spain (~10% of the total) ● ~70% Spanish citizens - ~30% expats living in Spain ● Main areas: Madrid, Barcelona, Valencia, Sevilla, Málaga ● Main fields: digital services; marketing (content, video production, digital marketing). They are headquartered in Tallinn, Estonia, with remote teams in Helsinki, Berlin, Madrid and Barcelona. Carlos, another great inventor. See acast.com/privacy for privacy and opt-out information.
Jaan Tallinn is a founding engineer of Skype and Kazaa. He is a co-founder of the Cambridge Centre for the Study of Existential Risk, Future of Life Institute, and philanthropically supports other existential risk research organisations. Jaan is on the Board of Sponsors of the Bulletin of the Atomic Scientists (thebulletin.org), and has served on …
Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century. Topics discussed in this episode include: -Intelligence and coordination -Existential risk from AI, synthetic biology, and unknown unknowns -AI adoption as a delegation process -Jaan's investments and philanthropic efforts -International coordination and incentive structures -The short-term and long-term AI safety communities You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:29 How can humanity improve? 3:10 The importance of intelligence and coordination 8:30 The bottlenecks of input and output bandwidth as well as processing speed between AIs and humans 15:20 Making the creation of AI feel dangerous and how the nuclear power industry killed itself by downplaying risks 17:15 How Jaan evaluates and thinks about existential risk 18:30 Nuclear weapons as the first existential risk we faced 20:47 The likelihood of unknown unknown existential risks 25:04 Why Jaan doesn't see nuclear war as an existential risk 27:54 Climate change 29:00 Existential risk from synthetic biology 31:29 Learning from mistakes, lacking foresight, and the importance of generational knowledge 36:23 AI adoption as a delegation process 42:52 Attractors in the design space of AI 44:24 The regulation of AI 45:31 Jaan's investments and philanthropy in AI 55:18 International coordination issues from AI adoption as a delegation process 57:29 AI today and the negative impacts of recommender algorithms 1:02:43 Collective, institutional, and interpersonal coordination 1:05:23 The benefits and risks of longevity research 1:08:29 The long-term and short-term AI safety communities and their relationship with one another 1:12:35 Jaan's current philanthropic efforts 1:16:28 Software as a philanthropic target 1:19:03 How do we move towards beneficial futures with AI? 1:22:30 An idea Jaan finds meaningful 1:23:33 Final thoughts from Jaan 1:25:27 Where to find Jaan This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
What does the future hold? A reign of world peace with stunning medical breakthroughs conquering death, illness and disease? Or a world where human beings have destroyed the web of living things and put our own existence at risk by playing with science we don’t fully understand? Must we think in terms of these extremes to create a positive future or prevent disaster? Join a panel of brilliant optimists and pessimists to understand some of the amazing risks and opportunities that lie before us. Tim Flannery is an Australian scientist, activist, author and editor of over twenty books, former Chief Scientist at the federal Climate Commission, and currently leader of the independent Climate Council. Elizabeth Kolbert is an American environmental journalist and author. She is a staff writer at the New Yorker and the author of several books, including Field Notes from a Catastrophe and The Sixth Extinction: An unnatural history. Steven Pinker is an experimental psychologist and one of the world’s foremost writers on language, mind, and human nature. He is currently Johnstone Family Professor of Psychology at Harvard University and his most recent book is The Better Angels of Our Nature: Why violence has declined. Jaan Tallinn is a founding engineer of Skype and Kazaa, a co-founder of personalised medicine company MetaMed, and a co-founder of the Centre for Study of Existential Risk at Cambridge.
It feels as if for the past week the entire internet has been talking about just one technology topic, and that is GPT-3. We invited Jaan Tallinn into the studio to help us understand what exactly GPT-3 is, what all the hype is about, and whether artificial intelligence has now taken a big step forward or whether everything is still as it was before. The show is hosted by Tiit Paananen from Veriff and Priit Liivak from Nortal.
Founder, investor and philanthropist Jaan Tallinn joins Wolf Tivy and Ash Milton to discuss the frontier of artificial intelligence research and what an A.I. future means for humanity. Jaan Tallinn is a founding engineer of Skype and Kazaa. He is a co-founder of the Cambridge Centre for the Study of Existential Risk, Future of Life Institute, and philanthropically supports other existential risk research organizations. Jaan is on the Board of Sponsors of the Bulletin of the Atomic Scientists, member of the High-Level Expert Group on AI at the European Commission, and has served on the Estonian President's Academic Advisory Board. He is also an active angel investor, a partner at Ambient Sound Investments, and a former investor director of the AI company DeepMind.
How did Ahti end up at GAG? Why has our generation been lucky? What had Hannu written by 1987? What did Jaan write out on paper over the summer? How did Bluemoon Software come about? What do software time estimates need to be multiplied by? How was a computer game sold to Sweden while the Soviet Union still existed? How was Soundclub born? How were the instruments brought into tune? How often does Skyroads fan mail still arrive? How was shareware software paid for? How do you negotiate without legal leverage? What does optimal science look like? How did Bluemoon survive its publisher's bankruptcy? How was everyday.com written? Why is making games great? What was software development like in the nineties? Where did Ahti draw his wisdom from? What is the fundamental principle of delegation?
Steve talks with Skype founder and global tech investor Jaan Tallinn. Will the coronavirus pandemic lead to better planning for future global risks? Jaan gives his list of top existential risks and describes his efforts to call attention to AI risk. They discuss AGI, the Simulation Question, the Fermi Paradox and how these are all connected. Do we live in a simulation of a quantum multiverse? Links: Rationality, Jaan, X-Risk, LessWrong, Slate Star Codex, Metaculus. Additional resources: Transcript; Fermi Paradox — Where Are All The Aliens?; Is Hilbert space discrete?
In this week's episode there is an interview with Jaan Tallinn, who helped create Skype and Kazaa. But his career began with making games. During the 90s, Jaan and his company Bluemoon released, among other titles, the game Skyroads, which was one of my favorites growing up. These days Jaan travels around the world giving talks on the risks of AI and other existential risks. Happy listening! Links: Skyroads, Surfing Moore's Law, MSX, Kosmonaut, Trailblazer, The Art of Computer Programming, 286, MUD, Sound Club, Interactive Magic, MicroProse, Thunder Brigade, Bluemoon, Ahti Heinla, Napster, Kazaa, Starship, Effective Altruism, Existential Risk, DeepMind, AlphaGo, Recommender Systems, Max Tegmark, Nick Bostrom, Demis Hassabis, Stuart Russell
Meet Jaan Tallinn – an Estonian hacker, physicist, entrepreneur, philanthropist and investor, who is known for being the co-founder of Kazaa, a file-sharing platform, and his participation in the development of Skype that took communication to a whole new level. Since then, he has co-founded many more companies, been an advisor to the President of Estonia and focused heavily on effective altruism and mitigating existential risks from AI. Insights: How programming nourishes cognitive abilities. The drastic contrast between gaming twenty to thirty years ago and gaming today. Why programming and other tech-related skills were more valued or exciting in the past. How Kazaa and Skype became successful – luck and effort combined. We are the dominant species on Earth because we are the best planners on Earth. The reason many species have gone extinct is that we modify the environment to suit our needs while making it uninhabitable for them. Superhuman AI may do the same to us. How creating machines that are better planners than us can threaten our dominance on Earth. Incorporating our idea of a good future into those machines, or aligning their ideas with ours, can help combat this issue. Unpredictability, once introduced into AI, can pose a huge threat to the human race. Jaan: Future of Life Institute (https://futureoflife.org/) Effective Altruism (https://app.effectivealtruism.org/funds/far-future) 80,000 Hours project (https://80000hours.org/) ABOUT THE HOST My name is Sam Harris. I am a British entrepreneur, investor and explorer. From hitchhiking across Kazakhstan to programming AI doctors, I am always pushing myself in the spirit of curiosity and growth. My background is in Biology and Psychology with a passion for improving the world and human behaviour. I have built and sold companies from an early age and love coming up with unique ways to make life more enjoyable and meaningful. Sam: Instagram (https://www.instagram.com/samjamsnaps/) Quora (https://www.quora.com/profile/Sam-Harris-58) Twitter (https://twitter.com/samharristweets) LinkedIn (https://www.linkedin.com/in/sharris48/) Sam's blog - SamWebsterHarris.com (https://samwebsterharris.com/) Support the Show - Patreon (https://www.patreon.com/growthmindset) Sponsor - BetterHelp Life is hard and it helps to talk. You can meet online with a qualified therapist tailored to your needs. Specific help for exactly your problems often doesn't exist nearby or requires you to meet someone at awkward times. You can meet the perfect therapist for you at the most convenient times online through the app on video, phone call or text chat. To get 10% off your first-month membership, sign up now with the code 'GROWTHMINDSET' https://BetterHelp.com/GrowthMindset Subscribe! If you enjoyed the podcast please subscribe and rate it. And of course, share with your friends! Special Guest: Jaan Tallinn.
John Thornhill talks to Jaan Tallinn, founding engineer at Skype and Kazaa, about his subsequent career as a tech investor and his concerns about AI safety. See acast.com/privacy for privacy and opt-out information.
Most charity is focused on the near term. So what happens when you try to only give to charities that will help humans a long time from now — not just in 100 years, but in a million years? To find out, we talk to Jaan Tallinn, a founding engineer of Skype who is trying to force the world to take threats to the future, threats like AI, seriously. Tallinn explains his concern with AI at an effective altruism conference. Kelsey Piper explains the risks of unconstrained AI. AI experts on when they expect AI to outpace human intelligence. Ted Chiang's critique of concern with AI safety. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this podcast episode, Risto Uuk spoke with Jaan Tallinn, Oliver Laas and Tanel Tammet. Jaan Tallinn is a co-founder of Skype and Kazaa and also co-founded the existential risk research organizations Centre for the Study of Existential Risk and Future of Life Institute. Oliver Laas is a visiting lecturer in philosophy at several institutions, including the School of Humanities at Tallinn University. Tanel Tammet is a computer scientist and professor of network software at Tallinn University of Technology. They discussed how much we should worry about the dangers of artificial intelligence and what we should do to reduce them. Sources that supported the conversation or came up in it: - An organization studying existential risks, the Future of Life Institute: https://futureoflife.org/ - Myths about the dangers of AI: https://futureoflife.org/background/aimyths/ - Nick Bostrom's survey of AI experts: https://nickbostrom.com/papers/survey.pdf - Another survey of AI experts' views on when human-level AI will be achieved: https://arxiv.org/pdf/1705.08807.pdf - Allan Dafoe on AI policymaking: https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/ - Alan Turing on the dangers of AI: https://aperiodical.com/wp-content/uploads/2018/01/Turing-Can-Computers-Think.pdf - Max Tegmark's book "Life 3.0": https://www.rahvaraamat.ee/p/life-3-0/1040763/et?isbn=9780141981802# - Why prioritize the long-term future: https://reg-charity.org/why-we-prioritize-the-long-term-future/ - An overview of problems in AI: https://arxiv.org/pdf/1805.01109.pdf - Bigger and smaller AI mishaps: https://novaator.err.ee/842085/graafikulugu-tehisintellekti-13-suuremat-ja-vaiksemat-apardust - The career advice website 80,000 Hours: https://80000hours.org/ - Thomas Metzinger on an anti-natalist AI risk scenario: https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan
The AI In Industry podcast is often conducted over Skype, and this week's guest happens to be one of its early developers. Jaan Tallinn is recognized as one of the technical leads behind Skype as a platform. I met Jaan while we were both doing round table sessions at the World Government Summit, and in this episode, I talk to Tallinn about a topic that we often don't get to cover on the podcast: the consequences of artificial general intelligence. Where's this going to take humanity in the next hundred years?
Jaan Tallinn has been an effective altruist for many years, and has used his reputation and personal funds to support the study of existential risk. In this fireside chat from Effective Altruism Global 2018: San Francisco, moderated by Nathan Labenz, he discusses how his views on AI have become more complex, which sorts of organizations …
In this installment of the summer series of Maailmanpolitiikan arkipäivää, we consider the risks and opportunities of artificial intelligence. What happens to humans if we succeed in creating a machine more intelligent than ourselves? The interviewee is the Estonian AI researcher, investor and programmer Jaan Tallinn. Journalist Silja Massa met Jaan Tallinn in his home city of Tallinn. Photo: Silja Massa
How many people do you know who have created a verb? We 'google' things, we 'uber' to places,...
The robots are here and one company, Starship Technologies, has raised $25 million to bring even more to the mainstream. This latest round of funding includes a follow-on investment from Matrix Partners and Morpheus Ventures. New investors include Airbnb co-founder Nathan Blecharczyk, Skype founding engineer Jaan Tallinn and others. These autonomous robots can carry items, like groceries or packages, within a two-mile radius.
Jaan Tallinn: Skype co-founder talks about artificial intelligence being our biggest existential threat to humans as a species.
Jaan Tallinn is a co-founder of Skype and Kazaa and a long-time angel investor. More recently, Jaan has concerned himself with preventing the destruction of humanity. Strong artificial intelligence is the least understood of threats, and this is Jaan's chief concern. slatestarcodex.com/2014/07/30/meditations-on-moloch slatestarcodex.com/2016/05/30/ascended-economy cser.org futureoflife.org etherreview.info Content: Jaan Tallinn, Siri, Arthur Falls
Jaan Tallinn, co-founder of Skype and Kazaa, got so famous in his homeland of Estonia that people named the biggest city after him. Well, that latter part may not be exactly true but there are few people today who have not used, or at least heard of, Skype or Kazaa. What is much less known, […]
Source: Future of Humanity Institute (original video).
JAAN TALLINN (https://www.edge.org/memberbio/jaan_tallinn) is a co-founder of The Centre for the Study of Existential Risk at the University of Cambridge, UK, as well as The Future of Life Institute in Cambridge, MA. He is also a founding engineer of Kazaa and Skype. The Conversation: https://www.edge.org/conversation/jaan_tallinn-existential-risk
This time Restart talks about lighting for feature films and TV shows. The Estonian startup Digital Sputnik has taken a new, more modern approach to the field and is making waves in America with it. To grow faster, Digital Sputnik has raised over half a million dollars from investors. Among the investors are Skype founder Jaan Tallinn and several other well-known and well-funded Estonian IT startup founders. (Henrik Aavik, Taavi Kotka.)
Looking into the future, we can see the possibility of severe occurrences that threaten human extinction. Until recently, we haven't taken this seriously and are therefore putting the future of humanity at risk. When looking at existential risk, there is a difference between natural disasters such as asteroids and the human-created risks inherent in the rapid advancements of areas like artificial intelligence and nanotechnology. No one wants to stop science, but if we want to create a sustainable future, we need to understand these risks as fully as we can so that we can balance the benefits of scientific discovery and innovation and protect ourselves from existential risk. Huw Price is the Bertrand Russell Professor of Philosophy at Cambridge and a co-founder of the Centre for Study of Existential Risk at the University of Cambridge. Jaan Tallinn is a founding engineer of Skype and Kazaa as well as co-founder of MetaMed, a personalized medical research company. He is a co-founder of the Centre for Study of Existential Risk at the University of Cambridge.