POPULARITY
This episode, we're doing something a little different. We are taking a pause from our narrative series to hear from you, the listeners. We asked you to send in voicemails with your feelings on Maud and her work, and you answered. A big thanks to Mike DiMascio, Irina Levchenko, Rebecca Sullivan, Laura Leden, Gabrielle Fortier, Naomi Burger, Kelly Gerner & Ragon Duffy, and Lois Adamson for your contributions to the show.
Here is a link for information regarding the book collection rebuild for Kelly Gerner, following the LA wildfires.
Follow us on our socials: Bluesky, LinkedIn, YouTube & Substack
About This Series: Discover the life and legacy of L.M. Montgomery in this insightful 7-part podcast, in which we explore her childhood, literary journey, and the timeless impact of Anne of Green Gables on generations of readers.
Written & Hosted by Ryan Barnett
Produced by Ryan Barnett & Sonia Gemmiti
Associate Producers Maia Foster-Sanchez & Kristi Prophet
Recorded by Tyler Rauman
This series features interviews with Kate Macdonald Butler, Kate Scarth, Jessica Young, Laura Robinson & Yuko Matsumoto.
Additional voices by Candace Amarante, Matthew Barnett & Becca Redden
A Knockabout Media Production
This podcast was made possible thanks to funding from the Government of Canada.
Hosted on Acast. See acast.com/privacy for more information.
Maud faces some of the toughest years of her life: nuisance lawsuits, an unwanted pursuer, social ostracism, personal humiliations, and drug dependency colour her final days.
Regional Mental Health Resources.
A call to listeners: Working on this podcast has been rewarding, but also heartrending, because Maud lived a life. And we want to do something that celebrates that life and its legacy. This is where you come in. We're asking you to record a voice memo expressing your thoughts or feelings on Maud or any bit of her writing that resonates, and send it to ryan@knockaboutmedia.com by February 28th for a chance to be featured in an upcoming episode.
CONNECT, GROW, GIVE at https://theassembly.org/
Final Hour Fun Fact. Quick Hits. Matthew Barnett, the founder of the Dream Center, on the massive community movement taking place to help those devastated by the fires. Our friend Steve Van Doren checks in, as Vans wants to help out in the community.
Senior Pastor and co-founder of the Dream Center Matthew Barnett joins the Painful Lessons Podcast in a very approachable conversation about growing up as a famous pastor's son, starting his own ministry in Los Angeles, providing shelter and rehabilitation to addicts, scandals in religion, dealing with hardships and grief, impacting so many lives, and much more.
You can visit the Dream Center's website here: https://www.dreamcenter.org/
Subscribe to the podcast on Apple: https://podcasts.apple.com/us/podcast/painful-lessons/id1729973942
Follow the podcast on Spotify: https://open.spotify.com/show/6Z65Jr9DcOliiRBcfFFKFP
Subscribe to the YouTube channel: https://www.youtube.com/channel/UCkC06HhkZIfoqk_n1E9MOiA/featured?sub_confirmation=1
00:00 - 02:45 Tyler introduces his hero, Matthew Barnett, pastor and founder of the Dream Center
02:46 - 07:41 Matthew explains the Dream Center and how it started
07:42 - 10:00 Tyler shares his rehab experience
10:01 - 13:55 Matthew highlights the importance of forgiveness to people
13:56 - 17:15 Matthew shares his first impressions working with Tyler
17:16 - 18:44 How does Matthew handle losses in rehab?
18:45 - 25:15 Matthew talks about almost quitting
25:16 - 25:49 How often do pastors think about swearing?
25:50 - 31:09 Inspiration from David in the Bible
31:10 - 38:10 What can people learn from David?
38:11 - 41:27 The importance of having humility
41:28 - 47:52 Thoughts about who does and doesn't go to heaven
47:53 - 51:00 A pastor's thoughts on scandals in religion
51:01 - 54:20 Gangs protecting the church
54:20 - 55:03 Closing the show
Back in the early 1990s, Matthew Barnett thought he was headed to Los Angeles for a short-term mission trip at a small church. Thirty years later, he's still there, but the operation is much larger. The result is the Los Angeles Dream Center, a massive operation that includes a 400,000-square-foot facility and a church, which serves the homeless, helps the drug-addicted get free of addiction, and sees in all people the image of God. The sad news is that, despite the center's successes, the city of Los Angeles and the state government refuse to partner with them because they are faith-based. While the city of Los Angeles announced it would spend $1 billion on the problem of homelessness, the equivalent of an estimated $50,000 for each homeless person, Barnett says it costs the center $7,500 to rehabilitate someone out of drug addiction. On this episode of Lighthouse Faith, Barnett talks about the triumph of transforming lives when God is at the center of the process. He says they're not just ministering to people's needs; they're ministering to people's potential. God-given dreams give people hope that there's something greater to live for. Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Analyzing the moral value of unaligned AIs, published by Matthew Barnett on April 8, 2024 on The Effective Altruism Forum. A crucial consideration in assessing the risks of advanced AI is the moral value we place on "unaligned" AIs - systems that do not share human preferences - which could emerge if we fail to make enough progress on technical alignment. In this post I'll consider three potential moral perspectives, and analyze what each of them has to say about the normative value of the so-called "default" unaligned AIs that humans might eventually create:
1. Standard total utilitarianism combined with longtermism: the view that what matters most is making sure the cosmos is eventually filled with numerous happy beings.
2. Human species preservationism: the view that what matters most is making sure the human species continues to exist into the future, independently from impartial utilitarian imperatives.
3. Near-termism or present-person affecting views: what matters most is improving the lives of those who currently exist, or will exist in the near future.
I argue that from the first perspective, unaligned AIs don't seem clearly bad in expectation relative to their alternatives, since total utilitarianism is impartial to whether AIs share human preferences or not. A key consideration here is whether unaligned AIs are less likely to be conscious, or less likely to bring about consciousness, compared to alternative aligned AIs. On this question, I argue that there are considerations both ways, and no clear answers. Therefore, it tentatively appears that the normative value of alignment work is very uncertain, and plausibly neutral, from a total utilitarian perspective. However, technical alignment work is much more clearly beneficial from the second and third perspectives. This is because AIs that share human preferences are likely to both preserve the human species and improve the lives of those who currently exist. However, in the third perspective, pausing or slowing down AI is far less valuable than in the second perspective, since it forces existing humans to forego benefits from advanced AI, which I argue will likely be very large. I personally find moral perspectives (1) and (3) most compelling, and by contrast find view (2) to be uncompelling as a moral view. Yet it is only from perspective (2) that significantly delaying advanced AI for alignment reasons seems clearly beneficial, in my opinion. This is a big reason why I'm not very sympathetic to pausing or slowing down AI as a policy proposal. While these perspectives do not exhaust the scope of potential moral views, I think this analysis can help to sharpen what goals we intend to pursue by promoting particular forms of AI safety work.
Unaligned AIs from a total utilitarian point of view
Let's first consider the normative value of unaligned AIs from the first perspective. From a standard total utilitarian perspective, entities matter morally if they are conscious (under hedonistic utilitarianism) or if they have preferences (under preference utilitarianism). From this perspective, it doesn't actually matter much intrinsically if AIs don't share human preferences, so long as they are moral patients and have their preferences satisfied. The following is a prima facie argument that utilitarians shouldn't care much about technical AI alignment work.
Utilitarianism is typically not seen as partial to human preferences in particular. Therefore, efforts to align AI systems with human preferences - the core aim of technical alignment work - may be considered morally neutral from a utilitarian perspective. The reasoning here is that changing the preferences of AIs to better align them with the preferences of humans doesn't by itself clearly seem to advance the aims of utilitarianism, in the sense of filling the cosmos w...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's with all the bans recently?, published by Gerald Monroe on April 4, 2024 on LessWrong. Summary: the moderators appear to be soft banning users with 'rate-limits' without feedback. A careful review of each banned user reveals it's common to be banned despite earnestly attempting to contribute to the site. Some of the most intelligent banned users have mainstream instead of EA views on AI. Note how the punishment lengths are all the same, I think it was a mass ban-wave of 3 week bans: Gears to ascension was here but is no longer, guess she convinced them it was a mistake. Have I made any like really dumb or bad comments recently: https://www.greaterwrong.com/users/gerald-monroe?show=comments Well I skimmed through it. I don't see anything. Got a healthy margin now on upvotes, thanks April 1. Over a month ago, I did comment this stinker. Here is what seems to be the same take by a very high reputation user here, @Matthew Barnett, on X: https://twitter.com/MatthewJBar/status/1775026007508230199 Must be a pretty common conclusion, and I wanted this site to pick an image that reflects their vision. Like flagpoles with all the world's flags (from coordination to ban AI) and EMS uses cryonics (to give people an alternative to medical ASI). I asked the moderators: @habryka says: I skimmed all comments I made this year, can't find anything that matches to this accusation. What comment did this happen on? Did this happen once or twice or 50 times or...? Any users want to help here, it surely must be obvious. You can look here: https://www.greaterwrong.com/users/gerald-monroe?show=comments if you want to help me find what habryka could possibly be referring to. I recall this happening once, Gears called me out on it, and I deleted the comment. Conditional on this not having happened this year, why wasn't I informed or punished or something then? Skimming the currently banned user list: Let's see why everyone else got banned. Maybe I can infer a pattern from it:
Akram Choudhary: 2 per comment and 1 post at -25. Taking the doomer view here.
frankybegs: +2.23 karma per comment. This is not bad. Does seem to make comments personal. Decided to enjoy the site and make 16 comments 6-8 days ago. Has some healthy karma on the comments, +6 to +11. That's pretty good by lesswrong standards. No AI views. Ban reason is???
Victor Ashioya: His negative karma doesn't add up to -38, not sure why. AI view is in favor of red teaming, which is always good.
@Remmelt: doomer view, good karma (+2.52 karma per comment), hasn't made any comments in 17 days...why rate limit him? Skimming his comments they look nice and meaty and well written...what? All I can see is that over the last couple of months he's not getting many upvotes per comment.
green_leaf: Ok, at least I can explain this one. One comment at -41 in the last 20; green_leaf rarely comments. doomer view.
PeteJ: Tries to use humanities knowledge to align AI; apparently the readerbase doesn't like it. Probably won't work, banned for trying.
@StartAtTheEnd: 1.02 karma per comment, a little low, may still be above the bar. Not sure what he did wrong, comments are a bit long? doomer view, lots of downvotes.
omnizoid: Seems to just be running a low vote total. People didn't like a post justifying religion.
@MiguelDev: Why rate limited? This user seems to be doing actual experiments.
Karma seems a little low but I can't find any big downvote comments or posts recently.
@RomanS: Overall karma isn't bad, 19 upvotes on the most recent post. Seems to have a heavily downvoted comment that's the reason for the limit.
@shminux: This user has contributed a lot to the site. One comment heavily downvoted; the algorithm looks at the last 20. It certainly feels that way from the receiving end. 2.49 karma per comment, not bad.
Cube: Tries to apply Bayes' rule in several comments, I see a coup...
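The post above keeps reasoning in terms of "karma per comment" and the "last 20" comments or posts. Purely as a reading aid, here is a minimal sketch of that kind of windowed-average heuristic; the function names, window size, and threshold are my own illustrative assumptions, not LessWrong's actual rate-limiting algorithm.

```python
# Illustrative sketch of the "recent average karma" heuristic the author is
# reasoning about. The window size and threshold are hypothetical; this is
# not the actual LessWrong/GreaterWrong moderation algorithm.

def recent_average_karma(karma_history: list[int], window: int = 20) -> float:
    """Average karma over the user's most recent `window` comments/posts."""
    recent = karma_history[-window:]
    return sum(recent) / len(recent) if recent else 0.0

def would_be_rate_limited(karma_history: list[int],
                          window: int = 20,
                          threshold: float = 1.0) -> bool:
    """Hypothetical rule: rate-limit users whose recent average falls below a bar."""
    return recent_average_karma(karma_history, window) < threshold

# Example: one -41 comment among otherwise modest scores drags the last-20
# average down, which matches the post's guess about why users with decent
# overall records can still trip the limit.
history = [3, 5, 2, 4, 1, 6, 2, 3, 4, 2, 5, 3, 2, 4, 1, 3, 2, 5, 3, -41]
print(recent_average_karma(history))   # 0.95
print(would_be_rate_limited(history))  # True
```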
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifying two uses of "alignment", published by Matthew Barnett on March 10, 2024 on The Effective Altruism Forum. Paul Christiano once clarified AI alignment as follows: When I say an AI A is aligned with an operator H, I mean: A is trying to do what H wants it to do. This definition is clear enough for many purposes, but it leads to confusion when one wants to make a point about two different types of alignment:
1. A is trying to do what H wants it to do because A is trading or cooperating with H on a mutually beneficial outcome for the both of them. For example, H could hire A to perform a task, and offer a wage as compensation.
2. A is trying to do what H wants it to do because A has the same values as H - i.e. its "utility function" overlaps with H's utility function - and thus A intrinsically wants to pursue what H wants it to do.
These cases are important to distinguish because they have dramatically different consequences for the difficulty and scope of alignment. To solve alignment in sense (1), A and H don't necessarily need to share the same values with each other in any strong sense. Instead, the essential prerequisite seems to be for A and H to operate in an environment in which it's mutually beneficial to them to enter contracts, trade, or cooperate in some respect. For example, one can imagine a human hiring a paperclip maximizer AI to perform work, paying them a wage. In return the paperclip maximizer could use their wages to buy more paperclips. In this example, the AI performed their duties satisfactorily, without any major negative side effects resulting from their differing values, and both parties were made better off as a result. By contrast, alignment in the sense of (2) seems far more challenging to solve. In the most challenging case, this form of alignment would require solving extremal goodhart, in the sense that A's utility function would need to be almost perfectly matched with H's utility function. Here, the idea is that even slight differences in values yield very large differences when subject to extreme optimization pressure. Because it is presumably easy to make slight mistakes when engineering AI systems, by assumption, these mistakes could translate into catastrophic losses of value.
Effect on alignment difficulty
My impression is that people's opinions about AI alignment difficulty often come down to differences in how much they think we need to solve the second problem relative to the first problem, in order to get AI systems that generate net-positive value for humans. If you're inclined towards thinking that trade and compromise is either impossible or inefficient between agents at greatly different levels of intelligence, then you might think that we need to solve the second problem with AI, since "trading with the AIs" won't be an option. My understanding is that this is Eliezer Yudkowsky's view, and the view of most others who are relatively doomy about AI. In this frame, a common thought is that AIs would have no need to trade with humans, as humans would be like ants to them. On the other hand, you could be inclined - as I am - towards thinking that agents at greatly different levels of intelligence can still find positive sum compromises when they are socially integrated with each other, operating under a system of law, and capable of making mutual agreements.
In this case, you might be a lot more optimistic about the prospects of alignment. To sketch one plausible scenario here, if AIs can own property and earn income by selling their labor on an open market, then they can simply work a job and use their income to purchase whatever it is they want, without any need to violently "take over the world" to satisfy their goals. At the same time, humans could retain power in this system through capital ownership and other gran...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against most AI risk analogies, published by Matthew Barnett on January 14, 2024 on LessWrong. I dislike most AI risk analogies that I've seen people use. While I think analogies can be helpful for explaining a concept to people for the first time, I think they are frequently misused, and often harmful. The fundamental problem is that analogies are consistently mistaken for, and often deliberately intended as, arguments for particular AI risk positions. And the majority of the time when analogies are used this way, I think they are misleading and imprecise, routinely conveying the false impression of a specific, credible model of AI, even when no such credible model exists. Here is a random list of examples of analogies that I found in the context of AI risk:
Stuart Russell: "It's not exactly like inviting a superior alien species to come and be our slaves forever, but it's sort of like that."
Rob Wiblin: "It's a little bit like trying to understand how octopuses are going to think or how they'll behave - except that octopuses don't exist yet, and all we get to do is study their ancestors, the sea snail, and then we have to figure out from that what's it like to be an octopus."
Eliezer Yudkowsky: "The character this AI plays is not the AI. The AI is an unseen actress who, for now, is playing this character. This potentially backfires if the AI gets smarter."
Nate Soares: "My guess for how AI progress goes is that at some point, some team gets an AI that starts generalizing sufficiently well, sufficiently far outside of its training distribution, that it can gain mastery of fields like physics, bioengineering, and psychology [...] And in the same stroke that its capabilities leap forward, its alignment properties are revealed to be shallow, and to fail to generalize."
Norbert Wiener: "when a machine constructed by us is capable of operating on its incoming data at a pace which we cannot keep, we may not know, until too late, when to turn it off. We all know the fable of the sorcerer's apprentice..."
Geoffrey Hinton: "It's like nuclear weapons. If there's a nuclear war, we all lose. And it's the same with these things taking over."
Joe Carlsmith: "I think a better analogy for AI is something like an engineered virus, where, if it gets out, it gets harder and harder to contain, and it's a bigger and bigger problem."
Ajeya Cotra: "Corporations might be a better analogy in some sense than the economy as a whole: they're made of these human parts, but end up pretty often pursuing things that aren't actually something like an uncomplicated average of the goals and desires of the humans that make up this machine, which is the Coca-Cola Corporation or something."
Ezra Klein: "As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal."
SKLUUG: "AI risk is like Terminator! AI might get real smart, and decide to kill us all! We need to do something about it!"
These analogies cover a wide scope, and many of them can indeed sometimes be useful in conveying meaningful information. My point is not that they are never useful, but rather that these analogies are generally shallow and misleading. They establish almost nothing of importance about the behavior and workings of real AIs, but nonetheless give the impression of a model for how we should think about AIs.
And notice how these analogies can give an impression of a coherent AI model even when the speaker is not directly asserting it to be a model. Regardless of the speaker's intentions, I think the actual effect is frequently to plant a detailed-yet-false picture in the audience's mind, giving rise to specious ideas about how real AIs will operate in the future. Plus, these analogies are frequently chosen selectively - picked on the basis of ev...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI values will be shaped by a variety of forces, not just the values of AI developers, published by Matthew Barnett on January 11, 2024 on The Effective Altruism Forum. In my last post about why AI value alignment shouldn't be conflated with AI moral achievement, a few people said they agreed with my point but they would frame it differently. For example, Pablo Stafforini framed the idea this way: it seems important to distinguish between normative and human specifications, not only because (arguably) "humanity" may fail to pursue the goals it should, but also because the team of humans that succeeds in building the first AGI may not represent the goals of "humanity". So this should be relevant both to people (like classical and negative utilitarians) with values that deviate from humanity's in ways that could matter a lot, and to "commonsense moralists" who think we should promote human values but are concerned that AI designers may not pursue these values (because these people may not be representative members of the population, because of self-interest, or because of other reasons). I disagree with Pablo's framing because I don't think that "the team of humans that succeeds in building the first AGI" will likely be the primary force in the world responsible for shaping the values of future AIs. Instead, I think that (1) there isn't likely to be a "first AGI" in any meaningful sense, and (2) AI values will likely be shaped more by market forces and regulation than the values of AI developers, assuming we solve the technical problems of AI alignment. In general, companies usually cater to what their customers want, and when they don't do that, they're generally outcompeted by companies who will do what customers want instead. Companies are also heavily constrained by laws and regulations. I think these constraints - market forces and regulation - will apply to AI companies too. Indeed, we have already seen these constraints play a role shaping the commercialization of existing AI products, such as GPT-4. It seems best to assume that this situation will largely persist into the future, and I see no strong reason to think there will be a fundamental discontinuity with the development of AGI. There do exist some reasons to assume that the values of AI developers matter a lot. Perhaps most significantly, AI development appears likely to be highly concentrated at the firm-level due to the empirically high economies of scale of AI training and deployment, lessening the ability for competition to unseat a frontier AI company. In the extreme case, AI development may be taken over by the government and monopolized. Moreover, AI developers may become very rich in the future, having created an extremely commercially successful technology, giving them disproportionate social, economic, and political power in our world. The points given in the previous paragraph do support a general case for caring somewhat about the morality or motives of frontier AI developers. Nonetheless, I do not think these points are compelling enough to make the claim that future AI values will be shaped primarily by the values of AI developers. It still seems to me that a better first-pass model is that AI values will be shaped by a variety of factors, including consumer preferences and regulation, with the values of AI developers playing a relatively minor role. 
Given that we are already seeing market forces shaping the values of existing commercialized AIs, it is confusing to me why an EA would assume this fact will at some point no longer be true. To explain this, my best guess is that many EAs have roughly the following model of AI development: There is "narrow AI", which will be commercialized, and its values will be determined by market forces, regulation, and to a limited degree, the values of AI...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment shouldn't be conflated with AI moral achievement, published by Matthew Barnett on December 30, 2023 on The Effective Altruism Forum. In this post I want to make a simple point that I think has big implications. I sometimes hear EAs talk about how we need to align AIs to "human values", or that we need to make sure AIs are benevolent. To be sure, ensuring AI development proceeds ethically is a valuable aim, but I claim this goal is not the same thing as "AI alignment", in the sense of getting AIs to try to do what people want. My central contention here is that if we succeed at figuring out how to make AIs pursue our intended goals, these AIs will likely be used to maximize the economic consumption of existing humans at the time of alignment. And most economic consumption is aimed at satisfying selfish desires, rather than what we'd normally consider our altruistic moral ideals. Only a small part of human economic consumption appears to be what impartial consequentialism would recommend, including the goal of filling the universe with numerous happy beings who live amazing lives. Let me explain. Consider how people currently spend their income. Below I have taken a plot from the blog Engaging Data, which borrowed data from the Bureau of Labor Statistics in 2019. It represents a snapshot of how the median American household spends their income. Most of their money is spent on the type of mundane consumption categories you'd expect: housing, utilities, vehicles etc. It is very likely that the majority of this spending is meant to provide personal consumption for members of the household or perhaps other family and friends, rather than strangers. Near the bottom of the chart, we find that only 3.1% of this spending is on what we'd normally consider altruism: voluntary gifts and charity. To be clear, this plot does not comprise a comprehensive assessment of the altruism of the median American household. Moreover, moral judgement is not my intention here. Instead, my intention is to emphasize the brute fact that when people are given wealth, they primarily spend it on themselves, their family, or their friends, rather than to pursue benevolent moral ideals. This fact is important because, to a first approximation, aligning AIs with humans will simply have the effect of greatly multiplying the wealth of existing humans - i.e. the total amount of resources that humans have available to spend on whatever they wish. And there is little reason to think that if humans become extraordinarily wealthy, they will follow idealized moral values. To see why, just look at what current people already do, who are many times richer than their ancestors centuries ago. All that extra wealth did not make us extreme moral saints; instead, we still mostly care about ourselves. Why does this fact make any difference? Consider the prescription of classical utilitarianism to maximize population size. If given the choice, humans would likely not spend their wealth to pursue this goal. That's because humans care far more about our own per capita consumption than global aggregate utility. When humans increase population size, it is usually a byproduct of their desire to have a family, rather than being the result of some broader utilitarian moral calculation. Here's another example. 
When given the choice to colonize the universe, future humans will likely want a rate of return on their investment, rather than merely deriving satisfaction from the fact that humanity's cosmic endowment is being used well. In other words, we will likely send out the von Neumann probes as part of a scheme to benefit ourselves, not out of some benevolent duty to fill the universe with happy beings. Now, I'm not saying selfishness is automatically bad. Indeed, when channeled appropriately, selfishness serves t...
I tend to disagree with most EAs about existential risk from AI. Unfortunately, my disagreements are all over the place. It's not that I disagree with one or two key points: there are many elements of the standard argument that I diverge from, and depending on the audience, I don't know which points of disagreement people think are most important. I want to write a post highlighting all the important areas where I disagree, and offering my own counterarguments as an alternative. This post would benefit from responding to an existing piece, along the same lines as Quintin Pope's article "My Objections to "We're All Gonna Die with Eliezer Yudkowsky"". By contrast, it would be intended to address the EA community as a whole, since I'm aware many EAs already disagree with Yudkowsky even if they buy the basic arguments for AI x-risks. My question is: what is the [...] --- First published: December 15th, 2023 Source: https://forum.effectivealtruism.org/posts/DHybAfxPhqqYa3bQz/what-is-the-current-most-representative-ea-ai-x-risk --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the current most representative EA AI x-risk argument?, published by Matthew Barnett on December 16, 2023 on The Effective Altruism Forum. I tend to disagree with most EAs about existential risk from AI. Unfortunately, my disagreements are all over the place. It's not that I disagree with one or two key points: there are many elements of the standard argument that I diverge from, and depending on the audience, I don't know which points of disagreement people think are most important. I want to write a post highlighting all the important areas where I disagree, and offering my own counterarguments as an alternative. This post would benefit from responding to an existing piece, along the same lines as Quintin Pope's article "My Objections to "We're All Gonna Die with Eliezer Yudkowsky"". By contrast, it would be intended to address the EA community as a whole, since I'm aware many EAs already disagree with Yudkowsky even if they buy the basic arguments for AI x-risks. My question is: what is the current best single article (or set of articles) that provide a well-reasoned and comprehensive case for believing that there is a substantial (>10%) probability of an AI catastrophe this century? I was considering replying to Joseph Carlsmith's article, "Is Power-Seeking AI an Existential Risk?", since it seemed reasonably comprehensive and representative of the concerns EAs have about AI x-risk. However, I'm a bit worried that the article is not very representative of EAs who have substantial probabilities of doom, since he originally estimated a total risk of catastrophe at only 5% before 2070. In May 2022, Carlsmith changed his mind and reported a higher probability, but I am not sure whether this is because he has been exposed to new arguments, or because he simply thinks the stated arguments are stronger than he originally thought. I suspect I have both significant moral disagreements and significant empirical disagreements with EAs, and I want to include both in such an article, while mainly focusing on the empirical points. 
For example, I have the feeling that I disagree with most EAs about:
- How bad human disempowerment would likely be from a utilitarian perspective, and what "human disempowerment" even means in the first place
- Whether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humans
- How likely AIs are to coordinate near-perfectly with each other as a unified front, leaving humans out of their coalition
- Whether we should expect AI values to be "alien" (like paperclip maximizers) in the absence of extraordinary efforts to align them with humans
- Whether the AIs themselves will be significant moral patients, on par with humans
- Whether there will be a qualitative moment when "the AGI" is created, rather than systems incrementally getting more advanced, with no clear finish line
- Whether we get only "one critical try" to align AGI
- Whether "AI lab leaks" are an important source of AI risk
- How likely AIs are to kill every single human if they are unaligned with humans
- Whether there will be a "value lock-in" event soon after we create powerful AI that causes values to cease their evolution over the coming billions of years
- How bad problems related to "specification gaming" will be in the future
- How society is likely to respond to AI risks, and whether they'll sleepwalk into a catastrophe
However, I also disagree with points made by many other EAs who have argued against the standard AI risk case. For example:
- I think that AIs will eventually become vastly more powerful and smarter than humans. So, I think AIs will eventually be able to "defeat all of us combined"
- I think a benign "AI takeover" event is very likely even if we align AIs successfully
- AIs will likely be goal-...
Support The Becket Cook Show on Patreon! In today's episode, Becket talks with, Matthew Barnett, the co-founder of the Dream Center in Los Angeles and senior pastor of the famed Angelus Temple in L.A. The Dream Center serves as a resource center focused on providing support to those affected by homelessness, hunger, and the lack of education through residential and community outreach programs. https://www.dreamcenter.org/ The Becket Cook Show Ep. 145This Episode of The Becket Cook Show is available on YouTubeJoin the Patreon! Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on the social response to AI risk, published by Matthew Barnett on November 1, 2023 on The AI Alignment Forum. A common theme implicit in many AI risk stories has been that broader society will either fail to anticipate the risks of AI until it is too late, or do little to address those risks in a serious manner. In my opinion, there are now clear signs that this assumption is false, and that society will address AI with something approaching both the attention and diligence it deserves. For example, one clear sign is Joe Biden's recent executive order on AI safety [1] . In light of recent news, it is worth comprehensively re-evaluating which sub-problems of AI risk are likely to be solved without further intervention from the AI risk community (e.g. perhaps deceptive alignment), and which ones won't be. Since I think substantial AI regulation is likely by default, I urge effective altruists to focus more on ensuring that the regulation is thoughtful and well-targeted rather than ensuring that regulation happens at all. Ultimately, I argue in favor of a cautious and nuanced approach towards policymaking, in contrast to broader public AI safety advocacy . [2] In the past, when I've read stories from AI risk adjacent people about what the future could look like, I have often noticed that the author assumes that humanity will essentially be asleep at the wheel with regards to the risks of unaligned AI, and won't put in place substantial safety regulations on the technology - unless of course EA and LessWrong-aligned researchers unexpectedly upset the gameboard by achieving a pivotal act . We can call this premise the assumption of an inattentive humanity . [3] While most often implicit, the assumption of an inattentive humanity was sometimes stated explicitly in people's stories about the future. For example, in a post from Marius Hobbhahn published last year about a realistic portrayal of the next few decades, Hobbhahn outlines a series of AI failure modes that occur as AI gets increasingly powerful. These failure modes include a malicious actor using an AI model to create a virus that "kills ~1000 people but is stopped in its tracks because the virus kills its hosts faster than it spreads", an AI model attempting to escape its data center after having "tried to establish a cult to "free" the model by getting access to its model weights", and a medical AI model that "hacked a large GPU cluster and then tried to contact ordinary people over the internet to participate in some unspecified experiment". Hobbhahn goes on to say, People are concerned about this but the news is as quickly forgotten as an oil spill in the 2010s or a crypto scam in 2022. Billions of dollars of property damage have a news lifetime of a few days before they are swamped by whatever any random politician has posted on the internet or whatever famous person has gotten a new partner. The tech changed, the people who consume the news didn't. The incentives are still the same. Stefan Schubert subsequently commented that this scenario seems implausible, I expect that people would freak more over such an incident than they would freak out over an oil spill or a crypto scam. 
For instance, an oil spill is a well-understood phenomenon, and even though people would be upset about it, it would normally not make them worry about a proliferation of further oil spills. By contrast, in this case the harm would come from a new and poorly understood technology that's getting substantially more powerful every year. Therefore I expect the reaction to the kind of harm from AI described here to be quite different from the reaction to oil spills or crypto scams. I believe Schubert's point has been strengthened by recent events, including Biden's executive order that touches on many aspects of AI risk [1] , t...
Matthew Barnett is the CEO and head of product for Bonjoro, a customer loyalty platform for building customers who stay for life, with over 50k happy customers. This episode originally aired on Nov 11th, 2022. So, if you didn't listen to it, now is a great time to do it! In this episode...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The possibility of an indefinite AI pause, published by Matthew Barnett on September 19, 2023 on The Effective Altruism Forum. This post is part of AI Pause Debate Week. Please see this sequence for other posts in the debate. tl;dr An indefinite AI pause is a somewhat plausible outcome and could be made more likely if EAs actively push for a generic pause. I think an indefinite pause proposal is substantially worse than a brief pause proposal, and would probably be net negative. I recommend that alternative policies with greater effectiveness and fewer downsides should be considered instead. Broadly speaking, there seem to be two types of moratoriums on technologies: (1) moratoriums that are quickly lifted, and (2) moratoriums that are later codified into law as indefinite bans. In the first category, we find the voluntary 1974 moratorium on recombinant DNA research, the 2014 moratorium on gain of function research, and the FDA's partial 2013 moratorium on genetic screening. In the second category, we find the 1958 moratorium on conducting nuclear tests above the ground (later codified in the 1963 Partial Nuclear Test Ban Treaty), and the various moratoriums worldwide on human cloning and germline editing of human genomes. In these cases, it is unclear whether the bans will ever be lifted - unless at some point it becomes infeasible to enforce them. Overall I'm quite uncertain about the costs and benefits of a brief AI pause. The foreseeable costs of a brief pause, such as the potential for a compute overhang, have been discussed at length by others, and I will not focus my attention on them here. I recommend reading this essay to find a perspective on brief pauses that I'm sympathetic to. However, I think it's also important to consider whether, conditional on us getting an AI pause at all, we're actually going to get a pause that quickly ends. I currently think there is a considerable chance that society will impose an indefinite de facto ban on AI development, and this scenario seems worth analyzing in closer detail. Note: in this essay, I am only considering the merits of a potential lengthy moratorium on AI, and I freely admit that there are many meaningful axes on which regulatory policy can vary other than "more" or "less". Many forms of AI regulation may be desirable even if we think a long pause is not a good policy. Nevertheless, it still seems worth discussing the long pause as a concrete proposal of its own. The possibility of an indefinite pause Since an "indefinite pause" is vague, let me be more concrete. I currently think there is between a 10% and 50% chance that our society will impose legal restrictions on the development of advanced AI systems that, Prevent the proliferation of advanced AI for more than 10 years beyond the counterfactual under laissez-faire Have no fixed, predictable expiration date (without necessarily lasting forever) Eliezer Yudkowsky, perhaps the most influential person in the AI risk community, has already demanded an "indefinite and worldwide" moratorium on large training runs. This sentiment isn't exactly new. Some effective altruists, such as Toby Ord, have argued that humanity should engage in a "long reflection" before embarking on ambitious and irreversible technological projects, including AGI. William MacAskill suggested that this pause should perhaps last "a million years". 
Two decades ago, Nick Bostrom considered the ethics of delaying new technologies in a utilitarian framework and concluded a delay of "over 10 million years" may be justified if it reduces existential risk by a single percentage point. I suspect there are approximately three ways that such a pause could come about. The first possibility is that governments could explicitly write such a pause into law, fearing the development of AI in a broad sense,...
If you keep following Jesus and loving people, Jesus will bring your vision to pass. In this inspirational message, Ps. Matthew shares his journey of opening the LA Dream Center and experiencing miracle after miracle.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updating Drexler's CAIS model, published by Matthew Barnett on June 16, 2023 on LessWrong. Eric Drexler's report Reframing Superintelligence: Comprehensive AI Services (CAIS) as General Intelligence reshaped how a lot of people think about AI (summary 1, summary 2). I still agree with many parts of it, perhaps even the core elements of the model. However, after looking back on it more than four years later, I think the general picture it gave missed some crucial details about how AI will go. The problem seems to be that his report neglected a discussion of foundation models, which I think have transformed how we should think about AI services and specialization. The general vibe I got from CAIS (which may not have been Drexler's intention) was something like the following picture:
- For each task in the economy, we will train a model from scratch to automate the task, using the minimum compute necessary to train an AI to do well on the task.
- Over time, the fraction of tasks automated will slowly expand like a wave, starting with the tasks that are cheapest to automate computationally, and ending with the most expensive tasks.
- At some point, automation will be so widespread that it will begin to meaningfully feed into itself, increasing AI R&D, and accelerating the rate of technological progress.
The problem with this approach to automation is that it's extremely wasteful to train models from scratch for each task. It might make sense when training budgets are tiny — as they mostly were in 2018 — but it doesn't make sense when it takes 10^25 FLOP to reach adequate performance on a given set of tasks. The big obvious-in-hindsight idea that we've gotten over the last several years is that, instead of training from scratch for each new task, we'll train a foundation model on some general distribution, which can then be fine-tuned using small amounts of compute to perform well on any task. In the CAIS model, "general intelligence" is just the name we can give to the collection of all AI services in the economy. In this new paradigm, "general intelligence" refers to the fact that sufficiently large foundation models can efficiently transfer their knowledge to obtain high performance on almost any downstream task, which is pretty closely analogous to what humans do to take over jobs. The fact that generalist models can be efficiently adapted to perform well on almost any task is an incredibly important fact about our world, because it implies that a very large fraction of the costs of automation can be parallelized across almost all tasks. Let me illustrate this fact with a hypothetical example. Suppose we previously thought that it would take $1 trillion to automate each task in our economy, such as language translation, box stacking, and driving cars. Since the cost of automating each of these tasks is $1 trillion each, you might expect companies would slowly automate all the tasks in the economy, starting with the most profitable ones, and then finally getting around to the least profitable ones once economic growth allowed for us to spend enough money on automating not-very-profitable stuff. But now suppose we think it costs $999 billion to create "general intelligence", which, once built, can be quickly adapted to automate any other task at a cost of $1 billion.
In this world, we will go very quickly from being able to automate almost nothing to being able to automate almost anything. In other words we will get one big innovation "lump", which is the opposite of what Robin Hanson predicted. Even if we won't invent monolithic agents that take over the world by being smarter than everything else, we won't have a gradual decades-long ramp-up to full automation either. Of course, the degree of suddenness in the foundation model paradigm is still debatable, because ...
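To make the cost arithmetic in the hypothetical above concrete, here is a minimal sketch comparing the two regimes the post describes: training a separate model from scratch for every task at $1 trillion apiece, versus paying $999 billion once for a shared foundation model plus $1 billion of fine-tuning per task. The dollar figures come from the post's hypothetical; the task counts and function names are my own illustrative assumptions.

```python
# Illustrative sketch of the cost structure described above (hypothetical numbers).
# Regime A: every task is automated from scratch at a fixed per-task cost.
# Regime B: one large up-front foundation-model cost, then a small per-task
# fine-tuning cost that is shared ("parallelized") across all tasks.

FROM_SCRATCH_COST = 1_000_000_000_000  # $1 trillion per task (regime A)
FOUNDATION_COST   =   999_000_000_000  # $999 billion, paid once (regime B)
FINE_TUNE_COST    =     1_000_000_000  # $1 billion per task (regime B)

def cost_regime_a(num_tasks: int) -> int:
    """Total cost when each task gets its own from-scratch model."""
    return num_tasks * FROM_SCRATCH_COST

def cost_regime_b(num_tasks: int) -> int:
    """Total cost with one shared foundation model plus per-task fine-tuning."""
    return FOUNDATION_COST + num_tasks * FINE_TUNE_COST

for n in [1, 10, 100, 1000]:
    a, b = cost_regime_a(n), cost_regime_b(n)
    print(f"{n:>4} tasks: from-scratch ${a/1e12:,.1f}T vs foundation ${b/1e12:,.3f}T")

# With 1 task the two regimes cost the same ($1T), but by 1,000 tasks the
# foundation-model regime costs about $2T versus $1,000T: once the shared
# model exists, automating "almost anything" becomes cheap almost all at once,
# which is the innovation "lump" the post describes.
```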
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Revising Drexler's CAIS model, published by Matthew Barnett on June 16, 2023 on The AI Alignment Forum. Eric Drexler's report Reframing Superintelligence: Comprehensive AI Services (CAIS) as General Intelligence reshaped how a lot of people think about AI (summary 1, summary 2). I still agree with many parts of it, perhaps even the core elements of the model. However, after looking back on it more than four years later, I think the general vibe it gave was misleading as a picture of how AI will go. The problem seems to be that his model neglected a discussion of foundation models, which I think have transformed how we should think about AI services and specialization. The general vibe I got from CAIS (which may not have been Drexler's intention) was something like the following picture:
- For each task in the economy, we will train a model from scratch to automate the task, using the minimum compute necessary to train an AI to do well on the task.
- Over time, the fraction of tasks automated will slowly expand like a wave, starting with the tasks that are cheapest to automate computationally, and ending with the most expensive tasks.
- At some point, automation will be so widespread that it will begin to meaningfully feed into itself, increasing AI R&D itself, accelerating the rate of technological progress.
The problem with this approach to automation is that it's extremely wasteful to train models from scratch for each task. It might make sense when training budgets are tiny — as they mostly were in 2018 — but it doesn't make sense when it takes 10^25 FLOP to reach adequate performance on a given set of tasks. The big obvious-in-hindsight idea that we've gotten over the last several years is that, instead of training from scratch for each new task, we'll train a foundation model on some general distribution, which can then be fine-tuned using small amounts of compute to perform well on any task. In the CAIS model, "general intelligence" is just the name we give to the collection of all AI services in the economy. In this new paradigm, "general intelligence" refers to the fact that sufficiently large foundation models can efficiently transfer their knowledge to obtain high performance on almost any downstream task, which is pretty closely analogous to what humans do to take over jobs. Foundation models totally change the game, because this means that AI development is highly concentrated at the firm-level. AIs themselves might be specialized to provide various services, but the AI economy depends critically on a few non-specialized firms that deliver the best foundation models at any given time. There can only be a few firms in the market providing foundation models because the fixed capital costs required to train a SOTA foundation model are very high, and being even 2 OOMs behind the lead actor results in effectively zero market share. The fact that generalist models can be efficiently adapted to perform well on almost any task is an incredibly important fact about our world, because it implies that a very large fraction of the costs of automation can be parallelized across almost all tasks. Let me illustrate this fact with a hypothetical example. Suppose we previously thought that it would take $1 trillion to automate each task in our economy, such as language translation, box stacking, and driving cars.
Since the cost of automating each of these tasks is $1 trillion each, you might expect companies would slowly automate all the tasks in the economy, starting with the most profitable ones, and then finally getting around to the least profitable ones once economic growth allowed for us to spend enough money on automating not-very-profitable stuff. But now suppose we think it costs $999 billion to automate "general intelligence", which then once built, can be quickly adap...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A note of caution about recent AI risk coverage, published by Sean o h on June 7, 2023 on The Effective Altruism Forum. Epistemic status: some thoughts I wanted to get out quickly. A lot of fantastic work has been done by people in the AI existential risk research community and related communities over the last several months in raising awareness about risks from advanced AI. However, I have some cause for unease that I'd like to share. These efforts may have been too successful too soon. Or, more specifically, this level of outreach success this far ahead of the development of AI capable of posing existential risk may have fallout. We should consider steps to mitigate this.
(1) Timelines
I know that there are well-informed people in the AI and existential risk communities who believe AI capable of posing existential risk may be developed within 10 years. I certainly can't rule this out, and even a small chance of this is worth working to prevent or mitigate to the extent possible, given the possible consequences. My own timelines are longer, although my intuitions don't have a rigorous model underpinning them (my intuitions line up similarly to the 15-40 year timelines mentioned in this recent blog post by Matthew Barnett from Epoch). Right now the nature of media communications means that the message is coming across with a lot of urgency. From speaking to lay colleagues, impressions often seem to be of short timelines (and some folks e.g. Geoff Hinton have explicitly said 5-20 years, sometimes with uncertainty caveats and sometimes without). It may be that those with short (<10 year) timelines are right. But consider the world in which timelines turn out to be longer: >10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world? The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don't transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look a bit silly. Remember when everyone agreed AI was going to make us all extinct? Yeah, like Limits to Growth all over again. Except that we're not safe. In reality, in this scenario, we're just entering the period in which risk is most acute, and in which gaining or maintaining the support of leaders across society for coordinated action is most important. And it's possibly even harder to convince them, because people remember how silly lots of people looked the last time.
(3) How to navigate this scenario (in advance)
Suggestions: Have our messaging make clear that we ...
Epistemic status: some thoughts I wanted to get out quickly. A lot of fantastic work has been done by people in the AI existential risk research community and related communities over the last several months in raising awareness about risks from advanced AI. However, I have some cause for unease that I'd like to share. These efforts may have been too successful too soon. Or, more specifically, this level of outreach success this far ahead of the development of AI capable of posing existential risk may have fallout. We should consider steps to mitigate this. (1) Timelines. I know that there are well-informed people in the AI and existential risk communities who believe AI capable of posing existential risk may be developed within 10 years. I certainly can't rule this out, and even a small chance of this is worth working to prevent or mitigate to the extent possible, given the possible consequences. My own timelines are longer, although my intuitions don't have a rigorous model underpinning them (my intuitions line up similarly to the 15-40 year timelines mentioned in this recent blog post by Matthew Barnett from Epoch). Right now the nature of media communications means that the message is coming across with a lot of urgency. From speaking to lay colleagues, impressions often seem to be of short timelines (and some folks e.g. Geoff Hinton have explicitly said 5-20 years, sometimes with uncertainty caveats and sometimes without). It may be that those with short (<10 year) timelines are right. (2) But suppose instead that timelines to AI capable of posing existential risk are >10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world? The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don't transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look [...] --- Source: https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage --- Narrated by TYPE III AUDIO. Share feedback on this narration.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A compute-based framework for thinking about the future of AI, published by Matthew Barnett on May 31, 2023 on The Effective Altruism Forum. How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article: The most salient impact of AI will be its ability to automate labor, which is likely to trigger a productivity explosion later this century, greatly altering the course of history. The availability of useful compute is the most important factor that determines progress in AI, a trend which will likely continue into the foreseeable future. AI performance is likely to become relatively predictable on most important, general measures of performance, at least when predicting over short time horizons. While none of these ideas are new, my goal is to provide a single article that articulates and defends the framework as a cohesive whole. In doing so, I present the perspective that Epoch researchers find most illuminating about the future of AI. Using this framework, I will justify a value of 40% for the probability of Transformative AI (TAI) arriving before 2043. Summary The post is structured as follows. In part one, I will argue that what matters most is when AI will be able to automate a wide variety of tasks in the economy. The importance of this milestone is substantiated by simple models of the economy that predict AI could greatly accelerate the world economic growth rate, dramatically changing our world. In part two, I will argue that availability of data is less important than compute for explaining progress in AI, and that compute may even play an important role driving algorithmic progress. In part three, I will argue against a commonly held view that AI progress is inherently unpredictable, providing reasons to think that AI capabilities may be anticipated in advance. Finally, in part four, I will conclude by using the framework to build a probability distribution over the date of arrival for transformative AI. Part 1: Widespread automation from AI When discussing AI timelines, it is often taken for granted that the relevant milestone is the development of Artificial General Intelligence (AGI), or a software system that can do or learn “everything that a human can do.” However, this definition is vague. For instance, it's unclear whether the system needs to surpass all humans, some upper decile, or the median human. Perhaps more importantly, it's not immediately obvious why we should care about the arrival of a single software system with certain properties. Plausibly, a set of narrow software programs could drastically change the world before the arrival of any monolithic AGI system (Drexler, 2019). In general, it seems more useful to characterize AI timelines in terms of the impacts AI will have on the world. But, that still leaves open the question of what impacts we should expect AI to have and how we can measure those impacts. As a starting point, it seems that automating labor is likely to be the driving force behind developing AI, providing huge and direct financial incentives for AI companies to develop the technology. 
The productivity explosion hypothesis says that if AI can automate the majority of important tasks in the economy, then a dramatic economic expansion will follow, increasing the rate of technological, scientific, and economic growth by at least an order of magnitude above its current rate (Davidson, 2021). A productivity explosion is a robust implication of simple models of economic growth, which helps explain why the topic is so important to study. What's striking is that the productivity explosion thesis appears to follow naturally from some standard assump...
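The claim that a productivity explosion follows from standard growth models can be made concrete with a toy calculation. The sketch below is my own illustration with made-up parameter values, not the model from Davidson (2021) or the post: once labor becomes an accumulable factor, i.e. output can be reinvested into more AI "workers", the growth rate jumps from the familiar few percent per year to roughly the savings rate times output per worker.

```python
# Toy AK-style growth comparison (illustrative assumptions, not the post's model).
# Before automation, growth comes from slow productivity gains; after automation,
# output can be turned directly into more AI workers, so growth ~ s * A.
savings_rate = 0.3          # share of output reinvested into AI capacity (assumption)
output_per_worker = 1.0     # A: output per worker-equivalent per year (assumption)

pre_automation_growth = 0.03                              # ~3%/yr from ordinary tech progress
post_automation_growth = savings_rate * output_per_worker  # AK-model growth rate

print(f"pre-automation growth:  {pre_automation_growth:.0%} per year")
print(f"post-automation growth: {post_automation_growth:.0%} per year")  # ~10x higher here
```

The order-of-magnitude jump is driven entirely by the assumption that the bottleneck factor becomes something you can buy with output, which is the standard-assumptions point the post is gesturing at.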
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are Emergent Abilities of Large Language Models a Mirage? [linkpost], published by Matthew Barnett on May 2, 2023 on LessWrong. Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, one can choose a metric which leads to the inference of an emergent ability or another metric which does not. Thus, our alternative suggests that existing claims of emergent abilities are creations of the researcher's analyses, not fundamental changes in model behavior on specific tasks with scale. We present our explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities, (2) make, test and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how similar metric decisions suggest apparent emergent abilities on vision tasks in diverse deep network architectures (convolutional, autoencoder, transformers). In all three analyses, we find strong supporting evidence that emergent abilities may not be a fundamental property of scaling AI models. This result seems important for two reasons: If AI abilities are predictable, then we can forecast when we'll get dangerous capabilities ahead of time, rather than being taken by surprise. This result strengthens the case for a research program of devising a ton of interesting benchmarks to measure how capabilities are improving as a function of scale. It provides some evidence against the idea that "understanding is discontinuous", and that important AI abilities will suddenly click together at some level, which is a very loose description of what I understood to be one of the primary intuitions behind AI foom. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
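The paper's central mechanism is easy to reproduce numerically. In the sketch below (my own illustrative numbers, not the paper's data), per-token accuracy improves smoothly as a power law in scale, yet an all-or-nothing exact-match metric over a 20-token answer appears to "emerge" abruptly at large scale.

```python
# Illustration of how metric choice can manufacture apparent emergence
# (made-up numbers; the paper tests this on real model families and benchmarks).
import numpy as np

scales = np.logspace(18, 24, 7)                          # hypothetical training-compute budgets
per_token_acc = 1 - 0.5 * (scales / scales[0]) ** -0.3   # smooth power-law improvement (assumed)
seq_len = 20                                             # answer requires 20 correct tokens

exact_match = per_token_acc ** seq_len                   # nonlinear metric of the same outputs

for c, p, e in zip(scales, per_token_acc, exact_match):
    print(f"compute={c:9.1e}  per-token acc={p:.3f}  exact match={e:.4f}")
# Per-token accuracy rises smoothly from 0.50 to ~0.99, while exact match sits near zero
# for small scales and then shoots up -- the "emergent ability" is an artifact of the metric.
```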
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shutting down AI is not enough. We need to destroy all technology., published by Matthew Barnett on April 1, 2023 on LessWrong. A TIME article published recently calls for an “indefinite and worldwide” moratorium on new large AI training runs. This moratorium would be better than no moratorium. I have respect for the author who wrote it. It's an improvement on the margin. I refrained from endorsing the essay because I think it is understating the seriousness of the situation and asking for too little to solve it. If there was a plan for Earth to survive, if only we passed an indefinite and worldwide moratorium on large training runs, I would back that plan. There isn't any such plan. Here's what would actually need to be done: All human technology needs to be destroyed. There can be no exceptions, including for sharpened stones and hand axes. After everything is burned, we must then forget how to create fire. If a single exception is made, that increases the probability that civilization will be recreated within the next millennium and new large AI training runs will be started. If I had infinite freedom to write laws, I might carve out a single exception for technologies that prevent human diseases, like knowledge of germ theory; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down. Shut down all the roads, melt all the cars. Burn down all of the tables and all of the books. Put a ceiling on how many calories of food any single human can furnish per day, and move it downward over the coming generations to compensate for the possibility that natural selection will keep making humans smarter. No exceptions for cloth or fireplaces. Destroy all human objects now to prevent them from moving to another country. Track all gazelles that are hunted. If anyone notices that a tribe is catching more gazelles than it should, be willing to choke them (with your bare hands, of course) one by one. Shut it all down. Eliminate all technology. Dismantle modern civilization. Return to our primeval state. We are not ready. We are not on track to be significantly readier any time in the next million years. If we go ahead on this, everyone will suffer, including children who did not choose this and did not do anything wrong. Shut it down. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conceding a short timelines bet early, published by Matthew Barnett on March 16, 2023 on LessWrong. Last year I bet some people about short AI timelines. While I don't think I've lost the bet yet, I think it's clear at this point that I will lose with high probability. I've outlined the reasons why I think that in a retrospective here. Even if I end up winning, I think it will likely be the result of a technicality, and that wouldn't be very interesting. Because of my personal preference for settling this matter without delay, I have decided to take the step of conceding the bet now. Note however that I am not asking Tamay to do the same. I have messaged the relevant parties and asked them to send me details on how to pay them. I congratulate Nathan Helm-Burger and Tomás B. for taking the other side of the bet. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think it's important to work on AI forecasting, published by Matthew Barnett on February 27, 2023 on The Effective Altruism Forum. Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023. These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don't yet have a clear picture of what we're really working on. The key problem is that predicting the future is very difficult, and in general, if you don't know what the future will look like, it's usually hard to be sure that any intervention we do now will turn out to be highly valuable in hindsight. When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads. At some point, maybe 5, 15, 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. And we're either going to succeed in aligning it, and then the future will turn out to be bright and wonderful, or we'll fail, and the AGI will make humanity go extinct, and it's not yet clear which of these two outcomes will happen. Alright, so that's an oversimplified picture. There's lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several. Or about whether there will be a fast takeoff or a slow takeoff. But even if you're confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well. Take the question of what AGI will look like once it's developed. If you asked an informed observer in 2013 what AGI will look like in the future, I think it's somewhat likely they'd guess it'll be an agent that we'll program directly to search through a tree of possible future actions, and select the one that maximizes expected utility, except using some very clever heuristics that allow it to do this in the real world. In 2018, if you asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning trained in a really complex and diverse environment. And these days in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture. That's just my impression of how people's views have changed over time. Maybe I'm completely wrong about this. But the rough sense I've gotten while in this community is that people will often cling to a model of what future AI will be like, which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI. In fact, I think the state of affairs is even worse than how I've described it so far. I'm not even sure if this particular question about AGI is coherent. The term “AGI” makes it sound like there will be some natural class of computer programs called “general AIs” that are sharply distinguished from this other class of programs called “narrow AIs”, and at some point – in fact, on a particular date – we will create the “first” AGI.
I'm not really sure that story makes much sense. The question of what future AI will look like is a huge one, and getting it wrong could make the difference between a successful research program and one that never went anywhere. And yet, it seems to me that, as of 2023, we still don't have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details. In general I think that uncertainty about the future of ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A proposed method for forecasting transformative AI, published by Matthew Barnett on February 10, 2023 on The AI Alignment Forum. In 2021, I proposed measuring progress in the perplexity of language models and extrapolating past results to determine when language models were expected to reach roughly "human-level" performance. Here, I build on that approach by introducing a more systematic and precise method of forecasting progress in language modeling that employs scaling laws to make predictions. The full report for this forecasting method can be found in this document. In this blog post I'll try to explain all the essential elements of the approach without providing excessive detail regarding the technical derivations. This approach can be contrasted with Ajeya Cotra's Bio Anchors model, providing a new method for forecasting the arrival of transformative AI (TAI). I will tentatively call it the "Direct Approach", since it makes use of scaling laws directly to make predictions about compute requirements for AI. Naturally, the Direct Approach is a very speculative framework and might end up being useless for forecasting TAI (in fact, I consider this the most likely outcome). Nonetheless, I'm hopeful that something like it can serve as a better foundation than current TAI timelines models, which I currently think are likely even worse. Note that there may be errors in the report and Colab notebook, as they were not extensively fact-checked. Some background In a nutshell, this approach is simply about taking the cross-entropy loss of an autoregressive model and trying to find a way of interpreting that quantity qualitatively: that is, something we can put on a chart and extrapolate until the quantity reaches a natural threshold that we identify with something important. In my 2021 post about predicting language model performance, I drew a trendline through a plot of language model perplexities on various benchmarks and noted when the trendline went through estimates of "human-level" perplexity. This approach felt reasonable to me at the time, but I now think it too easily hand-waved away some important details. The error of omission I committed in my old approach becomes more apparent when you think about language model performance from the perspective of scaling laws, for example the parametric scaling law from Hoffmann et al. 2022: Here, we see cross-entropy loss as a function of parameters N and training tokens D seen during training. Notably, if we take the limit as the number of parameters and training tokens goes to infinity, then we're left with E. Theoretically, E corresponds to the "entropy of natural text" under certain assumptions, which is precisely the thing I identified with "roughly human-level" performance in my previous post. In other words, if we take this scaling law naively, it seems as though it will take infinite compute to reach human-level performance. I believe the resolution to this apparent issue is to say that "human-level" performance will not be obtained when loss hits E, but rather some small level above E. How close to E is enough? Well, that's the question we tried to answer with this report. Summary of the Direct Approach We begin by considering a language task, which in this post will be scientific research for illustration. 
For simplicity, let's imagine that this task consists of writing high-quality research papers or reports, although more nuanced specifications are possible. Of course, real scientific research involves more than merely writing research papers. It involves proposing hypotheses, devising experiments, and collecting data, but for now, let's imagine that we can simplify all these steps into one step that involves writing high-quality research papers. This simplification may not be entirely unrealistic, since if the ...
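For reference, the parametric scaling law from Hoffmann et al. 2022 that the description above refers to is not reproduced in the narration. Its published form, with fitted constants quoted approximately, is:

```latex
% Chinchilla parametric scaling law (Hoffmann et al. 2022); constants are the published
% fit, quoted approximately: E ~ 1.69, A ~ 406.4, B ~ 410.7, alpha ~ 0.34, beta ~ 0.28.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% As N (parameters) and D (training tokens) grow without bound, L(N, D) \to E,
% the irreducible "entropy of natural text" term discussed above.
```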
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A proposed method for forecasting TAI, published by Matthew Barnett on February 10, 2023 on LessWrong. In 2021, I proposed measuring progress in the perplexity of language models and extrapolating past results to determine when language models were expected to reach roughly "human-level" performance. Here, I build on that approach by introducing a more systematic and precise method of forecasting progress in language modeling that employs scaling laws to make predictions. The full report for this forecasting method can be found in this document. In this blog post I'll try to explain all the essential elements of the approach without providing excessive detail regarding the technical derivations. As a bonus, this approach can be contrasted with Ajeya Cotra's Bio Anchors model, providing a new method for forecasting the arrival of transformative AI (TAI). I will tentatively call it the "Direct Approach", since it makes use of scaling laws directly to make predictions about compute requirements for AI. Naturally, the Direct Approach is a very speculative framework and might end up being useless for forecasting TAI (in fact, I consider this the most likely outcome). Nonetheless, I'm hopeful that something like it can serve as a better foundation than current TAI timelines models, which I currently view as likely even worse. Note that there may be errors in the report and Colab notebook, as they were not extensively fact-checked. Some background In a nutshell, this approach is simply about taking the cross-entropy loss of an autoregressive model and trying to find a way of interpreting that quantity qualitatively: that is, something we can put on a chart and extrapolate until the quantity reaches a natural threshold that we identify with something important. In my 2021 post about predicting language model performance, I drew a trendline through a plot of language model perplexities on various benchmarks and noted when the trendline went through estimates of "human-level" perplexity. This approach felt reasonable to me at the time, but I now think it too easily hand-waved away some important details. The error of omission I committed in my old approach becomes more apparent when you think about language model performance from the perspective of scaling laws, for example the parametric scaling law from Hoffmann et al. 2022: Here, we see cross-entropy loss as a function of parameters N and training tokens D seen during training. Notably, if we take the limit as the number of parameters and training tokens goes to infinity, then we're left with E. Theoretically, E corresponds to the "entropy of natural text", which is precisely the thing I identified with "roughly human-level" performance in my previous post. In other words, if we take this scaling law naively, it seems as though it will take infinite compute to reach human-level performance. I believe the resolution to this apparent issue is to say that "human-level" performance will not be obtained when loss hits E, but rather some small level above E. How close to E is enough? Well, that's the question we tried to answer with this report. Summary of the Direct Approach We begin by considering a language task, which in this post will be scientific research for illustration.
For simplicity, let's imagine that this task consists of writing high-quality research papers or reports, although more nuanced specifications are possible. Of course, real scientific research involves more than merely writing research papers. It involves proposing hypotheses, devising experiments, and collecting data, but for now, let's imagine that we can simplify all these steps into one step that involves writing high-quality research papers. This simplification may not be entirely unrealistic, since if the papers are genuinely judged to be high quali...
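As a numerical companion to the description above, here is a small sketch that evaluates the Hoffmann et al. 2022 scaling law using the published Chinchilla fit (constants approximate). It illustrates the post's point that the predicted loss approaches E only in the limit of infinite parameters and data.

```python
# Sketch: evaluating the Hoffmann et al. (2022) parametric scaling law with the published
# fit (constants approximate). Loss approaches E only as N and D go to infinity.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted cross-entropy loss for a model with n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(70e9, 1.4e12), (1e12, 2e13), (1e15, 1e16)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss ~ {chinchilla_loss(n, d):.3f} (E = {E})")
```

Even at very large (hypothetical) N and D, the predicted loss sits a little above E, which is why the report asks how close to E is close enough.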
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noting an error in Inadequate Equilibria, published by Matthew Barnett on February 8, 2023 on LessWrong. I think I've uncovered an error in Eliezer Yudkowsky's book Inadequate Equilibria that undermines a key point in the book. Here are some of my observations. First, let me provide some context. In the first chapter, Yudkowsky states that prior to Shinzo Abe's tenure as Prime Minister of Japan, the Bank of Japan had implemented a bad monetary policy that cost Japan trillions of dollars in real economic growth. His point was that he was able to spot this mistake, and confidently know better than the experts employed at the Bank of Japan, despite not being an expert in economic policy himself. In a dialogue, he wrote, CONVENTIONAL CYNICAL ECONOMIST: So, Eliezer, you think you know better than the Bank of Japan and many other central banks around the world, do you? ELIEZER: Yep. Or rather, by reading econblogs, I believe myself to have identified which econbloggers know better, like Scott Sumner. C.C.E.: Even though literally trillions of dollars of real value are at stake? ELIEZER: Yep. To demonstrate that he was correct on this issue, Yudkowsky said the following, When we critique a government, we don't usually get to see what would actually happen if the government took our advice. But in this one case, less than a month after my exchange with John, the Bank of Japan—under the new leadership of Haruhiko Kuroda, and under unprecedented pressure from recently elected Prime Minister Shinzo Abe, who included monetary policy in his campaign platform—embarked on an attempt to print huge amounts of money, with a stated goal of doubling the Japanese money supply.5 Immediately after, Japan experienced real GDP growth of 2.3%, where the previous trend was for falling RGDP. Their economy was operating that far under capacity due to lack of money.6 However, that last part is not correct, as far as I can tell. According to official government data, Japan's RGDP had not been falling prior to 2013, other than the fall caused by the Great Recession. RGDP did grow by ~2.0% in 2013, but I cannot discern any significant change in the trend after Haruhiko Kuroda began serving as governor at the Bank of Japan. In his footnote, Yudkowsky cites this article from 2017 to provide a "more recent update" about Japan's successful monetary policy. However, I don't think the article demonstrates that Yudkowsky was correct in any major way about the point he made. The article never presents data on RGDP. Instead, it focuses primarily on how unemployment has fallen since 2013. However, it's hard for me to see any significant impact that Japan's shift in monetary policy had on unemployment when examining the data. The only data series presented in the article is this plot of the prime age labor force participation rate. However, the effect looks kind of weak to me, and I don't think raising prime age LFPR is a standard target of monetary policy. After looking at a more standard target, it seems that Japan's new monetary policy isn't achieving its goals, as Japan experienced no major inflation after Haruhiko Kuroda took charge of the Bank of Japan in March 2013, despite a target of 2% inflation. Note that the brief spike in Japan's CPI in April 2014 was almost certainly a result of their VAT hike, rather than any change in monetary policy at the time. 
That's not to say that I think the Bank of Japan was wrong to print more money. In fact, I am not aware of any strong disagreements that I have with Scott Sumner's general view on monetary policy, which is where Yudkowsky says he got (at least some of) his opinions from. However, I think this error undermines a significant part of Yudkowsky's thesis. This example was one of two major anecdotes that Yudkowsky presented to show that he can often know bet...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slightly against aligning with neo-luddites, published by Matthew Barnett on December 26, 2022 on The Effective Altruism Forum. To summarize: When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others. Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install. Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not to endorse a proposal because it's "better than nothing" unless it's also literally the only chance we get to delay AI. In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general. It appears we are in the midst of a new wave of neo-luddite sentiment. Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI-generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art. Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely. I expect most readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that nonetheless, it is worth aligning with neo-luddites, incidentally, in order to slow down AI capabilities. On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety. Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment. In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds. If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like.
Delaying AI gives us more time to reflect, debate, and experiment, which prima facie, I agree, is a good thing. A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how. One consideration, which has been pointed out by many before...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slightly against aligning with neo-luddites, published by Matthew Barnett on December 26, 2022 on LessWrong. To summarize: When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others. Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install. Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not to endorse a proposal because it's "better than nothing" unless it's also literally the only chance we get to delay AI. In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general. It appears we are in the midst of a new wave of neo-luddite sentiment. Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI-generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art. Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely. I expect most LessWrong readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that nonetheless, it is worth aligning with neo-luddites, incidentally, in order to slow down AI capabilities. On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety. Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment. In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds. If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like.
Delaying AI gives us more time to reflect, debate, and experiment, which prima facie, I agree, is a good thing. A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how. One consideration, which has been pointed out by many before, is that...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updating my AI timelines, published by Matthew Barnett on December 5, 2022 on LessWrong. Last year I published a post titled Three reasons to expect long AI timelines, and earlier this year I offered to bet people who had short AI timelines. While it wasn't my intention to be known as "a long AI timelines guy", I have begun feeling that was how people perceived me. Nonetheless, in the last few months, I've modified my views substantially. Thus, I offer this short post, which can hopefully make my current position more clear. There are several reasons for my update towards shorter AI timelines, though each reason is relatively straightforward and uncomplicated. In the spirit of writing something short rather than not writing something at all, my explanations here will be brief, although I may be willing to elaborate in a comment below. In order, these reasons included, but were not limited to: I became convinced that the barriers to language models adopting human-level reasoning were much weaker than I had believed. Previously, I had imagined that it would be difficult to get a language model to perform reasoning over long sequences, in which each step in the sequence requires making a non-trivial inference, and one mistake in understanding the sequence can make the difference between a coherent and incoherent response. Yet, my personal experience with language models, including ChatGPT, has persuaded me that this type of problem is not a strong barrier, and is more continuous with other challenges like "understanding the tone of a document" or "understanding what's going on in a plot", which I had already thought language models were making good progress on. In hindsight, I should have perhaps trusted the model I had constructed myself, which forecasted human-level language models by 2030. I built an (unpublished) TAI timelines model, and after fitting the model, it came out with a median timeline of 2037. While I don't put a high degree of confidence in my model, or the parameters that I used, I believe it's still more reliable than my own intuition, which suggested much later dates were more plausible. I reflected more on the possibility that short-term AI progress will accelerate AI progress. I noticed that I had been underestimating the returns to scaling, and the possibility of large companies scaling their training budgets quickly to the $10B-$100B level. I am still unsure that this will happen within the next 10 years, but it no longer seems like something I should dismiss. I saw almost everyone else updating towards shorter timelines, except for people who already had 5-15 year timelines, and a few other people like Robin Hanson. Even after adjusting for the bandwagon effect, I think it's now appropriate to update substantially as well. I still feel like my arguments for expecting delays from regulation are being underrated. Yet, the degree to which governments (and perhaps society more generally) have been ignoring recent AI developments has made me less confident about how much we should expect these delays to last. Instead of imagining a 20 year delay, a 3 to 10 year delay from regulation now seems more reasonable to me.
If you want me to get specific, my unconditional median TAI timeline is now something like 2047, with a mode around 2035, defined by the first year we get >30% yearly GWP growth as measured from a prior peak, or an event of comparable significance. This timeline may still appear too long to many people, yet my explanation is that it's what I get when I account for potential coordinated delays, catastrophes, and a 15% chance that we're fundamentally wrong about all of this stuff. Conditional on nothing like that happening, I'd be inclined to weakly bet on TAI before 2039. Thanks for listening. To help us out with The Nonlinear Library or t...
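The arithmetic behind an unconditional median like this can be illustrated with a small Monte Carlo. The distributions below are my own stand-in assumptions, not the author's actual (unpublished) model, but they show how a conditional picture centered in the late 2030s plus a few years of coordinated delay and a 15% chance of being fundamentally wrong pushes the unconditional median out toward the late 2040s.

```python
# Toy Monte Carlo for an unconditional TAI-arrival date (all numbers are my own
# stand-in assumptions, not the author's model).
import random

random.seed(0)
samples = []
for _ in range(100_000):
    if random.random() < 0.15:            # "fundamentally wrong about all of this"
        year = random.uniform(2070, 2150)
    else:
        year = random.gauss(2039, 8)       # conditional-on-the-picture-being-right date (assumed)
        year += random.uniform(3, 10)      # coordinated delays / regulation / setbacks (assumed)
    samples.append(year)

samples.sort()
print("unconditional median:", round(samples[len(samples) // 2]))  # lands in the late 2040s
```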
Guest Speaker: Pastor Matthew Barnett of Angelus Temple and the L.A. Dream Center | Online at www.theassembly.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A conversation about Katja's counterarguments to AI risk, published by Matthew Barnett on October 18, 2022 on LessWrong. This post is a transcript of a conversation between Ege Erdil and Ronny Fernandez, recorded by me. The participants talked about a recent post by Katja Grace that presented many counterarguments to the basic case for AI x-risk. You might want to read that post first. As it was transcribed automatically by Whisper, along with some light editing on top, there might be many mistakes in the transcript. You can also find the audio file here, but note that the first five lines of the dialogue do not appear in the audio. Ronny Fernandez wants to make clear that he did not know the conversation would eventually be published. (But all participants have consented to it being published). Ege ErdilYou know about Epoch, right? Ronny FernandezNot really, actually. Ege ErdilOK, so Epoch is a recently established organization that focuses on forecasting future developments in AI. And Matthew is currently working there and hosted this post, like, with Slack. So there's some discussion of that. Ronny FernandezGotcha. Ege ErdilAnd he basically said that he thought that it was a good post. We didn't. I mean, it's not just me. There's also someone else who thought that it wasn't really very good. And that's just something he said, oh, you know, I should talk to Ronny because he also thinks that the post was good. Ronny FernandezYeah, sweet. So question I want to ask is, do you think the post wasn't good because you think that the summary of the basic argument for existential risk from superintelligence is not the right argument to be criticizing? Or do you think the post wasn't good because the criticisms weren't good of that argument or for some other reason? Ege ErdilI think it's both. I think there's somewhat of criticisms are, I think, quite poor. But I also think the summary of the argument is not necessarily that great. So yeah. So we could also go through the post and order that arguments are given. Or I could just talk about the parts that we talked about. Ronny FernandezI'll pull up the post now. No, I think it'd be great to take a look at the argument and talk about how you would repartition the premises or change the premises or whatever. Ege ErdilYeah. So let me first go to the last section, because I think it is actually interesting in the sense that the given arguments could in principle apply to corporations. And the post points this out. This is an example. It's been given before. And it says this argument proves too much, because it also proves that we have to be very worried about corporations. There's a certain logic to that. But I think what is overlooked in the post is that the reason we don't worry much about corporations is that corporations are very bad at coordinating, even internally, let alone across different corporations cooperating. So I think the main reason we should be worried about AI, and there are some scenarios in which takeoff is very fast, and one guy builds something in his basement, and that takes over the world. But I think those scenarios are pretty unlikely. But I do think if you have that kind of view, the post is going to be pretty unconvincing to you, because it doesn't really argue against that situation. But I don't really believe that's plausible. 
For me, the big problem is that the post does not focus on the fact that it will be much, much easier for AIs that have different goals to coordinate than it should be for humans. Because AIs can have very straightforward protocols that, especially if they are goal-directed in the sense defined by the post, you just take two goals and merge them according to some weights, and you have this new AI that's bigger. And it's a mixture of the two things that was before. AIs can coor...
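The "merge two goals according to some weights" idea mentioned in the conversation can be written down in its simplest form as a weighted sum of utility functions. The sketch below is my own minimal illustration; actual goal merging or bargaining between AI systems would be far more involved.

```python
# Minimal sketch of merging two agents' goals by weights (my own illustration only).
from typing import Callable

Utility = Callable[[str], float]

def merge_goals(u1: Utility, u2: Utility, w1: float = 0.5, w2: float = 0.5) -> Utility:
    """Return a single utility function for the merged agent: a weighted sum of the two goals."""
    return lambda outcome: w1 * u1(outcome) + w2 * u2(outcome)

u_a: Utility = lambda o: 1.0 if o == "outcome_a" else 0.0
u_b: Utility = lambda o: 1.0 if o == "outcome_b" else 0.0
u_merged = merge_goals(u_a, u_b, w1=0.7, w2=0.3)
print(u_merged("outcome_a"), u_merged("outcome_b"))  # 0.7 0.3
```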
Our Mission:Choosing JesusChasing FreedomDiscovering Our GiftsServing JesusOur Vision:We exist to inspire people to become devoted followers of Jesus.WAYS TO CONNECT WITH UShttps://www.thecallingla.com/connectWebsite: https://www.thecallingla.comGive: https://www.thecallingla.com/giveInstagram: https://www.instagram.com/thecallingchurchFacebook: https://www.facebook.com/thecallinglaPodcast: https://podcasts.apple.com/us/podcast/the-calling-church/id1458667481SUNDAYS AT 11AMIn Person in Pasadena: 300 S. Madre St., Pasadena& Online Live here on Youtube#TheCalling #TheCallingChurch #Pasadena
Building great products starts with a strong desire to solve a problem you have encountered in your own experience. But there is more to it than that; you have to think about what makes it different, the value it brings to people, your brand's messaging, and much more.Matthew Barnett knows what it takes to build SaaS products that thrive. Matthew is the Founder and Papa Bear of Bonjoro, and today, he shares what SaaS founders need to do to develop products that people will love and help them grow successful businesses.In this episode, we discuss:Why are industry experts more likely to build successful startups?Biggest fears and challenges startup founders experienceHow can you drive customer loyalty?How should founders approach brand building?The Right Founder for the Right ProductIndustry experts are more likely to succeed when building a business. They are the ones who experience and understand industry problems, thus developing products that solve them is much easier.I think a lot of the most successful startups you see come from people who have a problem in a specific industry - Matt BarnettPrioritize Your Growth StrategyFrom day one, think about your specific growth mechanism, which depends on your company. Every business is different. Some are sales-driven, marketing-led, or ads-led. What works for one company doesn't mean it will work for you. I think it's extremely important…you have a mechanism that can do that - Matt BarnettHow Can You Drive Customer Loyalty?When people mention loyalty, they think about loyalty cards, discounts, and other benefits. However, loyalty is the ability to increase the lifetime value of your customers and generate advocacy, which leads to net new customers.If you nail loyalty, you increase the lifetime value of every customer. So every customer spends more and stays longer, and then you increase that by basing your customers as its growth channel - Matt BarnettYour Brand is More Than the LogoMany founders don't understand what a brand represents. Most just think about the logo and stop there. Besides visual elements, a brand is also how you talk to customers, what you stand for, your views, values, and people.A brand is basically like the external facing piece of your culture - Matt BarnettFor more interviews from the SaaS Origin Stories podcast, check us out on Apple, Spotify or your favorite podcast player!
Join Kara McKinney as she sits down with Clayton Kershaw, Matthew Barnett, Ellen Kershaw, Mike Puglise, Justin Hart, and Kristina Wong to talk about the issues of the day.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Most Important Century: The Animation, published by Writer on July 24, 2022 on LessWrong. This is a linkpost for the Rational Animations' video based on The Most Important Century sequence of posts by Holden Karnofsky. Below, the whole script of the video. Matthew Barnett has written most of it. Several paragraphs were written by Ajeya Cotra, during the feedback process. Holden Karnofsky has also reviewed the script and made suggestions. I made some light edits and additions. Crossposted to the EA-Forum. A very unusual time period. The celebrated science fiction author and chemistry professor Isaac Asimov once cataloged a history of inventions and scientific discoveries throughout all of human history. While incomplete, his effort still reveals something intriguing about our current era. Out of the 694 pages in Asimov's book, 553 pages documented inventions and discoveries since 1500, even though his book starts in 4 million BC. In other words, throughout human history, most scientific innovation has come relatively recently, within only the last few hundred years. Other historical trends paint a similar picture. For example, here's a chart of world population since 10,000 BC. For nearly all of human history up until quite recently, there weren't very many people on Earth. It took until about 1800 for the population to reach one billion people, and just two hundred years later – a blink of an eye compared to how long our species has been around – Earth reached six billion people. Economic historian Bradford DeLong attempted to piece together the total world economic production over the last million years. By its nature, his reconstruction of the historical data is speculative, but the rough story it tells is consistent with the aforementioned historical trends in population and technology. In the millennia preceding the current era, economic growth – by which we mean growth in how much valuable stuff humanity as a whole can produce – was extremely slow. Now, growth is much faster. Bradford DeLong's data provides historians a quantitative account of what they already know from reading narratives written in the distant past. For nearly all of human history, people lived similarly to the way their grandparents lived. Unlike what we expect today, most people did not see major changes in living standards, technology, and economic production over their lifetimes. To be sure, people were aware that empires rose and fell, infectious disease ravaged communities, and wars were fought. Individual humans saw profound change in their own lives, through the births and deaths of those they loved, cultural change, and migration. But the idea of a qualitatively different mode of life, with electricity, computers and the prospect of thermonuclear war – that's all come extremely recently on historical timescales. As new technologies were developed, quality of life shot up in various ways. For ten-year-olds, life expectancy was once under 60 all over the world. Now, in many nations, a ten-year-old can expect to live to the age of 80. With progress in automating food production, fewer people now are required to grow food. As a result, our time has been freed to pursue different activities, for example, going to school. And it's not just technology that changed.
In the past, people took for granted some social institutions that had existed for thousands of years, such as the monarchy and chattel slavery. In the midst of the industrial revolution, these institutions began to vanish. In 1763, the eminent British economist Adam Smith wrote that slavery “takes place in all societies at their beginning, and proceeds from that tyrannic disposition which may almost be said to be natural to mankind.” While Adam Smith persona...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Most Important Century: The Animation, published by Writer on July 24, 2022 on The Effective Altruism Forum. This is a linkpost for the Rational Animations' video based on The Most Important Century sequence of posts by Holden Karnofsky. Below, the whole script of the video. Matthew Barnett has written most of it. Several paragraphs were written by Ajeya Cotra, during the feedback process. Holden Karnofsky has also reviewed the script and made suggestions. I made some light edits and additions. Matthew has also made some additions that weren't in the original sequence by Holden. Crossposted to LessWrong. A very unusual time period. The celebrated science fiction author and chemistry professor Isaac Asimov once cataloged a history of inventions and scientific discoveries throughout all of human history. While incomplete, his effort still reveals something intriguing about our current era. Out of the 694 pages in Asimov's book, 553 pages documented inventions and discoveries since 1500, even though his book starts in 4 million BC. In other words, throughout human history, most scientific innovation has come relatively recently, within only the last few hundred years. Other historical trends paint a similar picture. For example, here's a chart of world population since 10,000 BC. For nearly all of human history up until quite recently, there weren't very many people on Earth. It took until about 1800 for the population to reach one billion people, and just two hundred years later – a blink of an eye compared to how long our species has been around – Earth reached six billion people. Economic historian Bradford DeLong attempted to piece together the total world economic production over the last million years. By its nature, his reconstruction of the historical data is speculative, but the rough story it tells is consistent with the aforementioned historical trends in population and technology. In the millennia preceding the current era, economic growth – by which we mean growth in how much valuable stuff humanity as a whole can produce – was extremely slow. Now, growth is much faster. Bradford DeLong's data provides historians a quantitative account of what they already know from reading narratives written in the distant past. For nearly all of human history, people lived similarly to the way their grandparents lived. Unlike what we expect today, most people did not see major changes in living standards, technology, and economic production over their lifetimes. To be sure, people were aware that empires rose and fell, infectious disease ravaged communities, and wars were fought. Individual humans saw profound change in their own lives, through the births and deaths of those they loved, cultural change, and migration. But the idea of a qualitatively different mode of life, with electricity, computers and the prospect of thermonuclear war – that's all come extremely recently on historical timescales. As new technologies were developed, quality of life shot up in various ways. For ten-year-olds, life expectancy was once under 60 all over the world. Now, in many nations, a ten-year-old can expect to live to the age of 80. With progress in automating food production, fewer people now are required to grow food.
As a result, our time has been freed to pursue different activities, for example, going to school. And it's not just technology that changed. In the past, people took for granted some social institutions that had existed for thousands of years, such as the monarchy and chattel slavery. In the midst of the industrial revolution, these institutions began to vanish. In 1763, the eminent British economist Adam Smith wrote that slavery “takes place in all societies at their beginning, and proceeds from t...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Preventing a US-China war as a policy priority, published by Matthew Barnett on June 22, 2022 on The Effective Altruism Forum. Note: I am not an expert on foreign policy. This is a revised cross-post from my Facebook wall. In the 21st century, China has emerged as a powerful economic and military force. Yet they still have substantial room to grow, and some models predict that they will attain their apex power by the mid-2030s, after which demographic trends will decrease their competitiveness relative to the United States and the West more generally. Direct confrontation between rival great powers is not inevitable, as we saw during the Cold War. Unfortunately, this time may be different, as the question of Taiwanese sovereignty weighs heavily in the background, and a number of factors are now coming together that make war more plausible. In case you're unaware of what's going on with China and Taiwan, I'll explain the gist of their geopolitical situation, and then go on to explain why I've recently become worried about a potential war between the US and China. Since a war between the US and China could easily become catastrophic, and have downstream consequences on existential risk mitigation, it is imperative for EAs to figure out how we should prepare for, and address, the situation. The situation with Taiwan. The official situation is as follows. The government of China (the People's Republic of China) claims to be the legitimate government of all of China, and that Taiwan is part of China. In turn, the government of Taiwan (the Republic of China) claims to be the legitimate government of China, and that the mainland is part of China. Unofficially, only China — the PRC — claims to be the real China. The majority of the population of Taiwan simply want to be left alone, as a sovereign nation—which they already are, in every practical sense. The reason why Taiwan plays along with this ruse about who the 'real China' is, is because if they didn't, mainland China would invade them. Here's why. The Chinese leadership really care about controlling Taiwan. The ultimate reason is kind of complicated, but the short story is that the Chinese government sees itself as in the line of succession from the Qing dynasty, which held power over Taiwan for centuries. From about 1839 onward, however, China began being carved up by hostile imperialist powers, in a period often referred to as the Century of Humiliation. In the midst of all of this, in 1895, China ceded Taiwan to the Japanese, who demanded it in their treaty (deemed an "unequal treaty") at the conclusion of the First Sino-Japanese War. Since then, a version of Taiwan that is independent from China has been viewed as a symbol of Chinese weakness, and of its subjugation at the hands of foreign powers—an insult to the dream of completed Chinese nationalism. As Alison Kaufman put it, This persistent feeling of insecurity today is used by China's leadership – and by its people – to frame both China's current national concerns and its future national aspirations. China is often portrayed as having suffered three kinds of loss during the Century of Humiliation: a loss of territory; a loss of control over its internal and external environment; and a loss of international standing and dignity. Each of these represents an injustice to be rectified.
On the issue of territory, there is a fairly straightforward consensus that China's work is not yet done. From the height of China's regional power during the Ming Dynasty (1368-1644) to its nadir in the 1920s, China effectively lost control over one-third of its territory, a process that later came to be referred to as being “carved up like a melon” (guafen). Thus far the PRC has been able to reassert control over Tibet, Xinjiang, and Hong Kong, but not over Taiwan – and the view is nearly u...
Listen in with us as we are joined by Pastor Matthew Barnett, co-founder of the Dream Center and the Senior Pastor of Angelus Temple in Los Angeles, California.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Counterbalancing “It's time for EA leadership to pull the short-timelines fire alarm”, published by Matthew Barnett on April 9, 2022 on LessWrong.

Recently, a post claimed that “it seems very possible (>30%) that we are now in the crunch-time section of a short-timelines world, and that we have 3-7 years until Moore's law and organizational prioritization put these systems at extremely dangerous levels of capability.” We (Tamay Besiroglu and I) think this claim is strongly overstated, and disagree with the suggestion that “It's time for EA leadership to pull the short-timelines fire alarm.” This post received a fair amount of attention, and we are concerned about a view of the type expounded in the post causing EA leadership to try something hasty and ill-considered. To counterbalance this view, we express our disagreement with the post.

To substantiate and make concrete our disagreement, we are offering to bet up to $1000 against the idea that we are in the “crunch-time section of a short-timelines” world. In particular, we are willing to bet at 1:1 odds that no more than one of the following events will occur by 2026-01-01, or alternatively, at 3:1 odds (in our favor) that no more than one of the following events will occur by 2030-01-01.

- A model/ensemble of models achieves >80% on all tasks in the MMLU benchmark
- A credible estimate reveals that an AI lab deployed EITHER >10^30 FLOPs OR hardware that would cost $1bn if purchased through competitive cloud computing vendors at the time on a training run to develop a single ML model (excluding autonomous driving efforts)
- A model/ensemble of models will achieve >90% on the MATH dataset using a no-calculator rule
- A model/ensemble of models achieves >80% top-1 strict accuracy on competition-level problems on the APPS benchmark
- A gold medal for the IMO Grand Challenge (conditional on it being clear that the questions were not in the training set)
- A robot that can, from beginning to end, reliably wash dishes, take them out of an ordinary dishwasher and stack them into a cabinet, without breaking any dishes, and at a comparable speed to humans (…)

These conditions are operationalized as follows:

- >80% on all tasks in the MMLU benchmark: A public document reveals that a model or ensemble of models had an average accuracy score of more than 80.0% on all tasks listed in the Measuring Massive Multitask Language Understanding benchmark by Dan Hendrycks et al.
- A credible estimate reveals that an AI lab deployed EITHER >10^30 FLOPs OR hardware that would cost $1bn if purchased through competitive cloud computing vendors at the time on a training run to develop a single ML model (excluding autonomous driving efforts): If there is a dispute, our counterparty will offer an example that they believe may trigger this condition to resolve positively. Then, we will use the method described in this post to estimate the number of FLOPs used for a given training run. We will then find the competitive rates by taking the minimum of the most recent prices found on Google Cloud, Microsoft Azure, and Amazon AWS, or the equivalent services corresponding to each of those companies if their names change in the future. We will determine the rate corresponding to the processing unit that would be capable of training the model in the paper. If the training run was completed more than 2 years...
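For a concrete sense of how a FLOPs-and-cloud-cost check like the one described above might be carried out, here is a minimal sketch in Python. It is not the authors' actual resolution procedure: the 6 * N * D compute approximation, the accelerator throughput, the utilization figure, and the hourly rates are all illustrative assumptions, and the function names are hypothetical.

# Illustrative sketch only; not the bet's official resolution procedure.
# Hardware specs, utilization, and hourly prices are hypothetical placeholders,
# and 6 * N * D is just a common rough approximation for dense-transformer
# training compute.

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common 6 * N * D heuristic."""
    return 6.0 * n_params * n_tokens


def estimate_cloud_cost_usd(total_flops: float,
                            device_flops_per_sec: float,
                            utilization: float,
                            hourly_rates_usd: dict) -> float:
    """Convert a FLOP count into dollars at the cheapest quoted cloud rate."""
    device_hours = total_flops / (device_flops_per_sec * utilization) / 3600.0
    cheapest_rate = min(hourly_rates_usd.values())  # min over GCP, Azure, AWS
    return device_hours * cheapest_rate


if __name__ == "__main__":
    flops = estimate_training_flops(n_params=5e11, n_tokens=1e12)  # made-up run
    cost = estimate_cloud_cost_usd(
        total_flops=flops,
        device_flops_per_sec=3e14,   # placeholder accelerator throughput
        utilization=0.4,             # placeholder hardware utilization
        hourly_rates_usd={"gcp": 2.0, "azure": 2.2, "aws": 1.9},  # placeholders
    )
    print(f"Estimated compute: {flops:.2e} FLOPs")
    print(f"Estimated cost at cheapest cloud rate: ${cost:,.0f}")

Under this kind of sketch, the bet's >10^30 FLOPs and $1bn thresholds would reduce to comparing these two numbers against the stated cutoffs.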
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google's new 540 billion parameter language model, published by Matthew Barnett on April 4, 2022 on LessWrong.

Google just announced a very large language model that achieves SOTA across a very large set of tasks, mere days after DeepMind announced Chinchilla, and their discovery that data-scaling might be more valuable than we thought. Here's the blog post, and here's the paper. I'll repeat the abstract here, with a highlight in bold,

Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
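As a rough illustration of the few-shot setup the abstract refers to, here is a minimal sketch of prompt construction in Python: a handful of solved examples are placed directly in the prompt and the model is asked to continue the pattern, with no task-specific fine-tuning or gradient updates. The sentiment task, the examples, and the helper function are hypothetical and are not PaLM's actual interface.

# Minimal sketch of few-shot prompting. The task and examples are made up;
# a real evaluation would send the assembled prompt to a language model.

FEW_SHOT_EXAMPLES = [
    ("The plot dragged, but the ending was worth it.", "positive"),
    ("I checked my watch every five minutes.", "negative"),
    ("A warm, funny film the whole family enjoyed.", "positive"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble a plain-text prompt from solved examples plus the new query."""
    blocks = [f"Review: {text}\nSentiment: {label}"
              for text, label in FEW_SHOT_EXAMPLES]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

if __name__ == "__main__":
    print(build_few_shot_prompt("Gorgeous visuals, muddled story."))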
HIGHLIGHTS
Josh Cooper, Skoop Digital
Matthew Barnett, Bonjoro
Shashkia Edwards, MiTribe
Phil Buckley, Change With Confidence
Luisa El Bouyahyani, LuBU
Whitney Osei-Akintaju, Ethnic District
Tenlie Mourning, Dendwell
J. Caleb Perkins, Remedy Networks
Silvia Vanni, ShareMyBag
Abby Lyall, BIV
Taylor King, TK Creative

You can connect with TBNE's amazing guests in the links below:
Josh Cooper - https://www.linkedin.com/in/josh-cooper-55426b120/
Matthew Barnett - https://www.linkedin.com/in/mbjbarnett/
Shaskia Edwards - https://www.linkedin.com/company/mi-tribe-ltd/
Phil Buckley - http://www.linkedin.com/in/philbuckley01
Luisa El Bouyahyani - https://www.linkedin.com/in/luisaelb/
Whitney Osei-Akintaju - https://www.linkedin.com/in/whitneyosei/
Tenlie Mourning - https://www.linkedin.com/in/tenlie/
J. Caleb Perkins - https://www.linkedin.com/in/j-caleb-perkins-aa557b53/
Silvia Vanni - https://www.linkedin.com/in/silvia-vanni/
Abby Lyall - https://www.linkedin.com/in/abigaillyall/
Taylor King - https://www.youtube.com/c/UnCaffeinated

Don't forget to subscribe and leave a review.
Connect with Rob: https://beacons.page/RobNapoli or on LinkedIn, www.linkedin.com/in/robnap.
We have teamed up with Phin, a social impact marketing firm, to give back for each episode. To learn more, visit: https://app.phinforgood.com.
Also, if you haven't already, test out the Bonjoro platform for yourself for 14 days on their free trial.

Stay in touch with the Papa Bear:
LinkedIn: https://www.linkedin.com/in/mbjbarnett/

Follow Bonjoro:
https://www.bonjoro.com/
IG: https://www.instagram.com/bonjoroapp/
LinkedIn: https://www.linkedin.com/company/bonjoro/

Don't forget to subscribe and leave a review.
Connect with Rob: https://beacons.page/RobNapoli or on LinkedIn, www.linkedin.com/in/robnap.
We have teamed up with Phin, a social impact marketing firm, to give back for each episode. To learn more, visit: https://app.phinforgood.com.