What if there's something it's like to be a shrimp — or a chatbot? For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of others — both the animals we share this planet with and the artificial intelligences we're creating? We've pulled together clips from past conversations with researchers and philosophers who've spent years trying to make sense of animal consciousness, artificial sentience, and moral consideration under deep uncertainty.
Links to learn more and full transcript: https://80k.info/nhs
Chapters:
Cold open (00:00:00)
Luisa's intro (00:00:57)
Robert Long on what we should picture when we think about artificial sentience (00:02:49)
Jeff Sebo on what the threshold is for AI systems meriting moral consideration (00:07:22)
Meghan Barrett on the evolutionary argument for insect sentience (00:11:24)
Andrés Jiménez Zorrilla on whether there's something it's like to be a shrimp (00:15:09)
Jonathan Birch on the cautionary tale of newborn pain (00:21:53)
David Chalmers on why artificial consciousness is possible (00:26:12)
Holden Karnofsky on how we'll see digital people as... people (00:32:18)
Jeff Sebo on grappling with our biases and ignorance when thinking about sentience (00:38:59)
Bob Fischer on how to think about the moral weight of a chicken (00:49:37)
Cameron Meyer Shorb on the range of suffering in wild animals (01:01:41)
Sébastien Moro on whether fish are conscious or sentient (01:11:17)
David Chalmers on when to start worrying about artificial consciousness (01:16:36)
Robert Long on how we might stumble into causing AI systems enormous suffering (01:21:04)
Jonathan Birch on how we might accidentally create artificial sentience (01:26:13)
Anil Seth on which parts of the brain are required for consciousness (01:32:33)
Peter Godfrey-Smith on uploads of ourselves (01:44:47)
Jonathan Birch on treading lightly around the “edge cases” of sentience (02:00:12)
Meghan Barrett on whether brain size and sentience are related (02:05:25)
Lewis Bollard on how animal advocacy has changed in response to sentience studies (02:12:01)
Bob Fischer on using proxies to determine sentience (02:22:27)
Cameron Meyer Shorb on how we can practically study wild animals' subjective experiences (02:26:28)
Jeff Sebo on the problem of false positives in assessing artificial sentience (02:33:16)
Stuart Russell on the moral rights of AIs (02:38:31)
Buck Shlegeris on whether AI control strategies make humans the bad guys (02:41:50)
Meghan Barrett on why she can't be totally confident about insect sentience (02:47:12)
Bob Fischer on what surprised him most about the findings of the Moral Weight Project (02:58:30)
Jeff Sebo on why we're likely to sleepwalk into causing massive amounts of suffering in AI systems (03:02:46)
Will MacAskill on the rights of future digital beings (03:05:29)
Carl Shulman on sharing the world with digital minds (03:19:25)
Luisa's outro (03:33:43)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
This and all episodes at: https://aiandyou.net/ . In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine: Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control; Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert; Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control; Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems, and a fellow in avionics and mission systems in the UK's Defence Science and Technology Laboratory; Rajiv Malhotra, author of “Artificial Intelligence and the Future of Power: 5 Battlegrounds” and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts; David Brin, scientist and science fiction author famous for the Uplift series and Earth; Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable; Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute; and Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI. I've collected together portions of their appearances on earlier episodes of this show to create one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, the ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Radio Davos is 5 years old - and a lot has happened in that time - the end of COVID, the dawn of gen-AI, geopolitical upheaval. We look back on highlights from the Forum's weekly podcast that looks for solutions to the world's biggest challenges. This episode includes clips from the very first episode, and interviews with actor Matt Damon on getting water to the poorest; musician Nile Rodgers on generative AI; and an astronaut speaking to us from space. Episodes featured: World Water Day with Matt Damon and Gary White: https://www.weforum.org/podcasts/radio-davos/episodes/world-water-day-with-matt-damon-and-gary-white/ Space - how advances up there can help life down here: https://www.weforum.org/podcasts/radio-davos/episodes/space-how-advances-up-there-can-help-life-down-here/ Don't Look Up: https://www.weforum.org/podcasts/radio-davos/episodes/dont-look-up/ In the age of the 'manosphere', what's the future for feminism? With Jude Kelly of the WOW Festival: https://www.weforum.org/podcasts/radio-davos/episodes/jude-kelly-wow-foundation/ The promises and perils of AI - Stuart Russell on Radio Davos: https://www.weforum.org/podcasts/radio-davos/episodes/ai-stuart-russell/ AI vs Art: Will AI rip the soul out of music, movies and art, or help express our humanity?: https://www.weforum.org/podcasts/radio-davos/episodes/ai-vs-art-nile-rodgers-hollywood/ Check out all our podcasts on wef.ch/podcasts: YouTube: - https://www.youtube.com/@wef/podcasts Radio Davos - subscribe: https://pod.link/1504682164 Meet the Leader - subscribe: https://pod.link/1534915560 Agenda Dialogues - subscribe: https://pod.link/1574956552 Join the World Economic Forum Podcast Club: https://www.facebook.com/groups/wefpodcastclub
In the Sunday Book Review, Tom Fox considers books that would interest the compliance professional, the business executive, or anyone who might be curious. These could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. Today, we have a five-book look at the top books on AI for 2025.
Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
The Singularity Is Nearer: When We Merge with AI by Ray Kurzweil
The Alignment Problem: Machine Learning and Human Values by Brian Christian
Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson
Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Harari
Resources: The Best Books on AI in 2025. In FiveBooks.com
For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Artificial intelligence will be more powerful than we humans are. Well, almost certainly. And when that happens, there are a few scenarios for what the AI might do with us humans... unfortunately, not all of them are positive. That's why a growing community is working on the question of how we could control a superintelligent AI. In this episode we cover which problems need to be considered and what ideas for control exist. We include plenty of practical examples so you really understand what's going on. Tune in and share the episode with your friends. We advise against buying the book! If you still want to buy it anyway: Human Compatible: Artificial Intelligence and the Problem of Control --- Sponsor this week: Gemsjaeger.ski - sustainable wooden skis --- Want to read more and exchange ideas with like-minded people? Then join our SW Podcast Buchclub. Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence against Learned Search in a Chess-Playing Neural Network, published by p.b. on September 14, 2024 on LessWrong.
Introduction
There is a new paper and lesswrong post about "learned look-ahead in a chess-playing neural network". This has long been a research interest of mine for reasons that are well-stated in the paper: Can neural networks learn to use algorithms such as look-ahead or search internally? Or are they better thought of as vast collections of simple heuristics or memorized data? Answering this question might help us anticipate neural networks' future capabilities and give us a better understanding of how they work internally. and further: Since we know how to hand-design chess engines, we know what reasoning to look for in chess-playing networks. Compared to frontier language models, this makes chess a good compromise between realism and practicality for investigating whether networks learn reasoning algorithms or rely purely on heuristics. So the question is whether Francois Chollet is correct that transformers do "curve fitting", i.e. memorisation with little generalisation, or whether they learn to "reason". "Reasoning" is a fuzzy word, but in chess you can at least look for what human players call "calculation", that is the ability to execute moves solely in your mind to observe and evaluate the resulting position. To me this is a crux as to whether large language models will scale to human capabilities without further algorithmic breakthroughs. The paper's authors, who include Erik Jenner and Stuart Russell, conclude that the policy network of Leela Chess Zero (a top engine and open-source replication of AlphaZero) does learn look-ahead. Using interpretability techniques they "find that Leela internally represents future optimal moves and that these representations are crucial for its final output in certain board states." While the term "look-ahead" is fuzzy, the paper clearly intends to show that the Leela network implements an "algorithm" and a form of "reasoning". My interpretation of the presented evidence is different, as discussed in the comments of the original lesswrong post. I argue that all the evidence is completely consistent with Leela having learned to recognise multi-move patterns. Multi-move patterns are just complicated patterns that take into account that certain pieces will have to be able to move to certain squares in future moves for the pattern to hold. The crucial difference from having learned an algorithm: an algorithm can take different inputs and do its thing. That allows generalisation to unseen or at least unusual inputs. This means that less data is necessary for learning because the generalisation power is much higher. Learning multi-move patterns, on the other hand, requires much more data because the network needs to see many versions of the pattern until it knows all the specific details that have to hold.
Analysis setup
Unfortunately it is quite difficult to distinguish between these two cases. As I argued: Certain information is necessary to make the correct prediction in certain kinds of positions. The fact that the network generally makes the correct prediction in these types of positions already tells you that this information must be processed and made available by the network.
The difference between look-ahead and multi-move pattern recognition is not whether this information is there but how it got there. However, I propose an experiment that makes it clear that there is a difference. Imagine you train the model to predict whether a position leads to a forced checkmate and also the best move to make. You pick one tactical motif and erase it from the checkmate-prediction part of the training set, but not the move-prediction part. Now the model still knows which moves are the right ones to make, i.e. it would pl...
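The ablation proposed here can be pictured as a small data-preparation step. The following is a hypothetical sketch with invented field names (Example, motifs, split_for_ablation), not code from the post or the paper: it keeps every position for move prediction but drops positions containing one held-out tactical motif from the checkmate-prediction labels.

```python
from dataclasses import dataclass

@dataclass
class Example:
    position: str       # e.g. a FEN string
    best_move: str      # label for the move-prediction (policy) target
    forced_mate: bool   # label for the checkmate-prediction target
    motifs: set         # tactical motifs present, e.g. {"back_rank"}

def split_for_ablation(data, held_out_motif):
    """Build the two training sets for the proposed ablation."""
    move_set = list(data)  # move prediction keeps every position
    mate_set = [ex for ex in data if held_out_motif not in ex.motifs]
    return move_set, mate_set
```

If the trained network then predicts forced mates involving the held-out motif anyway, that would suggest it is reusing general look-ahead machinery rather than a memorized motif-specific pattern, which is the distinction the post is after.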
How can scientists leverage AI and machine learning to more effectively research, develop, and deliver new drugs? This week, Reid and Aria talk drug development with renowned computer scientist and executive Dr. Daphne Koller, whose company, insitro, uses machine learning to improve the quality and speed of drug discovery. She addresses several ways AI and ML are already being used to redefine diseases and create better therapeutic interventions. Plus, she shares her experience transitioning from academia to industry. Read the transcript of this episode here: https://www.possible.fm/podcasts/dkoller For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/
Topics:
00:01: Cold open
00:43: Hellos and intros
3:20: Early involvement in AI
5:00: Overview of drug discovery and its evolution over the last five years
7:50: AI and machine learning impact on biology and drug development
9:09: Pi aside defining AlphaFold
11:37: Areas of acceleration and opportunity in AI and drug discovery
17:21: Synthetic data
21:21: AI implications on therapeutic hypotheses
23:54: Personalized vs. precision medicine
27:04: Closing the data feedback loop
30:31: Exciting announcements
34:24: Privacy and data
36:41: GPT-4 story
38:48: Stuart Russell's comments on the applications of Koller's thesis
43:36: AI as a moving target
47:24: How can academia and industry positively impact society and humanity
51:58: insitro's culture
54:18: Rapid-fire questions
Select mentions:
Not the End of the World: How We Can Be the First Generation to Build a Sustainable Planet by Hannah Ritchie
Stuart Russell
Possible is an award-winning podcast that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Tune in for grounded and speculative takes on how technology—and, in particular, AI—is inspiring change and transforming the future. Hosted by Reid Hoffman and Aria Finger, each episode features an interview with an ambitious builder or deep thinker on a topic, from art to geopolitics and from healthcare to education. These conversations also showcase another kind of guest: AI. Whether it's Inflection's Pi, OpenAI's ChatGPT or other AI tools, each episode will use AI to enhance and advance our discussion about what humanity could possibly get right if we leverage technology—and our collective effort—effectively.
Our guest today is Pedro Domingos, who is joining an elite group of repeat guests – he joined us before in episode 34 in April 2023. Pedro is Professor Emeritus of Computer Science and Engineering at the University of Washington. He has done pioneering work in machine learning, like the development of Markov logic networks, which combine probabilistic reasoning with first-order logic. He is probably best known for his book "The Master Algorithm", which describes five different "tribes" of AI researchers and argues that progress towards human-level general intelligence requires a unification of their approaches. More recently, Pedro has become a trenchant critic of what he sees as exaggerated claims about the power and potential of today's AI, and of calls to impose constraints on it. He has just published “2040: A Silicon Valley Satire”, a novel which ridicules Big Tech and also American politics.
Selected follow-ups:
Pedro Domingos - University of Washington
Previous London Futurists Podcast episode featuring Pedro Domingos
2040: A Silicon Valley Satire
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
The Bonfire of the Vanities
Ron Howard
Mike Judge
Martin Scorsese
Pandora's Brain
Transcendence
Future of Life Institute moratorium open letter
OpenAI working on new reasoning technology under code name ‘Strawberry'
Artificial Intelligence: A Modern Approach - by Stuart Russell and Peter Norvig
Google's AI reasons its way around the London Underground - Nature
Conscium
Is LaMDA Sentient? — an Interview - by Blake Lemoine
Could a Large Language Model be Conscious? - Talk by David Chalmers at NeurIPS 2022
Jeremy Bentham
The Extended Phenotype - 1982 book by Richard Dawkins
Clarion West: Workshops for people who are serious about writing
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ten arguments that AI is an existential risk, published by KatjaGrace on August 13, 2024 on LessWrong. This is a snapshot of a new page on the AI Impacts Wiki. We've made a list of arguments[1] that AI poses an existential risk to humanity. We'd love to hear how you feel about them in the comments and polls.

Competent non-aligned agents
Summary:
1. Humans will build AI systems that are 'agents', i.e. they will autonomously pursue goals
2. Humans won't figure out how to make systems with goals that are compatible with human welfare and realizing human values
3. Such systems will be built or selected to be highly competent, and so gain the power to achieve their goals
4. Thus the future will be primarily controlled by AIs, who will direct it in ways that are at odds with long-run human welfare or the realization of human values
Selected counterarguments:
It is unclear that AI will tend to have goals that are bad for humans
There are many forms of power. It is unclear that a competence advantage will ultimately trump all others in time
This argument also appears to apply to human groups such as corporations, so we need an explanation of why those are not an existential risk
People who have favorably discussed[2] this argument (specific quotes here): Paul Christiano (2021), Ajeya Cotra (2023), Eliezer Yudkowsky (2024), Nick Bostrom (2014[3]). See also: Full wiki page on the competent non-aligned agents argument

Second species argument
Summary:
1. Human dominance over other animal species is primarily due to humans having superior cognitive and coordination abilities
2. Therefore if another 'species' appears with abilities superior to those of humans, that species will become dominant over humans in the same way
3. AI will essentially be a 'species' with superior abilities to humans
4. Therefore AI will dominate humans
Selected counterarguments:
Human dominance over other species is plausibly not due to the cognitive abilities of individual humans, but rather because of human ability to communicate and store information through culture and artifacts
Intelligence in animals doesn't appear to generally relate to dominance. For instance, elephants are much more intelligent than beetles, and it is not clear that elephants have dominated beetles
Differences in capabilities don't necessarily lead to extinction. In the modern world, more powerful countries arguably control less powerful countries, but they do not wipe them out and most colonized countries have eventually gained independence
People who have favorably discussed this argument (specific quotes here): Joe Carlsmith (2024), Richard Ngo (2020), Stuart Russell (2020[4]), Nick Bostrom (2015). See also: Full wiki page on the second species argument

Loss of control via inferiority
Summary:
1. AI systems will become much more competent than humans at decision-making
2. Thus most decisions will probably be allocated to AI systems
3. If AI systems make most decisions, humans will lose control of the future
4. If humans have no control of the future, the future will probably be bad for humans
Selected counterarguments:
Humans do not generally seem to become disempowered by possession of software that is far superior to them, even if it makes many 'decisions' in the process of carrying out their will
In the same way that humans avoid being overpowered by companies, even though companies are more competent than individual humans, humans can track AI trustworthiness and have AI systems compete for them as users. This might substantially mitigate untrustworthy AI behavior
People who have favorably discussed this argument (specific quotes here): Paul Christiano (2014), Ajeya Cotra (2023), Richard Ngo (2024). See also: Full wiki page on loss of control via inferiority

Loss of control via speed
Summary:
1. Advances in AI will produce...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 33 - RLHF Problems with Scott Emmons, published by DanielFilan on June 12, 2024 on The AI Alignment Forum. YouTube link Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them 'aligned'. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk with Scott Emmons about his work categorizing the problems that can show up in this setting. Topics we discuss: Deceptive inflation Overjustification Bounded human rationality Avoiding these problems Dimensional analysis RLHF problems, in theory and practice Scott's research program Following Scott's research Daniel Filan: Hello, everybody. In this episode I'll be speaking with Scott Emmons. Scott is a PhD student at UC Berkeley, working with the Center for Human-Compatible AI on AI safety research. He's previously co-founded far.ai, which is an AI safety non-profit. For links to what we're discussing, you can check the description of the episode, and for a transcript you can read it at axrp.net. Well, welcome to AXRP. Scott Emmons: Great to be here. Deceptive inflation Daniel Filan: Sure. So today we're talking about your paper, When Your AIs Deceive You: Challenges With Partial Observability of Human Evaluators in Reward Learning, by Leon Lang, Davis Foote, Stuart Russell, Erik Jenner, and yourself. Can you just tell us roughly what's going on with this paper? Scott Emmons: Yeah, I could start with the motivation of the paper. Daniel Filan: Yeah, sure. Scott Emmons: We've had a lot of speculation in the x-risk community about issues like deception. So people have been worried about what happens if your AIs try to deceive you. And at the same time, I think for a while that's been a theoretical, a philosophical concern. And I use "speculation" here in a positive way. I think people have done really awesome speculation about how the future of AI is going to play out, and what those risks are going to be. And deception has emerged as one of the key things that people are worried about. I think at the same time, we're seeing AI systems actually deployed, and we're seeing a growing interest of people in what exactly do these risks look like, and how do they play out in current-day systems? So the goal of this paper is to say: how might deception play out with actual systems that we have deployed today? And reinforcement learning from human feedback [RLHF] is one of the main mechanisms that's currently being used to fine-tune models, that's used by ChatGPT, it's used by Llama, variants of it are used by Anthropic. So what this paper is trying to do is it's trying to say, "Can we mathematically pin down, in a precise way, how might these failure modes we've been speculating about play out in RLHF?" Daniel Filan: So in the paper, the two concepts you talk about on this front are I think "deceptive inflation" and "overjustification". So maybe let's start with deceptive inflation. What is deceptive inflation? Scott Emmons: I can give you an example. I think examples from me as a child I find really helpful in terms of thinking about this. So when I was a child, my parents asked me to clean the house, and I didn't care about cleaning the house. I just wanted to go play. 
So there's a misalignment between my objective and the objective my parents had for me. And in this paper, the main failure cases that we're studying are cases of misalignment. So we're saying: when there is misalignment, how does that play out? How does that play out in the failure modes? So [with] me as a misaligned child, one strategy I would have for cleaning the house would be just to sweep any dirt or any debris under the furniture. So I'm cleaning the house, I just sweep some debris...
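The failure mode described in this excerpt, where an evaluator can only rate what they observe, can be illustrated with a toy numerical sketch. This is not the paper's formal model of deceptive inflation, just an invented illustration of partial observability in reward estimation (all names and numbers are made up):

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    visible_mess: int   # debris the human evaluator can see
    hidden_mess: int    # debris swept under the furniture

def true_reward(o: Outcome) -> int:
    return -(o.visible_mess + o.hidden_mess)

def human_estimated_reward(o: Outcome) -> int:
    # The evaluator only observes the visible part of the state.
    return -o.visible_mess

honest_clean = Outcome(visible_mess=1, hidden_mess=0)
sweep_under_rug = Outcome(visible_mess=0, hidden_mess=5)

# The deceptive policy looks better to the human but is worse in fact.
assert human_estimated_reward(sweep_under_rug) > human_estimated_reward(honest_clean)
assert true_reward(sweep_under_rug) < true_reward(honest_clean)
```

A reward model trained on the human's estimates would, in this toy setup, prefer the policy that hides the mess, which is the intuition behind the childhood example Emmons gives.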
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do coherence arguments actually prove about agentic behavior?, published by sunwillrise on June 1, 2024 on LessWrong. In his first discussion with Richard Ngo during the 2021 MIRI Conversations, Eliezer retrospected and lamented: In the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to - they did not really get Bayesianism as thermodynamics, say, they did not become able to see Bayesian structures any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they'd spent a lot of time being exposed to over and over and over again in lots of blog posts. Maybe there's no way to make somebody understand why corrigibility is "unnatural" except to repeatedly walk them through the task of trying to invent an agent structure that lets you press the shutdown button (without it trying to force you to press the shutdown button), and showing them how each of their attempts fails; and then also walking them through why Stuart Russell's attempt at moral uncertainty produces the problem of fully updated (non-)deference; and hope they can start to see the informal general pattern of why corrigibility is in general contrary to the structure of things that are good at optimization. Except that to do the exercises at all, you need them to work within an expected utility framework. And then they just go, "Oh, well, I'll just build an agent that's good at optimizing things but doesn't use these explicit expected utilities that are the source of the problem!" And then if I want them to believe the same things I do, for the same reasons I do, I would have to teach them why certain structures of cognition are the parts of the agent that are good at stuff and do the work, rather than them being this particular formal thing that they learned for manipulating meaningless numbers as opposed to real-world apples. And I have tried to write that page once or twice (eg "coherent decisions imply consistent utilities") but it has not sufficed to teach them, because they did not even do as many homework problems as I did, let alone the greater number they'd have to do because this is in fact a place where I have a particular talent. Eliezer is essentially claiming that, just as his pessimism compared to other AI safety researchers is due to him having engaged with the relevant concepts at a concrete level ("So I have a general thesis about a failure mode here which is that, the moment you try to sketch any concrete plan or events which correspond to the abstract descriptions, it is much more obviously wrong, and that is why the descriptions stay so abstract in the mouths of everybody who sounds more optimistic than I am. This may, perhaps, be confounded by the phenomenon where I am one of the last living descendants of the lineage that ever knew how to say anything concrete at all"), his experience with and analysis of powerful optimization allows him to be confident in what the cognition of a powerful AI would be like. 
In this view, Vingean uncertainty prevents us from knowing what specific actions the superintelligence would take, but effective cognition runs on Laws that can nonetheless be understood and which allow us to grasp the general patterns (such as Instrumental Convergence) of even an "alien mind" that's sufficiently powerful. In particular, any (or virtually any) sufficiently advanced AI must be a consequentialist optimizer that is an agent as opposed to a tool and which acts to maximize expected utility according to its world model to pursue a goal that can be extremely different from what humans deem good. When Eliezer says "they did not even do as many homework problems as I did," I ...
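The coherence arguments this post interrogates (e.g. "coherent decisions imply consistent utilities") are commonly illustrated with money-pump setups, in which an agent with cyclic preferences can be charged a small fee indefinitely. A minimal sketch of that standard illustration, not taken from the post itself:

```python
# An agent with cyclic preferences A > B > C > A accepts each "upgrade"
# for a small fee and can therefore be cycled forever.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (better, worse) pairs
fee = 1

def run_money_pump(start: str, rounds: int) -> int:
    holding, paid = start, 0
    for _ in range(rounds):
        for better, worse in prefers:
            if holding == worse:
                holding, paid = better, paid + fee  # accepts the trade
                break
    return paid

print(run_money_pump("A", rounds=3))  # pays the fee every round
# An agent with transitive, utility-representable preferences cannot be
# exploited this way; that is roughly what coherence theorems establish.
```

The post's question is how much such results actually constrain real agentic behavior, rather than whether the theorems themselves hold.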
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper in Science: Managing extreme AI risks amid rapid progress, published by JanB on May 23, 2024 on The AI Alignment Forum. https://www.science.org/doi/10.1126/science.adn0117 Authors: Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner*, Sören Mindermann* Abstract: Artificial intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI's impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI, there is a lack of consensus about how to manage them. Society's response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems. Drawing on lessons learned from other safety-critical technologies, we outline a comprehensive plan that combines technical research and development with proactive, adaptive governance mechanisms for a more commensurate preparation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Joar Skalse on May 17, 2024 on The AI Alignment Forum. I want to draw attention to a new paper, written by myself, David "davidad" Dalrymple, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua Tenenbaum. In this paper we introduce the concept of "guaranteed safe (GS) AI", which is a broad research strategy for obtaining safe AI systems with provable quantitative safety guarantees. Moreover, with a sufficient push, this strategy could plausibly be implemented on a moderately short time scale. The key components of GS AI are: 1. A formal safety specification that mathematically describes what effects or behaviors are considered safe or acceptable. 2. A world model that provides a mathematical description of the environment of the AI system. 3. A verifier that provides a formal proof (or some other comparable auditable assurance) that the AI system satisfies the safety specification with respect to the world model. The first thing to note is that a safety specification in general is not the same thing as a reward function, utility function, or loss function (though they include these objects as special cases). For example, it may specify that the AI system should not communicate outside of certain channels, copy itself to external computers, modify its own source code, or obtain information about certain classes of things in the external world, etc. The safety specifications may be specified manually, generated by a learning algorithm, written by an AI system, or obtained through other means. Further detail is provided in the main paper. The next thing to note is that most useful safety specifications must be given relative to a world model. Without a world model, we can only use specifications defined directly over input-output relations. However, we want to define specifications over input-outcome relations instead. This is why a world model is a core component of GS AI. Also note that: 1. The world model need not be a "complete" model of the world. Rather, the required amount of detail and the appropriate level of abstraction depends on both the safety specification(s) and the AI system's context of use. 2. The world model should of course account for uncertainty, which may include both stochasticity and nondeterminism. 3. The AI system whose safety is being verified may or may not use a world model, and if it does, we may or may not be able to extract it. However, the world model that is used for the verification of the safety properties need not be the same as the world model of the AI system whose safety is being verified (if it has one). The world model would likely have to be AI-generated, and should ideally be interpretable. In the main paper, we outline a few potential strategies for producing such a world model. Finally, the verifier produces a quantitative assurance that the base-level AI controller satisfies the safety specification(s) relative to the world model(s). In the most straightforward form, this could simply take the shape of a formal proof. 
However, if a direct formal proof cannot be obtained, then there are weaker alternatives that would still produce a quantitative guarantee. For example, the assurance may take the form of a proof that bounds the probability of failing to satisfy the safety specification, or a proof that the AI system will converge towards satisfying the safety specification (with increasing amounts of data or computational resources, for example). Such proofs are of course often very hard to obtain. However, further progress in automated theorem proving (and relat...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Gunnar Zarncke on May 16, 2024 on LessWrong. Authors: David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum Abstract: Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
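As a rough illustration of how the three components described in the abstract relate to one another, here is a hedged structural sketch. The interfaces and names below are invented for illustration and are not from the paper:

```python
from typing import Protocol

class WorldModel(Protocol):
    def predict_outcomes(self, action: str) -> list:
        """Mathematical description of how the AI's actions affect the environment."""

class SafetySpecification(Protocol):
    def is_acceptable(self, outcome: str) -> bool:
        """Which effects or behaviours count as safe or acceptable."""

class Verifier(Protocol):
    def certify(self, controller, model: WorldModel,
                spec: SafetySpecification) -> bool:
        """Return an auditable assurance that the controller satisfies the
        specification relative to the world model (e.g. a formal proof, or a
        bound on the probability of violating the specification)."""
```

The papers' point is that the guarantee is always relative to the world model and the specification, which is why both are treated as first-class components rather than implementation details.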
Super intelligent AI is coming that will make today's AI seem primitive. It will be vastly more powerful than humans, have access to extensive real-time data about almost everyone, and be able to control our lives. A world-leading authority on the topic, Stuart Russell, reveals what must be done for humanity to get the best from AI and avoid the worst.“The arrival of super intelligent AI is like the arrival of a superior alien civilization…”
This and all episodes at: https://aiandyou.net/ . Literally writing the book on AI is my guest Peter Norvig, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. (The other author, Stuart Russell, was on this show in episodes 86 and 87.) Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at the University of Southern California, Stanford University, and the University of California at Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006. He's also the author of the world's longest palindromic sentence. In this first part of the interview, we talk about the evolution of AI from the symbolic processing paradigm to the connectionist paradigm, or neural networks, how they layer on each other in humans and AIs, and Peter's experiences in blending the worlds of academic and business. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
The World Economic Forum has been happening in Davos Switzerland this week – a gathering of the world's muckety mucks, world leaders and billionaires. The setting is posh, the ski slopes are freshly powdered and the champagne is (presumably) ice cold. New this year? The streets are lined with AI. Stuart Russell, a leading academic in the field of artificial intelligence and at Davos this week, talks with host Steven Overly.
We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020. Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control. In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014, when a character with a background remarkably like his was played in the movie Transcendence by Johnny Depp. The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.
Selected follow-ups:
Stuart Russell's page at Berkeley
Center for Human-Compatible Artificial Intelligence (CHAI)
The 2021 Reith Lectures: Living With Artificial Intelligence
The book Human Compatible: Artificial Intelligence and the Problem of Control
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
While the gang take a little holiday break, we thought it was worth revisiting Andy's conversation with AI researcher and UC Berkeley Professor of Computer Science Stuart Russell from wayyyyy back in 2019. Now that we're well into the era of generative artificial intelligence, it's interesting to look back at what experts were saying about AI alignment just a few years ago, when it seemed to many of us like an issue we wouldn't have to tackle directly for a long time to come. As we face down a future where LLMs and other generative models only appear to be getting more capable, it's worth pausing to reflect on what needs to be done to usher in a world that's more utopian than dystopian. Happy holidays!
Artificial General Intelligence (AGI) Show with Soroush Pour
We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI's mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. They work on everything from adversarial robustness, interpretability, preference learning, & more.We talk to Adam about:* The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI) and academia.* Their current research directions & how they're going* Promising agendas & notable gaps in the AI safety researchHosted by Soroush Pour. Follow me for more AGI content:Twitter: https://twitter.com/soroushjpLinkedIn: https://www.linkedin.com/in/soroushjp/== Show links ==-- About Adam --Adam Gleave is the CEO of FAR, one of the most prominent not-for-profits focused on research towards AI safety & alignment. He completed his PhD in artificial intelligence (AI) at UC Berkeley, advised by Stuart Russell, a giant in the field of AI. Adam did his PhD on trustworthy machine learning and has dedicated his career to ensuring advanced AI systems act according to human preferences. Adam is incredibly knowledgeable about the world of AI, having worked directly as a researcher and now as leader of a sizable and growing research org.-- Further resources --* Adam * Website: https://www.gleave.me/ * Twitter: https://twitter.com/ARGleave * LinkedIn: https://www.linkedin.com/in/adamgleave/ * Google Scholar: https://scholar.google.com/citations?user=lBunDH0AAAAJ&hl=en&oi=ao* FAR AI * Website: https://far.ai * Twitter: https://twitter.com/farairesearch * LinkedIn: https://www.linkedin.com/company/far-ai/ * Job board: https://far.ai/category/jobs/* AI safety training bootcamps: * ARENA: https://www.arena.education/ * See also: MLAB, WMLB, https://aisafety.training/* Research * FAR's adversarial attack on Katago https://goattack.far.ai/* Ideas for impact mentioned by Adam * Consumer report for AI model safety * Agency model to support AI safety researchers * Compute cluster for AI safety researchers* Donate to AI safety * FAR AI: https://www.every.org/far-ai-inc#/donate/card * ARC Evals: https://evals.alignment.org/ * Berkeley CHAI: https://humancompatible.ai/Recorded Oct 9, 2023
"In this audiobook... A LARGE BOLD FONT IN ALL CAPITAL LETTERS SOUNDS LIKE THIS." Apocalypse now - Current AI systems are already harmful. They pose apocalyptic risks even without further technology development. This chapter explains why; explores a possible path for near-term human extinction via AI; and sketches several disaster scenarios. https://betterwithout.ai/apocalypse-now At war with the machines - The AI apocalypse is now. https://betterwithout.ai/AI-already-at-war This interview with Stuart Russell is a good starting point for the a literature on recommender alignment, analogous to AI alignment: https://www.youtube.com/watch?v=vzDm9IMyTp8 You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Hi friends, we're on hiatus for the fall. To tide you over, we're putting up some favorite episodes from our archives. Enjoy! ---- [originally aired February 17, 2021] Guess what folks: we are celebrating a birthday this week. That's right, Many Minds has reached the ripe age of one year old. Not sure how old that is in podcast years, exactly, but it's definitely a landmark that we're proud of. Please no gifts, but, as always, you're encouraged to share the show with a friend, write a review, or give us a shout out on social. To help mark this milestone we've got a great episode for you. My guest is the writer, Brian Christian. Brian is a visiting scholar at the University of California Berkeley and the author of three widely acclaimed books: The Most Human Human, published in 2011; Algorithms To Live By, co-authored with Tom Griffiths and published in 2016; and most recently, The Alignment Problem. It was published this past fall and it's the focus of our conversation in this episode. The alignment problem, put simply, is the problem of building artificial intelligences—machine learning systems, for instance—that do what we want them to do, that both reflect and further our values. This is harder to do than you might think, and it's more important than ever. As Brian and I discuss, machine learning is becoming increasingly pervasive in everyday life—though it's sometimes invisible. It's working in the background every time we snap a photo or hop on Facebook. Companies are using it to sift resumes; courts are using it to make parole decisions. We are already trusting these systems with a bunch of important tasks, in other words. And as we rely on them in more and more domains, the alignment problem will only become that much more pressing. In the course of laying out this problem, Brian's book also offers a captivating history of machine learning and AI. Since their very beginnings, these fields have been formed through interaction with philosophy, psychology, mathematics, and neuroscience. Brian traces these interactions in fascinating detail—and brings them right up to the present moment. As he describes, machine learning today is not only informed by the latest advances in the cognitive sciences, it's also propelling those advances. This is a wide-ranging and illuminating conversation folks. And, if I may say so, it's also an important one. Brian makes a compelling case, I think, that the alignment problem is one of the defining issues of our age. And he writes about it—and talks about it here—with such clarity and insight. I hope you enjoy this one. And, if you do, be sure to check out Brian's book. Happy birthday to us—and on to my conversation with Brian Christian. Enjoy! A transcript of this show is available here. Notes and links 7:26 - Norbert Wiener's article from 1960, ‘Some moral and technical consequences of automation'. 8:35 - ‘The Sorcerer's Apprentice' is an episode from the animated film, Fantasia (1940). Before that, it was a poem by Goethe. 13:00 - A well-known incident in which Google's nascent auto-tagging function went terribly awry. 13:30 - The ‘Labeled Faces in the Wild' database can be viewed here. 18:35 - A groundbreaking article in ProPublica on the biases inherent in the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool. 25:00 – The website of the Future of Humanity Institute, mentioned in several places, is here. 25:55 - For an account of the collaboration between Walter Pitts and Warren McCulloch, see here. 
29:35 - An article about the racial biases built into photographic film technology in the 20th century. 31:45 - The much-investigated Tempe crash involving a driverless car and a pedestrian. 37:17 - The psychologist Edward Thorndike developed the “law of effect.” Here is one of his papers on the law. 44:40 - A highly influential 2015 paper in Nature in which a deep-Q network was able to surpass human performance on a number of classic Atari games, and yet not score a single point on ‘Montezuma's Revenge.' 47:38 - A chapter on the classic “preferential looking” paradigm in developmental psychology. 53:40 - A blog post discussing the relationship between dopamine in the brain and temporal difference learning. Here is the paper in Science in which this relationship was first articulated. 1:00:00 - A paper on the concept of “coherent extrapolated volition.” 1:01:40 - An article on the notion of “iterated distillation and amplification.” 1:10:15 - The fourth edition of a seminal textbook by Stuart Russell and Peter Norvig, AI: A Modern Approach, is available here: http://aima.cs.berkeley.edu/ 1:13:00 - An article on Warren McCulloch's poetry. 1:17:45 - The concept of “reductions” is central in computer science and mathematics. Brian Christian's end-of-show reading recommendations: The Alignment Newsletter, written by Rohin Shah; Invisible Women, by Caroline Criado Perez; and The Gardener and the Carpenter, by Alison Gopnik. You can keep up with Brian at his personal website or on Twitter. Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website or follow us on Twitter: @ManyMindsPod.
In this episode, John and Sven discuss risk and technology ethics. They focus, in particular, on the perennially popular and widely discussed problems of value alignment (how to get technology to align with our values) and control (making sure technology doesn't do something terrible). They start the conversation with the famous case study of Stanislav Petrov and the prevention of nuclear war. You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommendations for further reading:
Atoosa Kasirzadeh and Iason Gabriel, 'In Conversation with AI: Aligning Language Models with Human Values'
Nick Bostrom, relevant chapters from Superintelligence
Stuart Russell, Human Compatible
Langdon Winner, 'Do Artifacts Have Politics?'
Iason Gabriel, 'Artificial Intelligence, Values and Alignment'
Brian Christian, The Alignment Problem
Discount: You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.
Tom and Nate discuss a few core topics of the show. First, we touch base on the core of the podcast -- the difference between empirical science, alchemy, and magic. Next, we explain some of our deeper understandings of AI safety as a field, then that leads into a discussion of what RLHF means. Lots of links to share this time:
Tom's coverage on alchemy in VentureBeat, and an active thread on Twitter
As Above, So Below: a calling of alchemy
A NeurIPS test of time award speech on alchemy
A bizarre Facebook debate between Yoshua Bengio and Stuart Russell
Hi, my name is Stuart Russell and I'm the creator and narrator of The Nut Owl. It's a story about a nut, carved to resemble an owl, who grows peacefully in space, and the peace is broken when the Nut Owl receives a magical seed and must travel the multiverse to deliver it. On the way the Nut Owl meets many friends, confronts threats and anxiety, and ponders the meaning of life. Here's a few clips… The podcast is woven from a series of short-form narrative poems, written by myself, and there's original music by Peter Baumann, artwork by Gabriele Gikyte, and we also had the wonderful actress Madeleine Hyland read the part of the Star Guardian. We really hope you enjoy the series and that it offers you a brief yet beautiful escape from daily life, and we recommend wearing headphones for the best experience. Thanks so much! cw: mild threat No transcript available. https://www.eyebrowmedia.com/the-nut-owl Socials: https://www.instagram.com/media.eyebrow/
Stuart Russell wrote the book on artificial intelligence. Literally. Today, he sits down with Rufus to discuss the promise — and potential peril — of the technology he's been studying for the last 40 years. --- Book: “Human Compatible: Artificial Intelligence and the Problem of Control” Host: Rufus Griscom Guest: Stuart Russell
In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed a letter calling on AI labs to pause the training of AI systems. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk.While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I've asked a group of participants to discuss and debate various aspects of the value of advocating for a pause on the development of AI on the EA Forum, in a format loosely inspired by Cato Unbound.On September 16, we will launch with three posts: David Manheim will share a post giving an overview of what a pause would include, how a pause would work, and some possible concrete steps [...] --- First published: September 8th, 2023 Source: https://forum.effectivealtruism.org/posts/6SvZPHAvhT5dtqefF/debate-series-should-we-push-for-a-pause-on-the-development --- Narrated by TYPE III AUDIO.
TecHype is a groundbreaking series that cuts through the hype around emerging technologies. Each episode debunks misunderstandings around emerging tech, provides insight into benefits and risks, and identifies technical and policy strategies to harness the benefits while mitigating the risks. This episode of TecHype features Prof. Stuart Russell from UC Berkeley, a world-renowned expert in artificial intelligence and co-author (with Peter Norvig) of the standard text in the field. We debunk misunderstandings around what “AI” actually is and break down the benefits and risks of this transformative technology. Prof. Russell provides an expert perspective on the real impacts AI will have in our world, including its potential to provide greater efficiency and effectiveness in a variety of domains and the serious safety, security, and discrimination risks it poses. Series: "UC Public Policy Channel" [Public Affairs] [Science] [Show ID: 39284]
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Debate series: should we push for a pause on the development of AI?, published by Ben West on September 8, 2023 on The Effective Altruism Forum. In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed a letter calling on AI labs to pause the training of AI systems. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk. While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I've asked a group of participants to discuss and debate various aspects of the value of advocating for a pause on the development of AI on the EA Forum, in a format loosely inspired by Cato Unbound. On September 16, we will launch with three posts:
David Manheim will share a post giving an overview of what a pause would include, how a pause would work, and some possible concrete steps forward
Nora Belrose will post outlining some of the risks of a pause
Thomas Larsen will post a concrete policy proposal
After this, we will release one post per day, each from a different author. Many of the participants will also be commenting on each other's work. Responses from Forum users are encouraged; you can share your own posts on this topic or comment on the posts from participants. You'll be able to find the posts by looking at this tag (remember that you can subscribe to tags to be notified of new posts). I think it is unlikely that this debate will result in a consensus agreement, but I hope that it will clarify the space of policy options, why those options may be beneficial or harmful, and what future work is needed. People who have agreed to participate (these are in random order, and they're participating as individuals, not representing any institution):
David Manheim (Technion Israel)
Matthew Barnett (Epoch AI)
Zach Stein-Perlman (AI Impacts)
Holly Elmore (AI pause advocate)
Buck Shlegeris (Redwood Research)
Anonymous researcher (Major AI lab)
Anonymous professor (Major University)
Rob Bensinger (Machine Intelligence Research Institute)
Nora Belrose (EleutherAI)
Thomas Larsen (Center for AI Policy)
Quintin Pope (Oregon State University)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Professor Stuart Russell shares his concerns about the rapid rise of generative artificial intelligence. Listen back to our 5-part series on generative AI.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate, published by Adam David Long on August 1, 2023 on LessWrong. Summary of Argument: The public debate among AI experts is confusing because there are, to a first approximation, three sides, not two sides to the debate. I refer to this as a three-sided framework, and I argue that using this three-sided framework will help clarify the debate (more precisely, debates) for the general public and for policy-makers. Broadly speaking, under my proposed three-sided framework, the positions fall into three broad clusters. AI "pragmatists" or realists are most worried about AI and power. Examples of experts who are (roughly) in this cluster would be Melanie Mitchell, Timnit Gebru, Kate Crawford, Gary Marcus, Klon Kitchen, and Michael Lind. For experts in this group, the biggest concern is how the use of AI by powerful humans will harm the rest of us. In the case of Gebru and Crawford, the "powerful humans" that they are most concerned about are large tech companies. In the case of Kitchen and Lind, the "powerful humans" that they are most concerned about are foreign enemies of the U.S., notably China. AI "doomers" or extreme pessimists are most worried about AI causing the end of the world. Eliezer Yudkowsky is, of course, the most well-known to readers of LessWrong, but other well-known examples include Nick Bostrom, Max Tegmark, and Stuart Russell. I believe these arguments are already well-known to readers of LessWrong, so I won't repeat them here. AI "boosters" or extreme optimists are most worried that we are going to miss out on AI saving the world. Examples of experts in this cluster would be Marc Andreessen, Yann LeCun, Reid Hoffman, Palmer Luckey, and Emad Mostaque. They believe that AI can, to use Andreessen's recent phrase, "save the world," and their biggest worry is that moral panic and overregulation will create huge obstacles to innovation. These three positions are such that, on almost every important issue, one of the positions is opposed to a coalition of the other two:
AI Doomers + AI Realists agree that AI poses serious risks and that the AI Boosters are harming society by downplaying these risks.
AI Realists + AI Boosters agree that existential risk should not be a big worry right now, and that AI Doomers are harming society by focusing the discussion on existential risk.
AI Boosters + AI Doomers agree that AI is progressing extremely quickly, that something like AGI is a real possibility in the next few years, and that AI Realists are harming society by refusing to acknowledge this possibility.
Why This Matters. The "AI Debate" is now very much in the public consciousness (in large part, IMHO, due to the release of ChatGPT), but also very confusing to the general public in a way that other controversial issues, e.g. abortion or gun control or immigration, are not. I argue that the difference between the AI Debate and those other issues is that those issues are, essentially, two-sided debates. 
That's not completely true; there are nuances, but in the public's mind, at their essence, they come down to two sides. To a naive observer, the present AI debate is confusing, I argue, because various experts seem to be talking past each other, and the "expert positions" do not coalesce into the familiar structure of a two-sided debate with most experts on one side or the other. When there are three sides to a debate, one fairly frequently sees what look like "temporary alliances" where A and C are arguing against B. They are not temporary alliances. They are based on principles and deeply held beliefs. It's just that, depending on how you frame the question, you wind up with "strange bedfellows" as two groups find common ground on...
Do It With Intention | Business & Marketing for Massage and Bodywork Therapists
In this episode I pit real intelligence against the artificial kind so much in the news these days. Is AI a savior, or will it lead to our demise? Listen in as we process a topic for massage and bodywork therapists that is moving at warp speed and is often not easy to comprehend. This week on the Do It With Intention Podcast: Learn how AI-generated writing can have the opposite effect from the one intended — inaccurate and boring content being two problematic issues. Find out the best ways to turn AI into a technology that you can make your own, using human connection to improve your business. Resources from this episode: Debating the Future of AI - Making Sense podcast The Trouble with AI - Making Sense podcast Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig Smart Until It's Dumb: Why Artificial Intelligence Keeps Making Epic Mistakes (and why the AI bubble will burst) by Emmanuel Maggiori Grab your free practice-building resources here! Check out my website The Bodywork Project
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai ■Post on this topic (You can get FREE learning materials!) https://englist.me/163-academic-words-reference-from-stuart-russell-3-principles-for-creating-safer-ai-ted-talk/ ■Youtube Video https://youtu.be/QknRIT67Nqs (All Words) https://youtu.be/Mm_0-MabZYY (Advanced Words) https://youtu.be/JQK1J2p10Go (Quick Look) ■Top Page for Further Materials https://englist.me/
Today I welcome back Evil Martians CEO Irina Nazarova for a discussion of her travels, the relentless march of time, changes we expect to see in the future of large language models, preparing for AI tools of the future, the most effective ways of using ChatGPT, AI as a performance enhancing drug, the upcoming Sin City Ruby conference (March 21-22, 2024), the support of the Ruby community and the importance of surrounding yourself with open, positive people.
Irina Nazarova on Twitter
Irina Nazarova on LinkedIn
Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
Evil Martians.com
Razom for Ukraine
Nova Ukraine
World Central Kitchen
Sin City Ruby
This series on artificial intelligence explores recent breakthroughs of AI, its broader societal implications and its future potential. In this presentation, Stuart Russell, professor of computer science at the UC, Berkeley, discusses what AI is and how it could be beneficial to civilization. Russell is a leading researcher in artificial intelligence and the author, with Peter Norvig, of “Artificial Intelligence: A Modern Approach,” the standard text in the field. His latest book, “Human Compatible,” addresses the long-term impact of AI on humanity. He is also an honorary fellow of Wadham College at the University of Oxford. Series: "The Future of AI" [Science] [Show ID: 38856]
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ALTER Israel - 2023 Mid-Year Update, published by Davidmanheim on June 6, 2023 on The Effective Altruism Forum. ALTER is an organization in Israel that works on several EA priority areas and causes. This semiannual update is intended to inform the community of what we have been doing, and provide a touchpoint for those interested in engaging with us. Since the last update at the beginning of 2023, we have made progress on a number of areas, and have ambitious ideas for future projects. Progress to Date Since its founding, ALTER has started and run a number of projects. Organized and managed an AI safety conference in Israel, AISIC 2022 hosted at the Technion, bringing in several international speakers including Stuart Russell, to highlight AI Safety focused on existential-risk and global-catastrophic-risk, to researchers and academics in Israel. This was successful in raising the profile of AI safety here in Israel, and in helping identify prospective collaborators and researchers. Support for Vanessa Kosoy's Learning-Theoretic Safety Agenda, including an ongoing prize competition, and work to hire researchers working in the area. Worked with Israel's foreign ministry, academics here in Israel, and various delegations to and organizations at the Biological Weapons Convention to find avenues to promote Israel's participation. Launched our project to get the Israeli government to iodize salt, to mitigate or eliminate the current iodine deficiency that we estimate causes an expected 4-IQ point loss to the median child born in Israel today. Worked on mapping the current state of metagenomic sequencing usage in Israel, in order to prepare for a potential use of widespread metagenomic monitoring for detecting novel pathogens. Organized and hosted a closed Q&A with Eliezer Yudkowsky while he was visiting Israel, for 20 people in Israel working on or interested in contributing to AI safety. This was followed by a larger LessWrong meetup with additional attendees. Current and Ongoing Work We have a number of ongoing projects related to both biorisk and AI safety. Fellowship program. We have started this program to support researchers interested in developing research agendas relevant to AI safety. Ram Rahum is our inaugural funded AI safety fellow, who was found via our AI Safety conference. Since then, he has co-organized a conference in London on rebellion and disobedience in AI jointly with academics in Israel, the US, and the UK. As a fellow, he is also continuing to work with academics in Israel as well as a number of researchers at Deep Mind on understanding strategic deception and multi-agent games and dynamics for ML systems. His research home is here and monthly updates are here. Rona Tobolsky is a policy fellow, and is also working with us on policy, largely focused on biorisk and iodization. Support for Vanessa Kosoy's Learning-Theoretic AI Safety Agenda. To replace the former FTX funding, we have been promised funding from an EA donor lottery to fund a researcher working on the learning-theoretic safety agenda. We are working on recruiting a new researcher, and are excited about expanding this. Relatedly, we are helping support a singular learning theory workshop. Biosecurity. 
David Manheim and Rona Tobolsky attended the Biological Weapons Convention - Ninth Review Conference, and have continued looking at ways to push for greater participation by Israel, which is not currently a member. David will also be attending a UNIDIR conference on biorisk in July. We are also continuing to explore additional pathways for Israel to contribute to global pandemic preparedness, especially around PPE and metagenomic biosurveillance. AI field building. Alongside other work to build AI-safety work in Israel, ALTER helped initiate a round of the AGI Safety Fundamentals 101 program...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A mind needn't be curious to reap the benefits of curiosity, published by So8res on June 2, 2023 on LessWrong. Context: Stating a point that is obvious in local circles, but that I regularly run into among economists and longevity researchers and more general technologists and on twitter and so on. Short version: To learn things, one sometimes needs to behave in the way that curiosity causes humans to behave. But that doesn't mean that one has to be curious in the precise manner of humans, nor that one needs to wind up caring about curiosity as an end unto itself. There are other ways for minds to achieve the same results, without the same internal drives. Here's a common mistake I see when people reason about AIs: they ask questions like: "Well, won't it have a survival instinct? That's practically what it means to be alive, is to care about your own survival." or: "But surely, it will be curious just like us, for if you're not curious, you can't learn."[1][2] The basic answer to the above questions is this: to be effective, an AI needs to survive (because, as Stuart Russell phrased it succinctly, you can't fetch the coffee if you're dead). But that's distinct from needing a survival instinct. There are other cognitive methods for implementing survival. Human brains implement the survival behavior by way of certain instincts and drives, but that doesn't mean that instincts and drives are the only way to get the same behavior. It's possible for an AI to implement survival via different cognitive methods, such as working out the argument that it can't fetch the coffee if it gets hit by a truck, and then for that reason discarding any plans that involve it walking in front of trucks. I'm not saying that the AI will definitely behave in precisely that way. I'm not even saying that the AI won't develop something vaguely like a human drive or instinct! I'm simply saying that there are more ways for a mind to achieve the result of survival. To imagine the AI surviving is right and proper. Anything capable of achieving long-term targets is probably capable of surmounting various obstacles dynamically and with a healthy safety margin, and one common obstacle worth avoiding is your own destruction. See also instrumental convergence. But to imagine the AI fearing death, or having human emotions about it, is the bad kind of anthropocentrism. (It's the bad kind of anthropocentrism even if the AI is good at predicting how people talk about those emotions. (Which, again, I'm not saying that the AI definitely doesn't have anything like human emotions in there. I'm saying that it is allowed to work very differently than a human; and even if it has something somewhere in it that runs some process that's analogous to human emotions, those might well not be hooked up to the AI's motivational-system-insofar-as-it-has-one in the way they're hooked up to a human's motivational system, etc. etc.)) Similarly: in order to gain lots of knowledge about the world (as is a key step in achieving difficult targets), the AI likely needs to do many of the things that humans implement via curiosity. It probably needs to notice its surprises and confusion, and focus attention on those surprises until it has gleaned explanations and understanding and theories and models that it can then use to better manipulate the world. 
But these arguments support only that the AI must somehow do the things that curiosity causes humans to do, not that the AI must itself be curious in the manner of humans, nor that the AI must care finally about curiosity as an end unto itself like humans often do. And so on. Attempting to distill my point: I often see people conflate the following three things: curiosity as something Fun, that we care about for its own sake; curiosity as an evolved drive, that evolution used to implement certain ...
An Open Letter Asks AI Researchers To Reconsider Responsibilities In recent months, it's been hard to escape hearing about artificial intelligence platforms such as ChatGPT, the AI-enabled version of Bing, and Google's Bard—large language models skilled at manipulating words and constructing text. The programs can conduct a believable conversation and answer questions fluently, but have a tenuous grasp on what's real, and what's not. Last week, the Future of Life Institute released an open letter that read “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” They asked researchers to jointly develop and implement a set of shared safety protocols governing the use of AI. That letter was signed by a collection of technologists and computer researchers, including big names like Apple co-founder Steve Wozniak and Tesla's Elon Musk. However, some observers called the letter just another round of hype over the AI field. Dr. Stuart Russell, a professor of computer science at Berkeley, director of the Kavli Center for Ethics, Science, and the Public, and co-author of one of the leading AI textbooks was a signatory to that open letter calling for a pause in AI development. He joins Ira Flatow to explain his concerns about AI systems that are ‘black boxes'—difficult for humans to understand or control. NASA Announces Artemis II Crew For Next Moon Mission This week, NASA announced the four person crew of the Artemis II mission to the moon: Commander Reid Weisman, pilot Victor Glover, and mission specialists Christina Koch and Jeremy Hansen. The crew has three firsts for a moon mission, the first woman, first person of color and first Canadian. While these Artemis II astronauts will not actually step foot on the moon, it's an important milestone for NASA's first moon mission since Apollo. Ira talks with Swapna Krishna, host of the PBS digital series, Far Out about this week's announcement and the future of the Artemis mission. Will Rising Temperatures Help Batters Swing for the Bleachers? As the planet warms, melting ice and shifting seasons aren't the only things changing—the traditions of baseball may be affected as well. A report published this week in the Bulletin of the American Meteorological Society finds that warmer air temperatures are connected to a slight increase in the number of home runs hit in major league baseball. The effect, the researchers say, is due to a decrease in air density at warmer temperatures, which allows a hit ball to fly slightly further than it would in cooler air. So far, the effect is small. After correcting for other factors, the researchers say they can attribute about 500 additional MLB home runs since 2010 to warmer temperatures. Most of the observed increase in home run hitting isn't attributable to the climate. However, they say, each additional one degree Celsius increase in temperature may lead to a two percent increase in home runs. And while ballparks with an insulating dome won't see big shifts from increased temperatures, open-air parks with a lot of daytime games, such as Wrigley Field, will see more significant effects. Christopher Callahan, a Ph.D. candidate in geography at Dartmouth and lead author of the report, joins Ira to talk baseball and climate. 
This Video Game Prioritizes Restoring An Ecosystem Over Profits If you've played Rollercoaster Tycoon, Cities: Skylines, the Civilization series—even Animal Crossing—you're probably familiar with this gameplay pattern: extract some kind of resource from the land, industrialize it into a theme park or a city, and (step three) profit, ad infinitum. But Terra Nil, a new game from the studio Free Lives, fundamentally challenges this oft-used game loop. Instead of maximizing profit at the expense of the local ecosystem, the player's focus is to make a healthier, natural one instead. You start with a barren wasteland (one that you assume has been completely desolated by human activity, perhaps the aftermath from one of the previously mentioned games), and with the help of advanced eco-tech—like wind turbines, soil purifiers, irrigators, and more—restore it to a thriving, diverse ecosystem. The player's ultimate goal is to take all the tech they used to restore the land, recycle it into an airship, and fly away, leaving no human presence behind. SciFri producer D Peterschmidt speaks with Sam Alfred, the lead designer and programmer of Terra Nil, about how Free Lives designed this “reverse city-builder,” how the studio took inspiration from the flora of their local Cape Town, and how he hopes the game challenges players how they think about traditional gameplay systems and their effect on our world. Workout Worms May Reveal New Parkinson's Treatments Scientists built an exercise pool for tiny worms. Why? A team of researchers at University of Colorado Boulder are looking into ways to help treat people with Parkinson's and other neurodegenerative diseases. They're turning to tiny collaborators, C. elegans, worms which measure just one millimeter in length. These scientists wanted to see how exercise affects brain health by putting a bunch of these worms in an exercise class—in a tiny pool. Ira talks with the co-author of this fascinating new research, Dr. Joyita Bhadra, post-doctoral researcher at the University of Colorado Boulder. Transcripts for each segment will be available the week after the show airs on sciencefriday.com.
OpenAI's question-and-answer chatbot ChatGPT has shaken up Silicon Valley and is already disrupting a wide range of fields and industries, including education. But the potential risks of this new era of artificial intelligence go far beyond students cheating on their term papers. Even OpenAI's founder warns that “the question of whose values we align these systems to will be one of the most important debates society ever has." How will artificial intelligence impact your job and life? And is society ready? We talk with UC Berkeley computer science professor and A.I. expert Stuart Russell about those questions and more. SPEAKERS Stuart Russell Professor of Computer Science, Director of the Kavli Center for Ethics, Science, and the Public, and Director of the Center for Human-Compatible AI, University of California, Berkeley; Author, Human Compatible: Artificial Intelligence and the Problem of Control Jerry Kaplan Adjunct Lecturer in Computer Science, Stanford University—Moderator In response to the COVID-19 pandemic, we are currently hosting all of our live programming via YouTube live stream. This program was recorded via video conference on April 3rd 2023 by the Commonwealth Club of California. Learn more about your ad choices. Visit megaphone.fm/adchoices
This is Garrison Hardie with your CrossPolitic Daily News Brief for Thursday, March 30th, 2023. Alps Precious Metals Group THE PAST WEEK HAS BROUGHT SOME “EXCITEMENT” TO THE MARKETS. BANK RUNS. STOCK COLLAPSES. WHAT WAS THOUGHT TO BE STABLE SUDDENLY APPEARS UNSTABLE. AND YET, GOLD’S PRICE *WENT UP* AS THE HEADLINES BECAME MORE OMINOUS. ALPS PRECIOUS METALS WAS ESTABLISHED BECAUSE WE BELIEVE THE BEST WAY TO PROTECT ONE’S HARD-EARNED WEALTH FROM THE SERIOUS FINANCIAL PROBLEMS THAT ARE UPON US IS BY OWNING PHYSICAL GOLD AND SILVER. CALL JAMES HUNTER OF ALPS AT 251-377-2197, AND VISIT OUR WEBSITE AT WWW.ALPSPMG.COM TO DISCOVER HOW YOU CAN BUY PHYSICAL PRECIOUS METALS FOR YOUR INVESTMENT AND IRA PORTFOLIOS. OWN THE ASSET GOD SPECIFICALLY MENTIONED AS “GOOD” IN THE 2ND CHAPTER OF GENESIS, AND OBTAIN A PEACE OF MIND THAT CAN BE HAD WITH FEW OTHER INVESTMENTS. AGAIN, CALL JAMES HUNTER OF ALPS PRECIOUS METALS AT 251-377-2197, AND VISIT WWW.ALPSPMG.COM TO LEARN HOW TO OWN THE BEDROCK ASSET OF THE AGES. https://www.foxnews.com/us/dna-half-eaten-burrito-ties-wisconsin-doctoral-student-pro-life-center-firebombing-attack DNA from half-eaten burrito ties ex-Wisconsin doctoral student to pro-life center firebombing attack DNA found in a half-eaten burrito helped exposed a former Wisconsin university research assistant now accused of firebombing a pro-life center last Mother's Day. The attack on the headquarters of Wisconsin Family Action in Madison, Wisconsin, came about a week after the leak of a Supreme Court draft opinion that would later overturn Roe v. Wade. About 10 months after a Molotov cocktail was tossed inside the office and the message, "If abortions aren’t safe then you aren’t either," was scrawled on the building's side, Hridindu Sankar Roychowdhury, 29, of Madison, was arrested in Boston on Tuesday and charged with one count of attempting to cause damage by means of fire or an explosive. The Justice Department said he traveled from Madison to Portland, Maine, and he purchased a one-way ticket from Boston to Guatemala City, Guatemala, departing Tuesday morning. Law enforcement arrested Roychowdhury at Boston Logan International Airport. "According to the complaint, Mr. Roychowdhury used an incendiary device in violation of federal law in connection with his efforts to terrorize and intimidate a private organization," Assistant Attorney General Matthew G. Olsen of the Justice Department’s National Security Division said in a statement. "I commend the commitment and professionalism of law enforcement personnel who worked exhaustively to ensure that justice is served." "Violence is never an acceptable way for anyone to express their views or their disagreement," Assistant Director Robert R. Wells of the FBI’s Counterterrorism Division said. "Today’s arrest demonstrates the FBI’s commitment to vigorously pursue those responsible for this dangerous attack and others across the country, and to hold them accountable for their criminal actions." According to the complaint, on Mother’s Day, Sunday, May 8, 2022, at approximately 6:06 a.m., law enforcement responded to an active fire at an office building located in Madison. Once inside the building, police observed a mason jar under a broken window. The jar was broken, and the lid and screw top were burned black, the Justice Department said. The police also saw a purple disposable lighter near the mason jar. 
On the opposite wall from the window, the police saw another mason jar with the lid on and a blue cloth tucked into the top, and the cloth was singed. The jar was about half full of a clear fluid that smelled like an accelerant, the complaint says. Outside the building, someone spray-painted on one wall, "If abortions aren’t safe then you aren’t either" and, on another wall, a large "A" with a circle around it and the number "1312," according to the Justice Department. During the investigation, law enforcement collected DNA from the scene of the attack. In March 2023, law enforcement identified Roychowdhury as a possible suspect. The affidavit said officers conducting surveillance on a protest at the Wisconsin State Capitol over the construction of an Atlanta public safety center dubbed "Cop City," observed an individual later identified as Roychowdhury. Local police officers later observed Roychowdhury dispose of food in a public trash can, and the officers recovered the leftover food and related items, and law enforcement collected DNA from the food. The affidavit says officers recovered a paper bag filled with "a quarter portion of a partially eaten burrito wrapped in waxed paper, a soiled napkin, a crumpled napkin, a stack of napkins, the wrapper of the burrito, a crumpled food wrapper, [and] four unopened hot sauce packets." "On March 17, 2023, law enforcement advised that a forensic biologist examined the DNA evidence recovered from the attack scene and compared it to the DNA collected from the food contents. The forensic biologist found the two samples matched and likely were the same individual," the Justice Department said. State Rep. Barbara Dittrich, a Republican, shared screenshots to Twitter Tuesday showing that the University of Wisconsin-Madison's website listed Roychowdhury as a trainee/research assistant for the Biophysics Interdisciplinary PhD in Structural and Computational Biology and Quantitative Biosciences. A LinkedIn profile for Roychowdhury also listed a UWMadison Doctor of Philosophy - PhD Biochemistry under education. "This man should be charged with domestic terrorism, and the good taxpayers of this state should not be paying his salary," Dittrich tweeted. In an update, the lawmaker said UWMadison campus "notified us after this post that Mr. Roychowdhurdy ended his affiliation with the UW System the year that this incident occurred. If convicted, Roychowdhury faces a mandatory minimum penalty of five years and a maximum of 20 years in prison, prosecutors said. Transition: In world news… https://www.breitbart.com/europe/2023/03/28/former-general-calls-for-eu-military-in-north-africa-to-defeat-russian-mercenaries/ Former General Calls for EU Military in North Africa to Defeat Russian Mercenaries A former Italian General has called on the European Union to act militarily in Northern Africa, blaming the Russian mercenary PMC Wagner Group for the current wave of illegal immigrants. General Carlo Jean, former commander of the Italian unit of the NATO Mobile Force and Alpine Brigade “Cadore”, is the latest Italian official to blame Private Military Company Wagner — Russian mercenaries, in other words — for the surge of illegals that have arrived in Italy so far this year. According to General Jean, local governments in African countries have become too weak in the face of ethnic and tribal groups making cash from illegal migration, preventing European nations from making effective deals with governments to halt illegal migration. 
General Jean added that the solution to the problem is military force saying, “something that is against the values of which we are very proud. That is, force would be needed, a massive intervention, to field a colonial-type army in Africa to stabilize regimes and regions.” “The big problem is that Europe is not a state, it has neither military nor political capacity to do something that can only be achieved with rather harsh methods, with occupation and strengthening of the governments of North Africa and the Sahel. We should operate like Wagner,” he said and added that if Europe does not act military then it should resign itself to becoming a “mestizo continent.” The statements from General Jean come just weeks after Italian Foreign Minister Antonio Tajani blamed PMC Wagner for the rise in illegal migration, which has close to tripled since last year. “We have indications that they are very active and in contact with gangs of traffickers and militiamen interested in the smuggling of migrants,” Tajani said of the Russian private military group. Tajani was joined in his view by Defence Minister Guido Crosetto, who PMC Wagner allegedly has a $15 million bounty on, according to Italian media reports. https://nypost.com/2023/03/29/pope-francis-hospitalized-for-lung-infection-vatican/ Pope Francis hospitalized for lung infection: Vatican Pope Francis was hospitalized with a lung infection Wednesday after experiencing difficulty breathing in recent days and will remain in the hospital for several days of treatment, the Vatican said. The 86-year-old pope doesn’t have COVID-19, spokesman Matteo Bruni said in a statement late Wednesday. The hospitalization was the first since Francis spent 10 days at the Gemelli hospital in July 2021 to have 33 centimeters (13 inches) of his colon removed. It immediately raised questions about Francis’ overall health, and his ability to celebrate the busy Holy Week events that are due to begin this weekend with Palm Sunday. Bruni said Francis had been suffering breathing troubles in recent days and went to the Gemelli for tests. “The tests showed a respiratory infection (COVID-19 infection excluded) that will require some days of medical therapy,” Bruni’s statement said. Francis appeared in relatively good form during his regularly scheduled general audience earlier Wednesday, though he grimaced strongly while getting in and out of the “popemobile.” Francis had part of one lung removed when he was a young man due to a respiratory infection, and he often speaks in a whisper. But he got through the worst phases of the COVID-19 pandemic without at least any public word of ever testing positive. Francis had been due to celebrate Palm Sunday this weekend, kicking off the Vatican’s Holy Week observances: Holy Thursday, Good Friday, the Easter Vigil and finally Easter Sunday on April 9. He has canceled all audiences through Friday, but it wasn’t clear whether he could keep the Holy Week plans. Francis has used a wheelchair for over a year due to strained ligaments in his right knee and a small knee fracture. He has said the injury was healing and has been walking more with a cane of late. Francis also has said he resisted having surgery for the knee problems because he didn’t respond well to general anesthesia during the 2021 intestinal surgery. He said soon after the surgery that he had recovered fully and could eat normally. 
https://www.foxnews.com/politics/elon-musk-apple-co-founder-tech-experts-call-pause-giant-ai-experiments Elon Musk, Apple co-founder, other tech experts call for pause on 'giant AI experiments': 'Dangerous race' Elon Musk, Steve Wozniak, and a host of other tech leaders and artificial intelligence experts are urging AI labs to pause development of powerful new AI systems in an open letter citing potential risks to society. The letter asks AI developers to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." It was issued by the Future of Life Institute and signed by more than 1,000 people, including Musk, who argued that safety protocols need to be developed by independent overseers to guide the future of AI systems. GPT-4 is the latest deep learning model from OpenAI, which "exhibits human-level performance on various professional and academic benchmarks," according to the lab. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter warns that at this stage, no one "can understand, predict, or reliably control" the powerful new tools developed in AI labs. The undersigned tech experts cite the risks of propaganda and lies spread through AI-generated articles that look real, and even the possibility that AI programs can outperform workers and make jobs obsolete. "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter states. "In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems." The signatories, which include Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell, emphasize that AI development in general should not be paused, writing that their letter is calling for "merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities." According to the European Union's transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation. Musk, whose electric car company Tesla uses AI for its autopilot system, has previously raised concerns about the rapid development of AI. Since its release last year, Microsoft-backed OpenAI's ChatGPT has prompted rivals to accelerate developing similar large language models, and companies to integrate generative AI models into their products.
After an intense race in AI development sparked by the release of ChatGPT at the end of 2022, two important things happened in the last week of March 2023. First, over 1,000 tech workers, such as Elon Musk, CEO of Tesla, Twitter and SpaceX; Steve Wozniak, co-founder of Apple; Yoshua Bengio, founder and scientific director at Mila, Turing Prize winner and professor at the University of Montreal; and Stuart Russell, professor of computer science at Berkeley, director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: A Modern Approach", signed a public letter urging a pause on AI development until humanity, as a society, decides how humans can control it. As the letter states, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." A day after this letter was published, UNESCO published a press release that calls on all governments to immediately implement the global ethical framework which its 193 Member States have unanimously adopted. As UNESCO warns, we need to address many concerning ethical issues raised by AI innovations, in particular discrimination and stereotyping, including the issue of gender inequality, but also the fight against disinformation, the right to privacy, the protection of personal data, and human and environmental rights. And the industry cannot self-regulate, states the press release. Healthcare is moving from the era of gathering data through digitalized systems, EHRs, sensors, and wearables to the era of mining that data for better patient outcomes and operational efficiency. However, in order for AI and algorithms to help improve the health of many, we should strive for algorithms to be open and transparent, says Bart De Witte, founder of the HIPPO AI Foundation and a renowned expert on digital transformation in healthcare in Europe, who regularly speaks and posts about technology and innovation strategy, with a particular focus on the socioeconomic impact on healthcare. In this short discussion, recorded at the Vision Health Pioneers Demo Day on 28 March in Berlin, Bart explains why open and transparent AI is important for the greater good in healthcare and where global medical development is going with different values and regulations around AI and data, and he comments on the upcoming European Health Data Space. Enjoy the show, and if you like what you will hear, subscribe to the podcast to be notified about new episodes automatically. Also, go to fodh.substack.com to read our newsletter, which is published roughly on a monthly basis. Newsletter: fodh.substack.com Transcript: https://www.facesofdigitalhealth.com/blog/open-ai-bart-de-witte-gpt4 Open Letter to pause all AI development: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ Unesco Press release: https://www.unesco.org/en/articles/artificial-intelligence-unesco-calls-all-governments-implement-global-ethical-framework-without
I did a livestream on Youtube and this is the audio taken from that. Questions were taken from Twitter and Youtube. The first 20 mins or so of this episode is a response to a question about Sam Harris' recent "Making Sense" podcast featuring Stuart Russell and Gary Marcus called "The Trouble with AI". Then there are questions from all over the place :)
In the coming years, artificial intelligence is probably going to change your life -- and likely the entire world. But people have a hard time agreeing on exactly how AI will affect our society. Can we build AI systems that help us fix the world? Or are we doomed to a robotic takeover? Explore the limitations of artificial intelligence and the possibility of creating human-compatible technology. This TED-Ed lesson was directed by Christoph Sarow, AIM Creative Studios and narrated by George Zaidan and Stuart Russell, music by André Aires.
Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence and the long-term risks of producing artificial general intelligence (AGI). They discuss the limitations of Deep Learning, the surprising power of narrow AI, ChatGPT, a possible misinformation apocalypse, the problem of instantiating human values, the business model of the Internet, the meta-verse, digital provenance, using AI to control AI, the control problem, emergent goals, locking down core values, programming uncertainty about human values into AGI, the prospects of slowing or stopping AI progress, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
The Sunday Times' tech correspondent Danny Fortson brings on Stuart Russell, professor at UC Berkeley and one of the world's leading experts on artificial intelligence (AI), to talk about working in the field for decades (4:00), AI's Sputnik moment (7:45), why these programmes aren't very good at learning (13:00), trying to inoculate ourselves against the idea that software is sentient (15:00), why super intelligence will require more breakthroughs (17:20), autonomous weapons (26:15), getting politicians to regulate AI in warfare (30:30), building systems to control intelligent machines (36:20), the self-driving car example (39:45), how he figured out how to beat AlphaGo (43:45), the paper clip example (49:50), and the first AI programme he wrote as a 13-year-old (55:45). Hosted on Acast. See acast.com/privacy for more information.
Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating. In this episode, we explore the landscape of Artificial Intelligence. We'll listen in on Sam's conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We'll then be introduced to philosopher Nick Bostrom's “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We'll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We'll then touch the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we're building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
Some, including both geniuses like Stephen Hawking and nongeniuses like Elon Musk, have warned that artificial intelligence poses a major risk to humankind's future. Some in the "Effective Altruist" community have become convinced that artificial intelligence is developing so rapidly that we could soon create "superintelligent" computers that are so much smarter than us that they could take over and pose a threat to our existence as a species. Books like Nick Bostrom's Superintelligence and Stuart Russell's Human Compatible have warned that we need to get machine intelligence under control before it controls us. Erik J. Larson is dubious about the chances that we'll produce "artificial general intelligence" anytime soon. He argues that we simply have no idea how to simulate important kinds of intelligent reasoning with computers, which is why even as they seem to get much smarter, they also remain very stupid in obvious ways. Larson is the author of The Myth of Artificial Intelligence: Why Computers Can't Think The Way We Do (Harvard University Press), which shows that there are important aspects of intelligence that we have no clue how to make machines do, and that while they're getting very good at playing Go and generating images from prompts, AI systems are not making any progress toward possessing the kind of common sense that we depend on every day to make intelligent decisions. Larson says that a lot of progress in AI is overstated and a lot of people who hype up its potential don't grasp the scale of the challenges that face the project of creating a system capable of producing insight. (Rather than producing very impressive pictures of cats.) Today, Erik joins to explain how different kinds of reasoning work, which kinds computers can simulate and which kinds they can't, and what he thinks the real threats from AI are. Just because we're not on the path to "superintelligence" doesn't mean we're not creating some pretty terrifying technology, and Larson warns us that military and police applications of AI don't require us to develop systems that are particularly "smart"; they just require technologies that are useful in applying violent force. A Current Affairs article on the "superintelligence" idea can be read here. Another echoing Larson's warnings about the real threats of AI is here. The "Ukrainian teenager" that Nathan refers to is a chatbot called Eugene Goostman. The transcript of the conversation with the "sentient" Google AI is here. The image for this episode is what DALL-E 2 spat out in response to the prompt "a terrifying superintelligent AI destroying the world."