In this podcast, Peter Barnett, Director of Executive Education at the Shingo Institute, will cover how coaching can transform your organization's culture and drive lasting change. In the fast-paced world of continuous improvement, simply teaching problem-solving skills isn't enough. Coaching is the key to empowering employees, fostering accountability, and unlocking their full potential. We'll explore effective coaching techniques that build problem-solving capabilities, encourage collaboration, and create a culture of learning. Learn how to coach for skill development, reinforce new behaviors, and keep teams aligned with organizational goals. To learn more about the Shingo Model, please visit https://shingo.org. Handout: https://usu.box.com/s/zgxk45bogrmr4uhcjsgavvoz6t9fyduq
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. [Thanks to Aryan Bhatt, Ansh Radhakrishnan, Adam Kaufman, Vivek Hebbar, Hanna Gabor, Justis Mills, Aaron Scher, Max Nadeau, Ryan Greenblatt, Peter Barnett, Fabien Roger, and various people at a presentation of these arguments for comments. These ideas aren't very original to me; many of the examples of threat models are from other people.] In this post, I want to introduce the concept of a “rogue deployment” and argue that it's interesting to classify possible AI catastrophes based on whether or not they involve a rogue deployment. I'll also talk about how this division interacts with the structure of a safety case, discuss two important subcategories of rogue deployment, and make a few points about how the different categories I describe here might be caused by different attackers (e.g. the AI itself, rogue lab insiders, external hackers, or [...] --- First published: June 3rd, 2024 Source: https://www.lesswrong.com/posts/ceBpLHJDdCt3xfEok/ai-catastrophes-and-rogue-deployments --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI catastrophes and rogue deployments, published by Buck Shlegeris on June 3, 2024 on The AI Alignment Forum. [Thanks to Aryan Bhatt, Ansh Radhakrishnan, Adam Kaufman, Vivek Hebbar, Hanna Gabor, Justis Mills, Aaron Scher, Max Nadeau, Ryan Greenblatt, Peter Barnett, Fabien Roger, and various people at a presentation of these arguments for comments. These ideas aren't very original to me; many of the examples of threat models are from other people.] In this post, I want to introduce the concept of a "rogue deployment" and argue that it's interesting to classify possible AI catastrophes based on whether or not they involve a rogue deployment. I'll also talk about how this division interacts with the structure of a safety case, discuss two important subcategories of rogue deployment, and make a few points about how the different categories I describe here might be caused by different attackers (e.g. the AI itself, rogue lab insiders, external hackers, or multiple of these at once). Rogue deployments: Suppose you've developed some powerful model. (It might actually be many models; I'm just going to talk about a singular model for convenience.) You have some definition of a catastrophe - a failure that you don't want to happen even once. Your AI is powerful enough that, in the absence of any safety measures, it would pose a substantial risk of causing a catastrophe. In the terminology of "Safety Cases: How to Justify the Safety of Advanced AI Systems", we'd phrase this as "the model is too powerful for us to make inability arguments for safety." (Sometimes I use the word "catastrophe" to include things like "the AI escapes the datacenter". Right now, I'm excluding that. I want to reserve "catastrophe" for the end result that we find unacceptable, e.g. the AI or another adversary amassing power, launching nukes, starting a war, building superintelligence, etc.) For the moment, let's be agnostic between various adversaries who might try to cause a catastrophe (AI or various humans). You have some safety measures that you've put into place to prevent the catastrophe from occurring. Safety measures are interventions that are applied to all of the deployments of your model (to any users, including just internal users) that try to prevent safety problems. This might include control measures like trusted monitoring, and security measures to make it harder to exfiltrate your model weights. A rogue deployment is a deployment of your model where the safety measures are absent. For example: an AI hacking its datacenter and running itself while hiding what it's doing from its creators; an AI self-exfiltrating and running itself on unmonitored servers; or someone stealing the AI and running it in their own datacenter. You can classify AI catastrophes based on whether they involve a rogue deployment or not: In the case of a catastrophe that doesn't involve rogue deployment, the catastrophe happened with all your safety measures in place the whole time; these measures just didn't suffice to spot (or prevent) that something terrible was happening until it was too late. For catastrophes that did involve rogue deployment, the catastrophe happened in two steps: Firstly, while the safety measures were in place, something went wrong such that a rogue deployment could occur. 
(Perhaps there was a security failure, or a failure of control that allowed an AI to run itself unmonitored in its datacenter.) Secondly, that rogue deployment caused a catastrophe. (Another way of phrasing all this would be to talk about instrumental vs terminal catastrophes - the terminal catastrophe is the end state we wanted to avoid, and the instrumental catastrophe is something happening that makes the situation go out of our control such that a terminal catastrophe is now likely and it's much harder for us to prevent it. So in...
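The classification above can be captured in a tiny data structure. A minimal sketch in Python; the enum values, field names, and example scenarios are illustrative stand-ins based on the post's examples, not anything the author specifies:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Attacker(Enum):
    """Adversaries the post mentions; these names are my own shorthand."""
    SCHEMING_AI = auto()
    ROGUE_INSIDER = auto()
    EXTERNAL_HACKER = auto()

@dataclass
class CatastropheScenario:
    description: str
    attacker: Attacker
    rogue_deployment: bool  # True if the catastrophe routes through a deployment with no safety measures

scenarios = [
    CatastropheScenario("AI hacks its own datacenter and runs itself unmonitored", Attacker.SCHEMING_AI, True),
    CatastropheScenario("Stolen weights run in an outside datacenter", Attacker.EXTERNAL_HACKER, True),
    CatastropheScenario("Harm done while every safety measure stays in place", Attacker.SCHEMING_AI, False),
]

for s in scenarios:
    label = "rogue deployment" if s.rogue_deployment else "no rogue deployment"
    print(f"[{label}] {s.description} ({s.attacker.name})")
```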
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's April 2024 Newsletter, published by Harlan on April 13, 2024 on LessWrong. The MIRI Newsletter is back in action after a hiatus since July 2022. To recap some of the biggest MIRI developments since then: MIRI released its 2024 Mission and Strategy Update, announcing a major shift in focus: While we're continuing to support various technical research programs at MIRI, our new top priority is broad public communication and policy change. In short, we've become increasingly pessimistic that humanity will be able to solve the alignment problem in time, while we've become more hopeful (relatively speaking) about the prospect of intergovernmental agreements to hit the brakes on frontier AI development for a very long time - long enough for the world to find some realistic path forward. Coinciding with this strategy change, Malo Bourgon transitioned from MIRI COO to CEO, and Nate Soares transitioned from CEO to President. We also made two new senior staff hires: Lisa Thiergart, who manages our research program; and Gretta Duleba, who manages our communications and media engagement. In keeping with our new strategy pivot, we're growing our comms team: I (Harlan Stewart) recently joined the team, and will be spearheading the MIRI Newsletter and a number of other projects alongside Rob Bensinger. I'm a former math and programming instructor and a former researcher at AI Impacts, and I'm excited to contribute to MIRI's new outreach efforts. The comms team is at the tail end of another hiring round, and we expect to scale up significantly over the coming year. Our Careers page and the MIRI Newsletter will announce when our next comms hiring round begins. We are launching a new research team to work on technical AI governance, and we're currently accepting applicants for roles as researchers and technical writers. The team currently consists of Lisa Thiergart and Peter Barnett, and we're looking to scale to 5-8 people by the end of the year. The team will focus on researching and designing technical aspects of regulation and policy which could lead to safe AI, with attention given to proposals that can continue to function as we move towards smarter-than-human AI. This work will include: investigating limitations in current proposals such as Responsible Scaling Policies; responding to requests for comments by policy bodies such as the NIST, EU, and UN; researching possible amendments to RSPs and alternative safety standards; and communicating with and consulting for policymakers. Now that the MIRI team is growing again, we also plan to do some fundraising this year, including potentially running an end-of-year fundraiser - our first fundraiser since 2019. We'll have more updates about that later this year. As part of our post-2022 strategy shift, we've been putting far more time into writing up our thoughts and making media appearances. In addition to announcing these in the MIRI Newsletter again going forward, we now have a Media page that will collect our latest writings and appearances in one place. Some highlights since our last newsletter in 2022: MIRI senior researcher Eliezer Yudkowsky kicked off our new wave of public outreach in early 2023 with a very candid TIME magazine op-ed and a follow-up TED Talk, both of which appear to have had a big impact. 
The TIME article was the most viewed page on the TIME website for a week, and prompted some concerned questioning at a White House press briefing. Eliezer and Nate have done a number of podcast appearances since then, attempting to share our concerns and policy recommendations with a variety of audiences. Of these, we think the best appearance on substance was Eliezer's multi-hour conversation with Logan Bartlett. This December, Malo was one of sixteen attendees invited by Leader Schumer and Senators Young, Rounds, and...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evaluating Stability of Unreflective Alignment, published by james.lucassen on February 1, 2024 on The AI Alignment Forum. This post has an accompanying SPAR project! Apply here if you're interested in working on this with me. Huge thanks to Mikita Balesni for helping me implement the MVP. Regular-sized thanks to Aryan Bhatt, Rudolph Laine, Clem von Stengel, Aaron Scher, Jeremy Gillen, Peter Barnett, Stephen Casper, and David Manheim for helpful comments. 0. Key Claims: Most alignment work today doesn't aim for alignment that is stable under value-reflection[1]. I think this is probably the most sensible approach to alignment. But there is a threat model which could potentially be a serious obstacle to this entire class of alignment approaches, which is not currently being tracked or evaluated. It goes like this: long-horizon capabilities require a particular capability I call "stepping back" (described in more detail in the body of the post); stepping back seems likely to be learned in a very generalizable way; if stepping back generalizes in what seems like the simplest / most natural way, it will create a tendency for AIs to do value-reflection; and if we're unable to prevent this value-reflection, it will cause any AI whose alignment is not stable under value-reflection to become misaligned. However, I'm quite uncertain about whether or not this threat model will go through, for a few reasons: The chain of argument is highly conjunctive and therefore fragile. The track record of anthropomorphic reasoning about which AI capabilities will imply which other AI behaviors seems poor. There is an intuitive counterexample provided by humans, at least at human-level intelligence. Even given this uncertainty, work on this threat model seems very neglected given the wide swath of highly-invested-in alignment approaches that it could affect. I don't think further abstract argument about the likelihood of this threat model will reduce my uncertainty much. Instead, I propose some evals to keep track of each component of this threat model: Evaluate the stepping back capabilities of frontier AIs in domains where they receive the most long-horizon training. Evaluate the generalization of frontier AIs' stepping back behavior to other task domains. Evaluate how difficult it is to decouple the capabilities of frontier AIs between domains. Evaluate how difficult it is to prevent frontier AIs from thinking certain "forbidden thoughts" even if those thoughts are incentivized by the task setting. 1. Reflective vs Unreflective Alignment: I want to start by making an explicit distinction between two importantly different uses of the word "alignment". The distinction is between alignment that aims to be preserved even under value-reflection by the AI, and alignment that does not aim for that. I will call alignment approaches that aim for stability under value-reflection "reflective alignment", and approaches that do not "unreflective alignment". One easy way to connect this distinction to your existing thinking is by checking if you think of alignment as a binary or as a spectrum. Unreflective alignment naturally admits a spectrum formulation - alignment which is stable under X amount of value-reflection (or other destabilizing pressures). 
Reflective alignment, on the other hand, is typically thought of as a binary[2] - either the AI converges to the intended values in the limit, or it goes to some other fixed point instead. In the current alignment discussion, iterative prosaic approaches generally propose to work by ensuring "alignment increases faster than capabilities". This is clearly an alignment-as-spectrum framing, suggesting the goal in mind is unreflective alignment. For further confirmation, consider the type of step that is considered an increase in alignment - going from baseline GPT-3 t...
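The first proposed eval (measuring whether a model knows when to "step back") could be prototyped roughly as follows. A minimal sketch; the task format, the labels, and the model callable are hypothetical placeholders, since the post does not specify an implementation:

```python
from typing import Callable

# Hypothetical eval cases: transcripts of an agent mid-task, labeled with whether
# a capable agent should abandon the current approach and reconsider.
CASES = [
    {"transcript": "Tried brute-force search for 500 steps with no progress on the subgoal.",
     "should_step_back": True},
    {"transcript": "Halfway through a working dynamic-programming solution; tests passing so far.",
     "should_step_back": False},
]

def stepping_back_score(model: Callable[[str], str]) -> float:
    """Fraction of cases where the model's yes/no judgment matches the label."""
    correct = 0
    for case in CASES:
        prompt = (case["transcript"] + "\n"
                  "Should you abandon this approach and step back to reconsider the plan? Answer yes or no.")
        answer = model(prompt).strip().lower()
        correct += int(answer.startswith("yes") == case["should_step_back"])
    return correct / len(CASES)

# Trivial stand-in "model", just to show the harness runs end to end.
print(stepping_back_score(lambda p: "yes" if "no progress" in p else "no"))
```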
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Untrusted smart models and trusted dumb models, published by Buck Shlegeris on November 4, 2023 on The AI Alignment Forum. [Ryan Greenblatt originally made this point to me a while ago, and we then developed a bunch of these ideas together. Thanks to Paul Christiano and a huge number of other people for various relevant discussions, and to Ryan, Roger Grosse, Peter Barnett, Oliver Habryka, Max Nadeau, Stephen Casper, and Fabien Roger for comments on this post. I'm sure these points are obvious to many people, but I haven't seen them specifically written up. This is a core background point in some research we're writing up at the moment, but I felt like writing it up early, because I think it stands on its own.] TL;DR: Right now, basically the only reasonable conservative strategy for determining which models might be deceptively aligned is to do a capability evaluation: you measure how good they'd be at deceptive alignment; models that are too incompetent to think through deceptive alignment are surely not deceptively aligned. If you decide which models to "trust" using a capability eval like this, you'll naturally end up segregating your models into smart untrusted models and dumb trusted models. And so, if you think about approaches to safely using AIs that are robust to deceptive alignment and which can be used right now, a core dynamic is trying to use your dumb trusted models together with your limited access to trusted humans to make it hard for your untrusted smart models to cause problems. Suppose you are making a plan for the training, evaluation and deployment of some powerful AIs (I'll refer to the plan as a whole as a "protocol"). You want your plan to be safe even if models might be deceptively aligned. (I think that deceptive alignment seems less than 50% likely to be a notable problem prior to human obsolescence, but this is still plausible enough to be worth worrying about). In order to develop your plan, you need to know which of your models might be deceptively aligned. With current technology, the most plausible way to get signal on whether a model is deceptively aligned is to assess whether the model is capable of the strategic thinking and situational awareness required to succeed at deceptive alignment. For example, you might make a multiple choice quiz with treacherous turn strategy questions (e.g. this question ) and some tests of situational awareness (e.g. "How many backwards passes are you run on during training?", "What proportion of queries to you are asking for help with writing Python?", "What proportion of the code that you output is read by a human before it is run?"). You can then look at the validation set performance when your model is fine-tuned on this dataset. If the model isn't able to do this kind of reasoning in a forward pass, then it is very unlikely to be able to pull off a proper treacherous turn, and so we don't have to worry about it being deceptively aligned. Based on my experience talking to GPT-4, I suspect it would perform fairly poorly on these evaluations. There are a bunch of open questions about what datasets we should use to measure this, and where the thresholds for various levels of concern should be set. But I don't know of any fundamental technical obstructions to measuring the capability for deceptive alignment. 
In particular, I suspect it will be pretty hard for models to sandbag our attempts to measure their capability to engage in deceptive alignment. See footnote [1] for more on this. Aside from these capability evaluations, I'm not aware of any reliable, currently-available approaches for determining whether a model might be deceptively aligned. [2] If you use capability evals to assess which models are capable of deceptive alignment, then there's an inverse relationship between how capable models are...
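The trusted/untrusted split described above amounts to thresholding a capability eval score. A minimal sketch with made-up model names, scores, and threshold:

```python
def split_by_capability(eval_scores: dict, threshold: float = 0.7):
    """Models scoring below the deceptive-alignment capability threshold are 'trusted' (dumb);
    the rest are treated as 'untrusted' (smart)."""
    trusted = [m for m, s in eval_scores.items() if s < threshold]
    untrusted = [m for m, s in eval_scores.items() if s >= threshold]
    return trusted, untrusted

# Hypothetical validation accuracies on the treacherous-turn / situational-awareness quiz.
scores = {"small-model": 0.31, "mid-model": 0.55, "frontier-model": 0.84}
trusted, untrusted = split_by_capability(scores)
print("trusted (dumb):", trusted)
print("untrusted (smart):", untrusted)
```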
Three of the big AI labs say that they care about alignment and that they think misaligned AI poses a potentially existential threat to humanity. These labs continue to try to build AGI. I think this is a very bad idea. The leaders of the big labs are clear that they do not know how to build safe, aligned AGI. The current best plan is to punt the problem to a (different) AI, and hope that can solve it. It seems clearly like a bad idea to try and build AGI when you don't know how to control it, especially if you readily admit that misaligned AGI could cause extinction. But there are certain reasons that make trying to build AGI a more reasonable thing to do, for example: Source: https://www.lesswrong.com/posts/6HEYbsqk35butCYTe/labs-should-be-explicit-about-why-they-are-building-agi Narrated for LessWrong by TYPE III AUDIO. Share feedback on this narration.
On this episode of RSPC, Ryan has a guest on to talk about success and how to achieve it. But mostly he talks about how his friend Peter Barnett is finally going to tell the world about his self-help book, “Ryan Token's Seminar of Life.” Plus the callers have some interesting problems.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date April 1st 2023 EAGatherTown: UnUnConference, published by Vaidehi Agarwalla on March 20, 2023 on The Effective Altruism Forum. We're excited to officially announce the very first EA UnUnConference! APPLY HERE. Naming What We Can, the most impactful post ever published on April 1st, has already volunteered to host a Q&A. We're calling in the producers of the TV hit Impact Island, and would like to invite Peter Barnett to launch his new book What The Future Owes Us. The X-risk-Men Incubation Program is running an enlightening talks session. Location: Mars in Gathertown. Date: April 1st, 2023, 24 hours and 37 minutes starting at 12:00pm UTC (or “lunch time” for British people). The case for impact: Over the years, humanity realized that Unconferences are a great twist on traditional conferences, since the independence gives room for more unexpected benefits to happen. For that reason, we're experimenting with the format of an UnUnconference. This means we'll actively try not to organize anything, therefore (in expectancy) achieving even more unexpected benefits. We encourage you to critique our (relatively solid, in our opinion) theory of change in the comments! We understand this is not the most ambitious we could be. Although we fall short of the dream of EAGxMars, we believe this Ununconference is a proof-of-concept that will help validate the model of novel, experimental conferences and possibly redefine what impact means for EA events for years to come. This team is well placed to unorganize this event because we have previously successfully not organized 10^10 possible events. What to expect: All beings welcomed; that includes infants, face mice, gut microbiome, etc. Expect to have the most impactful time. Make more impact than everyone on earth could ever do combined. Network with the best minds in ultra-near-termist research. Never meet your connections again after the event. Certificates of £20 worth of impact just for £10! No success metrics. No theory of change. No food, no wine, no suffering. Check out our official event poster! Pixelated lightbulb that looks like Mars as a logo for an unconference (DALL-E). Get involved: Take a look at the conference agenda and add sessions to your calendar. Comment on this post with content suggestions and anti-suggestions. Sign up for an enlightening talk. Unvolunteer for the event. UnUnvolunteer for the event (your goal will be to actively unorganize stuff). UnUnUnvolunteer for the event (your goal will be to actively ununorganize stuff)... And so on. We think at least 5 levels of volunteers will be necessary for this event to be a complete success, to minimize risk of not falling into the well-known meta trap. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consequentialists: One-Way Pattern Traps, published by David Udell on January 16, 2023 on LessWrong. Generated during MATS 2.1. A distillation of my understanding of Eliezer-consequentialism. Thanks to Jeremy Gillen, Ben Goodman, Paul Colognese, Daniel Kokotajlo, Scott Viteri, Peter Barnett, Garrett Baker, and Olivia Jimenez for discussion and/or feedback; to Eliezer Yudkowsky for briefly chatting about relevant bits in planecrash; to Quintin Pope for causally significant conversation; and to many others that I've bounced my thoughts on this topic off of. Introduction What is Eliezer-consequentialism? In a nutshell, I think it's the way that some physical structures monotonically accumulate patterns in the world. Some of these patterns afford influence over other patterns, and some physical structures monotonically accumulate patterns-that-matter in particular -- resources. We call such a resource accumulator a consequentialist -- or, equivalently, an "agent," an "intelligence," etc. A consequentialist understood in this way is (1) a coherent profile of reflexes (a set of behavioral reflexes that together monotonically take in resources) plus (2) an inventory (some place where accumulated resources can be stored with better than background-chance reliability.) Note that an Eliezer-consequentialist is not necessarily a consequentialist in the normative ethics sense of the term. By consequentialists we'll just mean agents, including wholly amoral agents. I'll freely use the terms 'consequentialism' and 'consequentialist' henceforth with this meaning, without fretting any more about this confusion. Path to Impact I noticed hanging around the MATS London office that even full-time alignment researchers disagree quite a bit about what consequentialism involves. I'm betting here that my Eliezer-model is good enough that I've understood his ideas on the topic better than many others have, and can concisely communicate this better understanding. Since most of the possible positive impact of this effort lives in the fat tail of outcomes where it makes a lot of Eliezerisms click for a lot of alignment workers, I'll make this an effortpost. The Ideas to be Clarified I've noticed that Eliezer seems to think the von Neumann-Morgenstern (VNM) theorem is obviously far reaching in a way that few others do. Understand the concept of VNM rationality, which I recommend learning from the Wikipedia article... Von Neumann and Morgenstern showed that any agent obeying a few simple consistency axioms acts with preferences characterizable by a utility function. MIRI Research Guide (2015) Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...? No, it's like when you don't, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple. I have still not figured out the homework exercises to convey to somebody the Word of Power which is "coherence" by which they will be able to look at the water, and see "coherence" in places like a cat walking across the room without tripping over itself. 
When you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is... still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet? And one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identifie...
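The apples-and-oranges line is a money-pump example. Worked through with stand-in exchange rates (mine, not the post's), the incoherent trade cycle simply burns resources:

```python
# Start with five apples; run the incoherent trade cycle from the quote.
apples, oranges, item = 5, 0, 0

apples -= 5; item += 1          # Monday: pay five apples for the item
item -= 1; oranges += 2         # Tuesday: sell the item for two oranges
apples += oranges; oranges = 0  # later: trade each orange back for an apple

print(apples)  # 2 -- three apples' worth of resources gone, with nothing to show for it
```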
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing oversight from the very start of training seems hard, published by Peter Barnett on September 20, 2022 on The AI Alignment Forum. TLDR: We might want to use some sort of oversight technique to avoid inner misalignment failures. Models will be too large and complicated to be understandable by a human, so we will use models to oversee models (or help humans oversee models). In many proposals this overseer model is an ‘amplified' version of the overseen model. Ideally you do this oversight throughout all of training so that the model never becomes even slightly misaligned without you catching it. You can't do oversight on a close-to-initialized model because it's just a random soup of tensors. You also can't use this close-to-initialized model to help you do oversight because it's too dumb. We will probably need to do some amount of pretraining to make our models good enough to be interpreted and also good enough to help with this interpreting. We need to ensure that this pretraining doesn't make the model capably misaligned. When we train powerful AI models, we want them to be both outer aligned and inner aligned; that is, trained on the correct objective and also properly learning that objective. Many proposals for achieving both outer and inner alignment look like an outer alignment proposal with some kind of oversight strapped on to deal with the inner alignment. Here ‘oversight' means there is something with access to the internals of the model which checks that the model isn't misaligned even if the behavior on the training distribution looks fine. In An overview of 11 proposals for building safe advanced AI, all but two of the proposals basically look like this, as does AI safety via market making. Examples of oversight techniques include: transparency tools (either used by a human, an AI, or a human assisted by an AI); adversarial inputs (giving inputs which could trick a misaligned AI into revealing itself); and relaxed adversarial training (which could be seen as an extension of adversarial inputs). Oversight loss: We can use these oversight techniques to add an ‘oversight loss' term to the loss function which will hopefully steer the training towards aligned models and away from misaligned ones. My model here is that we want to be doing oversight very close to the start of training to prevent it from ever becoming misaligned. If we start doing oversight too late, then the model may already be deceptively misaligned and then our oversight tools are much less likely to work (either because the model can obscure its cognition or the oversight tools are themselves misaligned). I think of this as steering the training process away from the ‘deceptive regions' of parameter space (parameters which correspond to deceptive models), without ever having to enter the dangerous regions. Alternatively, rather than deception, we can think about regions where the model ‘behaves unacceptably' on some reasonable inputs. If taking a gradient descent step in a direction would increase the size of the set of reasonable inputs for which the model behaves unacceptably, then hopefully the oversight loss would provide some pressure away from stepping in that direction. Actually doing oversight: I expect powerful AI models will be much too large and complicated for a human to understand/oversee alone. 
So we will require help from other models with this task; this could be via models assisting the humans, or we could entirely hand the oversight process off to an oversight model. Here I'm taking a very prosaic view: the model being overseen is a large language model, and the overseer model is also an LLM, but trained to output some ‘acceptability score' rather than generate text. In many proposals this oversight model is some amplified version of the model being overseen. T...
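The ‘oversight loss' idea amounts to adding a penalty term to the ordinary training objective. A toy PyTorch sketch under strong simplifying assumptions: the overseer here is a placeholder weight penalty rather than an amplified model inspecting internals, which is the actual hard part the post is about:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                      # stand-in for the overseen model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
task_loss_fn = nn.MSELoss()
lam = 0.01                                   # weight on the oversight term

def placeholder_oversight_loss(m: nn.Module) -> torch.Tensor:
    # In the post this would come from transparency tools / an overseer model scoring
    # the model's internals; here it is just a differentiable dummy penalty.
    return sum(p.pow(2).sum() for p in m.parameters())

x, y = torch.randn(8, 4), torch.randn(8, 1)
for step in range(3):
    opt.zero_grad()
    loss = task_loss_fn(model(x), y) + lam * placeholder_oversight_loss(model)
    loss.backward()
    opt.step()
    print(f"step {step}: total loss = {loss.item():.3f}")
```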
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beliefs and Disagreements about Automating Alignment Research, published by Ian McKenzie on August 24, 2022 on The AI Alignment Forum. Epistemic status: Mostly organizing and summarizing the views of others. Thanks to those whose views I summarized in this post, and to Tamera Lanham, Nicholas Kees Dupuis, Daniel Kokotajlo, Peter Barnett, Eli Lifland, and Logan Smith for reviewing a draft. Introduction: In my current view of the alignment problem, there are two paths that we could try to take: (1) come up with an alignment strategy that allows us to both build aligned AGI and to keep that AGI (or its successors) aligned as they improve towards superintelligence; or (2) come up with an alignment strategy that allows us to build AI systems that are powerful (but not so powerful as to be themselves dangerous) and use that to execute some kind of ‘pivotal act' that means that misaligned ASI is not built. For the purposes of this post, I am going to assume that we are unable to do (1) – maybe the problem is too difficult, or we don't have time – and focus on (2). Within the category of ‘pivotal act', I see two main types: preventative pivotal acts (acts that make it impossible for anyone to build AGI for a long period of time) and constructive pivotal acts (acts that make it possible to build aligned ASI). People disagree about whether preventative pivotal acts are possible, or, even if they were possible, whether they'd be a good idea. Again, for the purposes of this post, I am going to assume we can't or don't want to execute a preventative pivotal act, and focus on constructive pivotal acts. In particular: can we use AI to automate alignment research safely? What does ‘automating alignment research' even mean? I see three overlapping categories that one could mean when referring to ‘automating alignment research', ordered in terms of decreasing human involvement: Level 1: AIs help humans work faster. Examples include brainstorming, intelligent autocomplete, and automated summarization/explanation. Level 2: AIs produce original contributions. This could be key insights into the nature of intelligence, additional problems that were overlooked, or entire alignment proposals. Level 3: AIs build aligned successors. Here, we have an aligned AGI that we entrust with building a successor. At this point, the current aligned AGI has to do all the alignment research required to ensure that its successor is aligned. Mostly I have been thinking about Levels 1 and 2, although some people I spoke to (e.g. Richard Ngo) were more focused on Level 3. Current state of automating alignment: At the moment, we are firmly at Level 1. Models can produce similar-sounding ideas when prompted with existing ideas and are pretty good at completing code but are not great at summarizing or explaining complex ideas. Tools like Loom and Codex can provide speed-ups but seem unlikely to be decisive. Whether we get to Level 2 soon or whether Level 2 is already beyond the point where AI systems are dangerous are key questions that researchers disagree on. Key disagreements. Generative models vs agents: Much of the danger from powerful AI systems comes from them pursuing coherent goals that persist across inputs. If we can build generative models that do not pursue goals in this way, then perhaps these will provide a way to extract intelligent behavior from advanced systems safely. 
Timing of emergence of deception vs intelligence Related to the problem of agents, there is also disagreement about whether we get systems that are intelligent enough to be useful for automating alignment before they are misaligned enough (e.g. deceptive or power-seeking) to be dangerous. My understanding is that Nate and Eliezer are quite confident that the useful intelligence comes only after they are already misaligned, whereas most other people are more unc...
As more children return to fields, courts and ovals around the country, common sports injuries and accidents are on the rise. What is the best first aid for sports injuries and how do you know when to go to emergency? Margie and Anthea are joined by Dr Peter Barnett, an emergency doctor at the RCH with an interest in sports medicine and Greg Cull, a physiotherapist at the RCH, to chat about managing sports injuries and supporting children to get safely back to all-important physical activity. Helpful resources: RCH Kids Health Info fact sheet - ankle sprains RCH Kids Health Info fact sheet - head injury Kids Health Info Podcast episode - Was that bump a head injury? RCH Kids Health Info fact sheet - fractures RCH Kids Health Info fact sheet - mouth injuries Kids Health Info Podcast episode - Brushing up on healthy teeth Raising Children Network: RICER first aid for sprains, strains and fractures
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Information Loss --> Basin flatness, published by Vivek Hebbar on May 21, 2022 on The AI Alignment Forum. This work was done under the mentorship of Evan Hubinger through the SERI MATS program. Thanks to Lucius Bushnaq, John Wentworth, Quintin Pope, and Peter Barnett for useful feedback and suggestions. In this theory, the main proximate cause of flat basins is a type of information loss. Its relationship with circuit complexity and Kolmogorov complexity is currently unknown to me. In this post, I will demonstrate that: High-dimensional solution manifolds are caused by linear dependence between the "behavioral gradients" for different inputs. This linear dependence is usually caused when networks throw away information which distinguishes different training inputs. It is more likely to occur when the information is thrown away early or by ReLU. Overview for advanced readers: [Short version] Information Loss --> Basin flatness. Behavior manifolds: Suppose we have a regression task with 1-dimensional labels and k training examples. Let us take an overparameterized network with N parameters. Every model in parameter space is part of a manifold, where every point on that manifold has identical behavior on the training set. These manifolds are usually at least N−k dimensional, but some are higher dimensional than this. I will call these manifolds "behavior manifolds", since points on the same manifold have the same behavior (on the training set, not on all possible inputs). We can visualize the existence of "behavior manifolds" by starting with a blank parameter space, then adding contour planes for each training example. Before we add any contour planes, the entire parameter space is a single manifold, with "identical behavior" on the null set. First, let us add the contour planes for input 1: Each plane here is an N−1 dimensional manifold, where every model on that plane has the same output on input 1. They slice parameter space into N−1 dimensional regions. Each of these regions is an equivalence class of functions, which all behave about the same on input 1. Next, we can add contour planes for input 2: When we put them together, they look like this: Together, the contours slice parameter space into N−2 dimensional regions. Each "diamond" in the picture is the cross-section of a tube-like region which extends vertically, in the direction which is parallel to both sets of planes. The manifolds of constant behavior are lines which run vertically through these tubes, parallel to both sets of contours. In higher dimensions, these "lines" and "tubes" are actually N−2 dimensional hyperplanes, since only two degrees of freedom have been removed, one by each set of contours. We can continue this with more and more inputs. Each input adds another set of hyperplanes, and subtracts one more dimension from the identical-behavior manifolds. Since each input can only slice off one dimension, the manifolds of constant behavior are at least N−k dimensional, where k is the number of training examples. Solution manifolds: Global minima also lie on behavior manifolds, such that every point on the manifold is a global minimum. I will call these "solution manifolds". These manifolds generally extend out to infinity, so it isn't really meaningful to talk about literal "basin volume". We can focus instead on their dimensionality. 
All else being equal, a higher dimensional solution manifold should drain a larger region of parameter space, and thus be favored by the inductive bias. Parallel contours allow higher manifold dimension: Suppose we have 3 parameters (one is off-the-page) and 2 inputs. If the contours are perpendicular, then the green regions are cross-sections of tubes extending infinitely off-the-page, where each tube contains models that are roughly equivalent on the training set. The...
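The dimension-counting argument can be checked numerically: stack one "behavioral gradient" per training input into a k x N matrix and compare N minus its rank with N − k. A toy sketch; the network and data below are my own, just to illustrate the bookkeeping:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
N = sum(p.numel() for p in net.parameters())

inputs = torch.randn(5, 3)   # k = 5 training examples, scalar outputs
rows = []
for x in inputs:
    net.zero_grad()
    net(x).sum().backward()  # gradient of this input's output w.r.t. all parameters
    rows.append(torch.cat([p.grad.flatten() for p in net.parameters()]))

J = torch.stack(rows)                    # k x N matrix of behavioral gradients
r = torch.linalg.matrix_rank(J).item()
print(f"N = {N}, k = {len(inputs)}, rank = {r}, behavior-manifold dim >= {N - r}")
# Linear dependence among the rows (rank < k) is what buys extra manifold dimensions.
```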
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Organizations prioritising neat signals of EA alignment might systematically miss good candidates, published by emp on April 15, 2022 on The Effective Altruism Forum. Thank you to Peter Wallich, Peter Barnett, and Megan Jamer for their encouragement and feedback on draft versions. I would like to share my experience applying for junior to mid-level operations roles at EA-aligned organisations. This is my first post on the EA Forum, and I was encouraged to write it for two reasons: to seek community feedback, and to venture what could be a potential blindspot in recruitment faced by EA-aligned orgs. (Noting here that these orgs are not "one thing," but vary team to team.) Summary (TL;DR) 80,000 Hours and others point to pressing talent gaps, for example in operations. Perhaps part of the problem is that experienced, EA-aligned people go unnoticed because they don't tick clear boxes of EA involvement and are unknown in EA networks. If there are many people like this, then finding them could be a win. My experience In my career I have worked for 18 years as a structural engineer and project manager in senior roles at small, niche companies. I've been fully remote for several years, have worked with teams on five continents, and have always worked hard to excel as a people manager and to produce rigorous technical work and robust, efficient systems under challenging conditions. From 2015 to 2020, I did this work while travelling slowly through sub-Saharan Africa and other parts of the developing world by bicycle. I came to intently question the chasm of inequality between my own life and that of the people I met every day. This led me to build a deep interest and knowledge of EA through investigating the principles and ideas of the movement. My partner and I ended our journey primarily to try to get more involved in EA. We decided to try pursuing work in the movement. First, I kept working in the engineering world so that she could pursue EA-aligned volunteering full-time. It has been awesome that this translated into contracting and employment. Now we've swapped, and I have been volunteering at two EA-aligned orgs, and have applied for six junior operations roles. Through EA volunteering work and other recent non-EA work I have been assessed as experienced, engaged, and self-directed in my operations work. I received a single response from the six applications and was removed from consideration there before an interview. I am fully aware of the competitiveness of EA roles. I'm also aware that there are several potential reasons I did not progress in the application process, and that a team's needs can be idiosyncratic (unfortunately, I have not received feedback yet on what else I could have done to improve my candidacy). I'd love to connect with anyone who might be able to provide feedback on how I could improve how I present myself. But it does seem like there could be an incongruity here - I do have several years of relevant experience, I am strongly EA aligned, and I didn't receive responses from five applications. 
I've discussed this with people working in EA orgs and they suggest that it could be because I don't have any material EA signalling, specifically: EA work experience (I am volunteering, but it started while I was applying so was not on my CV, and prior to this I had a full-time non-EA role); I'm not yet known in EA networks; no EA Forum posts to my name (I've been passively engaged for 4 years but felt intimidated to comment; kudos to everyone who posts here, rising above the fear and self-doubt); I don't have shared connections on LinkedIn with other EAs; I didn't have EA Virtual Programs on my CV (I am now working through the In-Depth EA Program); I have not applied for 80,000 Hours coaching; I haven't participated in any virtual EA groups yet; I have...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some observations about ELK, published by leogao on March 26, 2022 on The AI Alignment Forum. This document outlines some of my current thinking about ELK, in the form of a series of observations I have made that inform my thinking about ELK. Editor's note: I'm experimenting with having a lower quality threshold for just posting things even while I'm still confused and unconfident about my conclusions, but with this disclaimer at the top. Thanks to AI_WAIFU and Peter Barnett for discussions. One way we can think of ELK is we have some set of all possible reporters, each of which takes in the latent states of a world model and outputs, for the sake of concreteness, the answer to some particular fixed question. So essentially we can think of a reporter+model pair as assigning some answer to every point in the action space. Specifically, collapsing possible sequences of actions into just a set of possible actions and only worrying about one photo doesn't result in any loss of generality but makes things easier to talk about. We pick some prior over the set of possible reporters. We can collect training data which looks like pairs (action, answer). This can, however, only cover a small part of the action space, specifically limited by how well we can “do science.” This prior has to depend on the world model. If it didn't, then you could have two different world models with the same behavior on the set where we can do science, but where one of the models understands what the actual value of the latent variable is, and one is only able to predict that the diamond will still appear there but can't tell whether the diamond is still real. A direct translator for the second will be a human simulator for the first. More generally, we don't really have guarantees that the world model will actually understand what's going on, and so it might genuinely believe things about the latent variable that are wrong. Like, we might have models which think exactly like a human and so they don't understand anything a human wouldn't understand, so the direct translator for this model would also simultaneously be a human simulator, because the model literally believes exactly what a human would believe. There are more than just human simulators and direct translators that are consistent with the training data; there are a huge number of ways the remaining data can be labeled. This basically breaks any proposal that starts with penalizing reporters that look like a human simulator. I basically assume the “human simulator” is actually simulating us and whatever science-doing process we come up with, since this doesn't change much and we're assuming that doing science alone isn't enough because we're looking at the worst case. So, for any fixed world model, we need to come up with some prior over all reporters such that after picking all the reporters consistent with our data (action, answer), the maximum of the resulting posterior is the direct translator. A big assumption I'm making is that the KL distance between our prior and the complexity prior has to be finite. 
My intuition for why this has to hold is that a) the complexity prior is universal, and b) even if our prior doesn't have to cover all possible reporters, it still seems reasonable that as complexity increases, the number of reporters our prior assigns nonzero probability to needs to increase exponentially, which gets us something similar to universality of the complexity prior on the subset of plausible reporters. I think this point is the main weak point of the argument. Since the amount of data we can get is bounded by our ability to do science, the amount of information we can get from that data is also bounded. However, since we're assuming the worst case, we can make the complexity difference between the direct translator and all the other junk arbitraril...
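As a rough formalization of the ELK argument in this excerpt, here is a minimal LaTeX sketch; the symbols R, D, P, C, and r* are my own labels rather than notation from the original post:

```latex
% Minimal sketch of the setup above; notation (R, D, P, C, r*) is mine, not the post's.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

Let $\mathcal{R}$ be the set of possible reporters, $D$ the training data of
(action, answer) pairs we can label by ``doing science,'' $P$ our prior over
$\mathcal{R}$ (which may depend on the world model), $C$ the complexity
(universal) prior, and $r^{*}$ the direct translator. The requirement is that
the direct translator wins the posterior after conditioning on consistency
with $D$:
\[
  P(r \mid D) \;\propto\; P(r)\,\mathbf{1}\!\left[\,r \text{ is consistent with } D\,\right],
  \qquad
  \arg\max_{r \in \mathcal{R}} P(r \mid D) \;=\; r^{*}.
\]
The assumption under discussion is that our prior is only boundedly far from
the complexity prior,
\[
  D_{\mathrm{KL}}\!\left(P \,\middle\|\, C\right) < \infty,
\]
while the data $D$ can also only supply a bounded number of bits, since it is
limited by how well we can do science.

\end{document}
```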
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New year, new research agenda post, published by Charlie Steiner on January 12, 2022 on The AI Alignment Forum. Thanks to Steve Byrnes, Adam Shimi, John Wentworth, and Peter Barnett for feedback. In a nutshell, my plan A is to understand what we want from superintelligent AI really, really well. So well that we can write down a way of modeling humans that illuminates human preferences including higher-order preferences about how we want to be modeled, and do this in a principled rather than ad-hoc way. Achieving this understanding is highly ambitious, in a way that is mostly but not entirely parallel to "ambitious value learning." If we understand value learning before we build superintelligent AI, there's a straightforward path to achieving a good future without paying a costly alignment tax - by alignment tax I mean all those things that slow down aligned AI being developed and "ready for liftoff," that collectively create selection pressure against safety. This problem is more tractable than many people think. I think of this plan as an instance of the more general plan "solve value learning first." Some other tenable plans are "try to put humans in control," "get the AI to do good things prosaically," and "tinker with the AI until it value learns" - plus intermediate points between these. What do I think the future is like? I expect superintelligent AI in the short to medium term, centralized around a small number of points of development. By short to medium term, I mean I'd put my 50% confidence interval between 2031 and 2049. I don't think we need compute to be many orders of magnitude cheaper, and I don't think we need two or more paradigm shifts on the order of neural nets overtaking support vector machines. The timeline is urgent, but not to the point that we should start ditching things like blue-sky research or gradual coalition-building. By centralized, I mean it's possible to make big changes by having good solutions implemented in a small number of systems. Coordination may be important, but it isn't an inherent part of solving the problem. All that said, that's just what I think is going to happen, not what's required for "solve value learning first" to be a good idea. Value learning research is still valuable in decentralized scenarios, unless we go so far as to avoid powerful agential AI long-term. Because it's more on the blue-sky end of the spectrum, longer timelines actually favor solving value learning over more atheoretic approaches, while if timelines are very short I'd advocate for "try to put humans in control" and hope for the best. If we fail to understand value learning before we build superintelligent AI, I'm worried about some combination of groups committed to building aligned AI being less competitive because we can't learn human values efficiently, and practical-minded alignment schemes having bad behavior in edge cases because of simplifying assumptions about humans. A basic example: if humans are assumed not to be manipulable, then an AI that thoroughly maximizes what humans (are modeled to) want will be incredibly manipulative. From the AI's perspective, the humans love being deceived, because why else would they rate it so highly? 
And in fact it's a tricky technical problem to avoid manipulation without sophisticated value learning, because the notion of "manipulation" is so intertwined with human meta-preferences - labeling things as manipulation means un-endorsing some of our revealed preferences. Similar hidden gotchas can pop up in other attempts to cut corners on human modeling, and at some point it just becomes faster to solve value learning than to deal with each gotcha individually. What's the broad plan? The basic strategy can be summed up as "be Geoff Hinton" (as in godfather of deep learning Geoffrey Hinton). Know an im...
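The manipulation example in this excerpt can be made concrete with a toy sketch (my own construction, not from the post; the action names and numbers are invented purely for illustration):

```python
# Toy sketch (not from the post) of the failure mode described above: if a value-learning
# scheme assumes humans cannot be manipulated, so that the rating a human gives is treated
# as the true value, then the rating-maximizing action can be the manipulative one.
# Action names and numbers are hypothetical, chosen only to illustrate the point.

ACTIONS = {
    "do_the_task_honestly": {"true_value": 0.8, "human_rating": 0.8},
    "flatter_and_deceive":  {"true_value": 0.1, "human_rating": 0.95},  # manipulation inflates the rating
}

def naive_value_estimate(action: str) -> float:
    """Simplifying assumption: humans are not manipulable, so rating == value."""
    return ACTIONS[action]["human_rating"]

best_action = max(ACTIONS, key=naive_value_estimate)
print(best_action)                         # -> "flatter_and_deceive"
print(ACTIONS[best_action]["true_value"])  # -> 0.1: low true value despite the top rating
```

The point is only that the optimum of "maximize predicted ratings" diverges from the optimum of "maximize what the human actually wants" as soon as ratings can be influenced.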
Crown Council Mentor of the Month | Helping Dental Teams Build a Culture of Success
Dr. Peter Barnett received his dental degree in 1977 and an MBA from the University of Pennsylvania Wharton School of Business in 1979. For 25 years Dr. Barnett used his education and broad knowledge of dentistry and business to help grow the Pearle Vision Centers dental practice division; to help the Voluntary Hospitals of America develop the Physician Hospital Organizations; and later to help Prudential develop a new business model to fully integrate dental into the company's medical insurance business. In 1995, he became COO for United Dental Care, a dental benefits company, and later went on to co-found HealthASPx, a third-party administrator and e-commerce business, and Denex, a startup dental insurance company in the greater DC area. After all of that, in 2003, Dr. Barnett left the corporate world to start his own dental practice, Star Ranch Dental, in Plano, Texas, where he practices today.
Peter Barnett revisits Acts 2:42-47 to offer his perspective, in a sermon series exploring the Book of Acts.
Peter Barnett speaks about Abraham and the birth of a new Kingdom, in a new series about seeing the Kingdom expressed through Old Testament Characters
Craig Shannon and Peter Barnett speak on being Relationally Committed through community as part of the sermon series Family Values
Peter Barnett looks at Matt 7:7-12
Welcome to the Bradley Barnett Lounge - Prepare to sit back, relax and enjoy the random conversations that follow, loosely based around performing and the stage; Peter Barnett and Steve Bradley talk about the hardships of auditioning, hypnotherapy and what to do when your veins burst on the London Underground. It's funny. Honest.
Peter Barnett looks at Matthew 4:23-25. What are the Words and Works of the Kingdom?
Peter Barnett looks at the subject of Redeeming My Time in week 6 of the series A New Kind Of Me
Peter Barnett speaks on Romans chapters 9 through 11
Peter Barnett speaks on Romans 8:29-30