POPULARITY
Have you ever felt like big success is just a series of small wins stacked over time? Most people overestimate what they can do in a day—and underestimate what they can achieve in a year.

In this episode, Ryan Carey shares the real, unfiltered story of how he turned consistent daily actions into a thriving business — and how you can too. Whether you're just starting or looking to scale, his journey will reignite your belief in steady growth over flashy shortcuts.

Together, we dive into how consistency, relationship-building, and trusting the long game helped Ryan scale a thriving, purpose-driven business without burning out.

If you're feeling frustrated by slow growth or tempted by flashy shortcuts, this episode will reconnect you to the real game of entrepreneurship: playing for legacy, not likes.

What You'll Learn in This Episode
How small, daily wins compound into massive business growth
Why patience is a superpower most entrepreneurs overlook
How to prioritize authentic relationships over marketing gimmicks
The emotional challenges of slow seasons—and how to navigate them
Mindset shifts that helped Ryan scale sustainably and intentionally
How to stay grounded and focused during periods of uncertainty

Key Takeaways
✔️ Consistency compounds faster than you think—if you stay patient.
✔️ Your business grows at the speed of the relationships you build.
✔️ Success is hidden in the quiet seasons, not just the highlight reels.
✔️ You don't need massive wins—you need daily micro-commitments.
✔️ Trust the long game. If you master the process, the results will chase you.
✔️ Sustainability > speed when building something that lasts.

Timestamps
[00:00] – Why slow growth often beats fast success
[03:50] – Ryan's backstory: building momentum one day at a time
[08:10] – The real compounding effect of small daily wins
[14:30] – How authentic relationships accelerated Ryan's business
[20:00] – Managing mindset during the "quiet seasons"
[26:40] – The patience paradox: staying committed when results are invisible
[31:50] – How Ryan stays focused and consistent today
[37:00] – Final advice: How to reframe what success really means

Choose Your Next Steps:
Identify one small action you can commit to daily (even when no one's watching)
Reach out to someone in your network and nurture an authentic relationship—no agenda
Reflect: Am I optimizing for flash… or for legacy?
Share your biggest breakthrough from this episode with me on Instagram: @itsgeorgebryant

Connect with Ryan Carey
Loved Ryan's insights? Be sure to connect with him and let him know your biggest takeaway from the episode!
Instagram
Website
BetterOn

Join The Alliance – The Relationship Beats Algorithms™ community for entrepreneurs who scale with trust and connection
Apply for 1:1 Coaching – Ready to build your business with sustainability, impact, and ease? Apply here
Live Events – Get in the room where long-term success is built: mindofgeorge.com/event
Ryan Carey, CEO of BetterOn, is on a mission to help leaders and professionals build authentic presence—on camera, in person, and within their organizations. Drawing on his early experiences as YouTube's 41st employee, Ryan reveals how mastering video presence can fast-track leadership development, strengthen teams, and inspire personal growth across every area of life. Companies […] The post Grow Your Team With On-Camera Executive Presence, With Ryan Carey first appeared on Business Creators Radio Show with Adam Hommey.
Today, we're diving into a transformative topic: enhancing your professional and personal presence through self-awareness, confidence, and communication. Joining us is Ryan Carey, the founder of BetterOn, a career development program that helps individuals unlock their potential by improving their on-camera communication skills.

Ryan's work is about more than professional development—it fosters self-awareness, embraces authenticity, and builds confidence that extends beyond the workplace into our families and communities. Through BetterOn, Ryan has helped countless individuals strengthen their interpersonal skills, improve their mental health, and become more present with their families.

Chapters
00:00 Introduction to Self-Awareness and Communication
01:09 Ryan Carey's Journey and the Birth of BetterOn
04:33 The Evolution of YouTube and Its Impact
08:01 The Vulnerability of Public Speaking
10:14 How BetterOn Enhances Self-Confidence
12:52 The Role of Reflection in Personal Growth
17:50 Overcoming Self-Doubt and Embracing Authenticity
20:21 The Connection Between Self-Awareness and Mental Health
23:38 Understanding Self-Awareness and Behavior
29:12 The Importance of Self-Reflection
34:07 Courage and Responsibility in Personal Growth
39:44 Mastering Presence for Effective Communication

Check out the Website for Interactive Activity Guides, Resources, Full Transcripts, and all things YDP: www.youngdadpod.com
Click the Link for YDP Deals (Joon, Forefathers & more): https://linktr.ee/youngdadpod
Want to be a guest on Young Dad Podcast? Send Jey Young a message on PodMatch, here: https://www.joinpodmatch.com/youngdad
Lastly, consider a monetary donation to support the Pod: https://buymeacoffee.com/youngdadpod
In this episode of the YouTube Creators Hub podcast, host Dusty Porter speaks with Ryan Carey, CEO of BetterOn and one of YouTube's earliest team members. They discuss Ryan's journey from working at YouTube during its early days to founding BetterOn, a company focused on helping individuals enhance their presence and communication skills through video. The conversation delves into the importance of authenticity, building trust with audiences, and overcoming self-doubt as a creator. Ryan shares valuable insights on how to connect with audiences and the significance of self-awareness in the creative process.

What We Offer Creators:
Join Creator Communities. A place to gather with other creators every single day. This provides you access to Our Private Discord Server, Monthly Mastermind Group, and MORE!
Hire Dusty To Be Your YouTube Coach
Subscribe to our weekly newsletter. Each week I document what I'm doing in my business and creative journey, share new things I've discovered, mistakes I've made, and much more!

Follow The Show: Facebook /// X /// YouTube /// Instagram

About Ryan: Ryan Carey is the CEO behind BetterOn, a company dedicated to helping leaders and professionals build authentic presence on video, in person, and across workplaces. A pioneer in the video space, he was one of YouTube's earliest team members, witnessing firsthand the platform's explosive growth and transformative power. After his own journey as a YouTube content creator, Ryan launched BetterOn in 2014, combining his unique insights into video with a mission to elevate workplace communication. Forward-thinking companies like Google, IBM, Deloitte, and Red Hat use BetterOn to invest in their high-potential people.

Connect With Ryan Here: BetterOn /// LinkedIn /// Instagram

Request a Guest To Come On The Podcast:
News.com.au's Shannon Molloy meets Ryan Carey, a man who spent 40 years as part of a cult-like church called the Geelong Revival Centre. For more, head to news.com.au. See omnystudio.com/listener for privacy information.
Ryan Carey is the founder of Golden Age Auctions, the world's leading destination for golf collectibles and memorabilia. Founded in 2006, Golden Age prides itself on helping collectors buy, sell, and consign some of the most significant collectibles in golf history. The company has clients in 58 different countries and has been regularly featured on ESPN, Sports Illustrated, Fox Business, Golf Digest, Golf.com, CBS Sports, CNBC, and more. Ryan and Jordan talk about auctioning off some of Tiger Woods' stuff, actually talking to Tiger, and the world of golf collectibles.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Managing risks while trying to do good, published by Wei Dai on February 1, 2024 on The Effective Altruism Forum.

I often think about "the road to hell is paved with good intentions".[1] I'm unsure to what degree this is true, but it does seem that people trying to do good have caused more negative consequences in aggregate than one might naively expect.[2] "Power corrupts" and "power-seekers using altruism as an excuse to gain power" are two often cited reasons for this, but I think they don't explain all of it.

A more subtle reason is that even when people are genuinely trying to do good, they're not entirely aligned with goodness. Status-seeking is a powerful motivation for almost all humans, including altruists, and we frequently award social status to people for merely trying to do good, before seeing all of the consequences of their actions. This is in some sense inevitable as there are no good alternatives. We often need to award people with social status before all of the consequences play out, both to motivate them to continue to try to do good, and to provide them with influence/power to help them accomplish their goals.

A person who consciously or subconsciously cares a lot about social status will not optimize strictly for doing good, but also for appearing to do good. One way these two motivations diverge is in how to manage risks, especially risks of causing highly negative consequences. Someone who wants to appear to do good would be motivated to hide or downplay such risks, from others and perhaps from themselves, as fully acknowledging such risks would often amount to admitting that they're not doing as much good (on expectation) as they appear to be.

How to mitigate this problem

Individually, altruists (to the extent that they endorse actually doing good) can make a habit of asking themselves and others what risks they may be overlooking, dismissing, or downplaying.[3]

Institutionally, we can rearrange organizational structures to take these individual tendencies into account, for example by creating positions dedicated to or focused on managing risk. These could be risk management officers within organizations, or people empowered to manage risk across the EA community.[4]

Socially, we can reward people/organizations for taking risks seriously, or punish (or withhold rewards from) those who fail to do so. This is tricky because, due to information asymmetry, we can easily create "risk management theaters" akin to "security theater" (which, come to think of it, is a type of risk management theater). But I think we should at least take notice when someone or some organization fails, in a clear and obvious way, to acknowledge risks or to do good risk management, for example not writing down a list of important risks to be mindful of and keeping it updated, or avoiding/deflecting questions about risk. More optimistically, we can try to develop a culture where people and organizations are monitored and held accountable for managing risks substantively and competently.

[1] due in part to my family history
[2] Normally I'd give some examples here, but we can probably all think of some from the recent past.
[3] I try to do this myself in the comments.
[4] an idea previously discussed by Ryan Carey and William MacAskill

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Today we are joined by Ben Schoff and Ryan Carey of Golden Age Auctions to talk about the interesting world of golf memorabilia. Whether it is a signed flag, green jacket, Scotty Cameron putter, or something else in the golf collectibles world, these are the guys who sell the coolest items out there. I found this conversation super interesting from the standpoint of what it means to procure a valuable item, what drives value and makes people want something in golf or other sports, and the things they can do as an auction house to drive the value of an item. There are so many things to think through when it comes not only to what makes an item valuable, but also how to treat a valuable item. Million-dollar items don't get sent by UPS! Tune in to hear a super fun chat in the cultural realm of the game and let us know what you think! Our links as well as Golden Age's are below.

Cheers,
- The Tie Guys

Golden Age Auctions Website: https://goldenageauctions.com
GAA Instagram: https://www.instagram.com/goldenageauctions/
Website: https://www.thetiepodcast.com
Instagram: https://www.instagram.com/thetiepodcast/?hl=en
Twitter: https://mobile.twitter.com/thetiepodcast
GoodWalk Coffee: https://goodwalkcoffee.com CODE: thetie for 20% off
BDraddy: bdraddy.com CODE: thetie25 for 25% off
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reward Hacking from a Causal Perspective, published by Tom Everitt on July 21, 2023 on The AI Alignment Forum.

Post 5 of Towards Causal Foundations of Safe AGI, preceded by Post 1: Introduction, Post 2: Causality, Post 3: Agency, and Post 4: Incentives. By Francis Rhys Ward, Tom Everitt, Sebastian Benthall, James Fox, Matt MacDermott, Milad Kazemi, and Ryan Carey, representing the Causal Incentives Working Group. Thanks also to Toby Shevlane and Aliya Ahmad.

AI systems are typically trained to optimise an objective function, such as a loss or reward function. However, objective functions are sometimes misspecified in ways that allow them to be optimised without doing the intended task. This is called reward hacking. It can be contrasted with misgeneralisation, which occurs when the system extrapolates (potentially) correct feedback in unintended ways. This post will discuss why human-provided rewards can sometimes fail to reflect what the human really wants, and why this can lead to malign incentives. It also considers several proposed solutions, all from the perspective of causal influence diagrams.

Why Humans might Reward the Wrong Behaviours

In situations where a programmatic reward function is hard to specify, AI systems can often be trained from human feedback. For example, a content recommender may be optimising for likes, and a language model trained on feedback from human raters. Unfortunately, humans don't always reward the behaviour they actually want. For example, a human may give positive feedback for a credible-sounding summary, even though it actually misses key points.

More concerningly, the system may covertly influence the human into providing positive feedback. For example, a recommender system with the goal of maximising engagement can do so by influencing the user's preferences and mood. This leads to a kind of reward misspecification, where the human provides positive feedback for situations that don't actually bring them utility.

A causal model of the situation reveals the agent may have an instrumental control incentive (or similarly, an intention) to manipulate the user's preferences. This can be inferred directly from the graph. First, the human may be influenced by the agent's behaviour, as they must observe it before evaluating it. And, second, the agent can get better feedback by influencing the human. For example, we typically read a post before deciding whether to "like" it. By making the user more emotional, the system may be more likely to increase engagement. While this effect is stronger for longer interactions, the incentive is there even for "single time-step" interactions.

Scalable Oversight

One proposed solution to the reward misspecification problem is scalable oversight. It provides the human with a helper agent that advises them on what feedback to give. The helper agent observes the learning agent's behaviour, and may point out, for instance, an inaccuracy in a credible-looking summary, or warn against manipulation attempts. The extra assistance may make it harder for the learning agent to manipulate or deceive the human. Influential scalable oversight agendas include iterated distillation and amplification, AI safety via debate, recursive reward modelling, and constitutional AI.
Unfortunately, the learning agent still has an incentive to deceive the human or manipulate their preferences, as the human's preferences still satisfy the graphical criterion for an instrumental control incentive (they sit on a directed causal path from behaviour to feedback). Additionally, the learning agent also has an incentive to deceive the helper agent. An important question for scalable oversight schemes is whether weaker agents can effectively help to supervise more capable agents (and whether this can be done recursively to supervise agents much smart...
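The path condition quoted above ("on a directed causal path from behaviour to feedback") is easy to check mechanically. Below is a minimal sketch of just that condition, not the full formal criterion from the papers; it assumes networkx, and the node names are illustrative, not from the post.

```python
# Hedged sketch (not from the original post): check whether a variable lies
# on a directed path from the agent's decision node to its feedback node,
# the path condition for an instrumental control incentive quoted above.
import networkx as nx

# Recommender example: behaviour influences the user's preferences,
# which in turn influence the feedback the agent is trained on.
G = nx.DiGraph([
    ("behaviour", "preferences"),
    ("preferences", "feedback"),
    ("behaviour", "feedback"),
])

def on_decision_utility_path(graph, node, decision="behaviour", utility="feedback"):
    """True if `node` lies on some directed path decision -> node -> utility."""
    return (node not in (decision, utility)
            and nx.has_path(graph, decision, node)
            and nx.has_path(graph, node, utility))

print(on_decision_utility_path(G, "preferences"))  # True: manipulation incentive
```

In this toy graph the user's preferences lie on such a path, matching the manipulation incentive the post describes for recommender systems.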
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Incentives from a causal perspective, published by Tom Everitt on July 10, 2023 on The AI Alignment Forum.

Post 4 of Towards Causal Foundations of Safe AGI, preceded by Post 1: Introduction, Post 2: Causality, and Post 3: Agency. By Tom Everitt, James Fox, Ryan Carey, Matt MacDermott, Sebastian Benthall, and Jon Richens, representing the Causal Incentives Working Group. Thanks also to Toby Shevlane and Aliya Ahmad.

"Show me the incentive, and I'll show you the outcome" - Charlie Munger

Predicting behaviour is an important problem when designing and deploying agentic AI systems. Incentives capture some key forces that shape agent behaviour, which don't require us to fully understand the internal workings of a system. This post shows how a causal model of an agent and its environment can reveal what the agent wants to know and what it wants to control, as well as how it will respond to commands and influence its environment. A complementary result shows that some incentives can only be inferred from a causal model, so a causal model of the agent's environment is strictly necessary for a full incentive analysis.

Value of information

What information would an agent like to learn? Consider, for example, Mr Jones deciding whether to water his lawn, based on the weather report, and whether the newspaper arrived in the morning. Knowing the weather means that he can water more when it will be sunny than when it will be raining, which saves water and improves the greenness of the grass. The weather forecast therefore has information value for the sprinkler decision, and so does the weather itself, but the newspaper arrival does not. We can quantify how useful observing the weather is for Mr Jones, by comparing his expected utility in a world in which he does observe the weather, to a world in which he doesn't. (This measure only makes sense if we can assume that Mr Jones adapts appropriately to the different worlds, i.e. he needs to be agentic in this sense.)

The causal structure of the environment reveals which variables provide useful information. In particular, the d-separation criterion captures whether information can flow between variables in a causal graph when a subset of variables are observed. In single-decision graphs, value of information is possible when there is an information-carrying path from a variable to the agent's utility node, when conditioning on the decision node and its parents (i.e. the "observed" nodes). For example, in the above graph, there is an information-carrying path from forecast to weather to grass greenness, when conditioning on the sprinkler, forecast and newspaper. This means that the forecast can (and likely will) provide useful information about optimal watering. In contrast, there is no such path from the newspaper arrival. In that case, we call the information link from the newspaper to the sprinkler nonrequisite.

Understanding what information an agent wants to obtain is useful for several reasons. First, in e.g. fairness settings, the question of why a decision was made is often as important as what the decision was. Did gender determine a hiring decision? Value of information can help us understand what information the system is trying to glean from its available observations (though a formal understanding of proxies remains an important open question).
More philosophically, some researchers consider an agent's cognitive boundary as the events that the agent cares to measure and influence. Events that lack value of information must fall outside the measuring part of this boundary.

Response Incentives

Related to the value of information are response incentives: what changes in the environment would a decision chosen by an optimal policy respond to? Changes are operationalised as post-policy interventions, i.e. as interventions that...
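For readers who want to experiment with the sprinkler example above, here is a small sketch of one common formalisation of the value-of-information test (my formalisation, not quoted from the post): cut the information link from an observation to the decision, then ask whether the observation is still d-connected to the utility node given the decision and its remaining parents. It assumes networkx's d_separated function (renamed is_d_separator in newer releases); the node names are mine.

```python
# Hedged sketch of a value-of-information test (assumes networkx >= 2.8,
# where nx.d_separated is available; my formalisation of the criterion).
import networkx as nx

# Mr Jones's sprinkler example; grass greenness is the utility node.
G = nx.DiGraph([
    ("weather", "forecast"),
    ("weather", "grass"),
    ("forecast", "sprinkler"),   # information link: observed before deciding
    ("newspaper", "sprinkler"),  # information link: also observed
    ("sprinkler", "grass"),
])

def may_have_voi(graph, obs, decision="sprinkler", utility="grass"):
    """Cut the link obs -> decision, then test d-connection to the utility
    node given the decision and its remaining parents."""
    cut = graph.copy()
    cut.remove_edge(obs, decision)
    conditioned = set(cut.predecessors(decision)) | {decision}
    return not nx.d_separated(cut, {obs}, {utility}, conditioned)

print(may_have_voi(G, "forecast"))   # True: the forecast is worth observing
print(may_have_voi(G, "newspaper"))  # False: a nonrequisite information link
```

As in the post, the forecast link comes out requisite while the newspaper link comes out nonrequisite.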
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Agency from a causal perspective, published by Tom Everitt on June 30, 2023 on The AI Alignment Forum.

Post 3 of Towards Causal Foundations of Safe AGI, preceded by Post 1: Introduction and Post 2: Causality. By Matt MacDermott, James Fox, Rhys Ward, Jonathan Richens, and Tom Everitt, representing the Causal Incentives Working Group. Thanks also to Ryan Carey, Toby Shevlane, and Aliya Ahmad.

The purpose of this post is twofold: to lay the foundation for subsequent posts by exploring what agency means from a causal perspective, and to sketch a research program for a deeper understanding of agency.

The Importance of Understanding Agency

Agency is a complex concept that has been studied from multiple perspectives, including social science, philosophy, and AI research. Broadly, it refers to a system able to act autonomously. For the purposes of this blog post, we interpret agency as goal-directedness, i.e. acting as if trying to direct the world in some particular direction. There are strong incentives to create more agentic AI systems. Such systems could potentially do many tasks humans are currently needed for, such as independently researching topics, or even running their own companies. However, making systems more agentic comes with an additional set of potential dangers and harms, as goal-directed AI systems could become capable adversaries if their goals are misaligned with human interest.

A better understanding of agency may let us:
Understand dangers and harms from powerful machine learning systems.
Evaluate whether a particular ML model is dangerously agentic.
Design systems that are not agentic, such as AGI scientists or oracles, or which are agentic in a safe way.
Lay a foundation for progress on other AGI safety topics, such as interpretability, incentives, and generalisation.
Preserve human agency, e.g. through a better understanding of the conditions under which agency is enhanced or diminished.

Degrees of freedom

(Goal-directed) agents come in all shapes and sizes – from bacteria to humans, from football teams to governments, and from RL policies to LLM simulacra – but they share some fundamental features. First, an agent needs the freedom to choose between a set of options. We don't need to assume that this decision is free from causal influence, or that we can't make any prediction about it in advance – but there does need to be a sense in which it could either go one way or another. Dennett calls this degrees of freedom. For example, Mr Jones can choose to turn his sprinkler on or not. We can model his decision as a random variable with "watering" and "not watering" as possible outcomes. Freedom comes in degrees. A thermostat can only choose heater output, while most humans have access to a range of physical and verbal actions.

Influence

Second, in order to be relevant, an agent's behaviour must have consequences. Mr Jones's decision to turn on the sprinkler affects how green his grass becomes. The amount of influence varies between different agents. For example, a language model's influence will heavily depend on whether it only interacts with its own developers, or with millions of users through a public API. Suggested measures of influence include (causal) channel capacity, performative power, and power in Markov decision processes.

Adaptation

Third, and most importantly, goal-directed agents do things for reasons.
That is, (they act as if) they have preferences about the world, and these preferences drive their behaviour. Mr Jones turns on the sprinkler because it makes the grass green. If the grass didn't need water, then Mr Jones likely wouldn't water it. The consequences drive the behaviour. This feedback loop, or backwards causality, can be represented by adding a so-called mechanism node to each object-level node in the original graph. The mechanism n...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Causality: A Brief Introduction, published by tom4everitt on June 20, 2023 on LessWrong.

Post 2 of Towards Causal Foundations of Safe AGI, see also Post 1: Introduction. By Lewis Hammond, Tom Everitt, Jon Richens, Francis Rhys Ward, Ryan Carey, Sebastian Benthall, and James Fox, representing the Causal Incentives Working Group. Thanks also to Alexis Bellot, Toby Shevlane, and Aliya Ahmad.

Causal models are the foundations of our work. In this post, we provide a succinct but accessible explanation of causal models that can handle interventions, counterfactuals, and agents, which will be the building blocks of future posts in the sequence. Basic familiarity with (conditional) probabilities will be assumed.

What is causality?

What does it mean for the rain to cause the grass to become green? Causality is a philosophically intriguing topic that underlies many other concepts of human importance. In particular, many concepts relevant to safe AGI, like influence, response, agency, intent, fairness, harm, and manipulation, cannot be grasped without a causal model of the world, as we mentioned in the intro post and will discuss further in subsequent posts.

We follow Pearl and adopt an interventionist definition of causality: the sprinkler today causally influences the greenness of the grass tomorrow, because if someone intervened and turned on the sprinkler, then the greenness of the grass would be different. In contrast, making the grass green tomorrow has no effect on the sprinkler today (assuming no one predicts the intervention). So the sprinkler today causally influences the grass tomorrow, but not vice versa, as we would intuitively expect.

Interventions

Causal Bayesian Networks (CBNs) represent causal dependencies between aspects of reality using a directed acyclic graph. An arrow from a variable A to a variable B means that A influences B under some fixed setting of the other variables. For example, we draw an arrow from sprinkler (S) to grass greenness (G).

For each node in the graph, a causal mechanism of how the node is influenced by its parents is specified with a conditional probability distribution. For the sprinkler, a distribution p(S) specifies how commonly it is turned on, e.g. p(S=on) = 30%. For the grass, a conditional distribution p(G∣S) specifies how likely it is that the grass becomes green when the sprinkler is on, e.g. p(G=green∣S=on) = 100%, and how likely it is that the grass becomes green when the sprinkler is off, e.g. p(G=green∣S=off) = 30%. By multiplying the distributions together, we get a joint probability distribution p(S,G) = p(S)p(G∣S) that describes the likelihood of any combination of outcomes.

An intervention on a system changes one or more causal mechanisms. For example, an intervention that turns the sprinkler on corresponds to replacing the causal mechanism p(S) for the sprinkler with a new mechanism 1(S=on) that always has the sprinkler on. The effects of the intervention can be computed from the updated joint distribution p(S,G∣do(S=on)) = 1(S=on)p(G∣S), where 1(S=on) is the intervened mechanism and do(S=on) denotes the intervention.

Ultimately, all statistical correlations are due to causal influences. Hence, for a set of variables there is always some CBN that represents the underlying causal structure of the data generating process, though extra variables may be needed to explain e.g. unmeasured confounders.
Counterfactuals

Suppose that the sprinkler is on and the grass is green. Would the grass have been green had the sprinkler not been on? Questions about counterfactuals like these are harder than questions about interventions, because they involve reasoning across multiple worlds. To handle such reasoning, structural causal models (SCMs) refine CBNs in three important ways. First, background context that is shared across hypothetical worlds is ex...
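To make the Interventions passage above concrete, here is a toy sketch (mine, not the authors') that computes the observational joint p(S,G) and the interventional p(S,G∣do(S=on)) from the numbers given in the text.

```python
# Toy sketch of the sprinkler CBN above, using the probabilities from the text.
p_S = {"on": 0.3, "off": 0.7}                            # mechanism p(S)
p_G_given_S = {"on":  {"green": 1.0, "not green": 0.0},
               "off": {"green": 0.3, "not green": 0.7}}  # mechanism p(G|S)

def joint(mech_S):
    """p(S, G) = p(S) * p(G | S) for a given sprinkler mechanism."""
    return {(s, g): mech_S[s] * p_G_given_S[s][g]
            for s in mech_S for g in p_G_given_S[s]}

print(joint(p_S))                 # observational p(S, G)
do_on = {"on": 1.0, "off": 0.0}   # intervention: replace p(S) with 1(S=on)
print(joint(do_on))               # interventional p(S, G | do(S=on))
```

The intervention is exactly the mechanism replacement described above: p(S) is swapped for the indicator 1(S=on), while p(G∣S) is left untouched.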
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Causality: A Brief Introduction, published by Tom Everitt on June 20, 2023 on The AI Alignment Forum.

Post 2 of Towards Causal Foundations of Safe AGI, see also Post 1: Introduction. By Lewis Hammond, Tom Everitt, Francis Rhys Ward, Ryan Carey, Sebastian Benthall, and James Fox, representing the Causal Incentives Working Group. Thanks also to Alexis Bellot, Toby Shevlane, and Aliya Ahmad.

Causal models are the foundations of our work. In this post, we provide a succinct but accessible explanation of causal models that can handle interventions, counterfactuals, and agents, which will be the building blocks of future posts in the sequence. Basic familiarity with (conditional) probabilities will be assumed.

What is causality?

What does it mean for the rain to cause the grass to become green? Causality is a philosophically intriguing topic that underlies many other concepts of human importance. In particular, many concepts relevant to safe AGI, like influence, response, agency, intent, fairness, harm, and manipulation, cannot be grasped without a causal model of the world, as we mentioned in the intro post and will discuss further in subsequent posts.

We follow Pearl and adopt an interventionist definition of causality: the sprinkler today causally influences the greenness of the grass tomorrow, because if someone intervened and turned on the sprinkler, then the greenness of the grass would be different. In contrast, making the grass green tomorrow has no effect on the sprinkler today (assuming no one predicts the intervention). So the sprinkler today causally influences the grass tomorrow, but not vice versa, as we would intuitively expect.

Interventions

Causal Bayesian Networks (CBNs) represent causal dependencies between aspects of reality using a directed acyclic graph. An arrow from a variable A to a variable B means that A influences B under some fixed setting of the other variables. For example, we draw an arrow from sprinkler (S) to grass greenness (G).

For each node in the graph, a causal mechanism of how the node is influenced by its parents is specified with a conditional probability distribution. For the sprinkler, a distribution p(S) specifies how commonly it is turned on, e.g. p(S=on) = 30%. For the grass, a conditional distribution p(G∣S) specifies how likely it is that the grass becomes green when the sprinkler is on, e.g. p(G=green∣S=on) = 100%, and how likely it is that the grass becomes green when the sprinkler is off, e.g. p(G=green∣S=off) = 30%. By multiplying the distributions together, we get a joint probability distribution p(S,G) = p(S)p(G∣S) that describes the likelihood of any combination of outcomes.

An intervention on a system changes one or more causal mechanisms. For example, an intervention that turns the sprinkler on corresponds to replacing the causal mechanism p(S) for the sprinkler with a new mechanism 1(S=on) that always has the sprinkler on. The effects of the intervention can be computed from the updated joint distribution p(S,G∣do(S=on)) = 1(S=on)p(G∣S), where 1(S=on) is the intervened mechanism and do(S=on) denotes the intervention.

Ultimately, all statistical correlations are due to causal influences. Hence, for a set of variables there is always some CBN that represents the underlying causal structure of the data generating process, though extra variables may be needed to explain e.g. unmeasured confounders.
Counterfactuals

Suppose that the sprinkler is on and the grass is green. Would the grass have been green had the sprinkler not been on? Questions about counterfactuals like these are harder than questions about interventions, because they involve reasoning across multiple worlds. To handle such reasoning, structural causal models (SCMs) refine CBNs in three important ways. First, background context that is shared across hypothetical worlds is ex...
This episode explores the potential of plant-based, psychedelic medicine as a means to not just treat but heal from PTSD. It's a fascinating conversation with Carlos Durand and Ryan Carey, the co-founders of Operation Purify. This project is taking veterans on a two-week-long experience in Colombia to explore the depths of their soul with the help of the local ancient healers and ayahuasca. To learn more and participate, click here. Don't forget to SUBSCRIBE and SMASH THE BELL to get informed of new videos coming down the pipe. ✅ If you're a busy veteran man looking to lose belly fat, improve your energy levels, and reduce some chronic pain issues, you can apply to the BE.A.S.T. Program. All you need to do is apply here. ✅ Find your ideal training program and smash your goals. Try my training app for free for one week. Head to: https://davemorrow.net ✅ Want to come join us in the Hard To Kill Tribe? Click here: https://www.facebook.com/groups/hrd2killtrgprgms ✅ Want the #1 Book in Canada for getting warfighters out of the pain cave? Get my book, The Nimble Warrior, here: https://thenimblewarriorbook.com/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introduction to Towards Causal Foundations of Safe AGI, published by tom4everitt on June 12, 2023 on LessWrong.

By Tom Everitt, Lewis Hammond, Rhys Ward, Ryan Carey, James Fox, Sebastian Benthall, Matt MacDermott and Shreshth Malik, representing the Causal Incentives Working Group. Thanks also to Toby Shevlane, MH Tessler, Aliya Ahmad, Zac Kenton, and Maria Loks-Thompson.

Over the next few years, society, organisations, and individuals will face a number of fundamental questions stemming from the rise of advanced AI systems:
How to make sure that advanced AI systems do what we want them to (the alignment problem)?
What makes a system safe enough to develop and deploy, and what constitutes sufficient evidence of that?
How do we preserve our autonomy and control as decision making is increasingly delegated to digital assistants?

A causal perspective on agency provides conceptual tools for navigating the above questions, as we'll explain in this sequence of blog posts. An effort will be made to minimise and explain jargon, to make the sequence accessible to researchers from a range of backgrounds.

Agency

First, with agent we mean a goal-directed system that is trying to steer the world in some particular direction(s). Examples include animals, humans, and organisations (more on agents in a subsequent post). Understanding agents is key to the above questions. Artificial agents are widely considered the primary existential threat from AGI-level technology, whether they emerge spontaneously or through deliberate design. Despite the myriad risks to our existence, highly capable agents pose a distinct danger, because many goals can be achieved more effectively by accumulating influence over the world. Whereas an asteroid moving towards earth isn't intending to harm humans and won't resist redirection, misaligned agents might be distinctly adversarial and active threats.

Second, the preservation of human agency is critical in the approaching technological transition, for both individuals and collectives. Concerns have already been raised that manipulative social media algorithms and content recommenders undermine users' ability to focus on their long-term goals. More powerful assistants could exacerbate this. And as more decision-making is delegated to AI systems, the ability of society to set its own trajectory comes into question.

Human agency can also be nurtured and protected. Helping people to help themselves is less paternalistic than directly fulfilling their desires, and fostering empowerment may be less contingent on complete alignment than direct satisfaction of individual preferences. Indeed, self-determination theory provides evidence that humans intrinsically value agency, and some human rights can be interpreted as "protections of our normative agency".

Third, artificial agents might themselves eventually constitute moral patients. A clearer understanding of agency could help us refine our moral intuitions and avoid unethical actions. Some ethical dilemmas might be possible to avoid altogether by only designing artificial systems that lack moral patienthood.

Key questions

One hope for our research is that it would build up a theory of agency. Such a theory would ideally answer questions such as: What are the possible kinds of agents that can be created, and along what dimensions can they differ?
The agents we've seen so far primarily include animals, humans, and human organisations, but the range of possible goal-directed systems is likely much larger than that.
Emergence: how are agents created? For example, when might an LLM become agentic? When does a system of agents become a "meta-agent", such as an organisation?
Disempowerment: how is agency lost? How do we preserve and nurture human agency?
What are the ethical demands posed by various types of systems and a...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introduction to Towards Causal Foundations of Safe AGI, published by Tom Everitt on June 12, 2023 on The AI Alignment Forum.

By Tom Everitt, Lewis Hammond, Rhys Ward, Ryan Carey, James Fox, Sebastian Benthall, Matt MacDermott and Shreshth Malik, representing the Causal Incentives Working Group. Thanks also to Toby Shevlane, MH Tessler, Aliya Ahmad, Zac Kenton, and Maria Loks-Thompson.

Over the next few years, society, organisations, and individuals will face a number of fundamental questions stemming from the rise of advanced AI systems:
How to make sure that advanced AI systems do what we want them to (the alignment problem)?
What makes a system safe enough to develop and deploy, and what constitutes sufficient evidence of that?
How do we preserve our autonomy and control as decision making is increasingly delegated to digital assistants?

A causal perspective on agency provides conceptual tools for navigating the above questions, as we'll explain in this sequence of blog posts. An effort will be made to minimise and explain jargon, to make the sequence accessible to researchers from a range of backgrounds.

Agency

First, with agent we mean a goal-directed system that is trying to steer the world in some particular direction(s). Examples include animals, humans, and organisations (more on agents in a subsequent post). Understanding agents is key to the above questions. Artificial agents are widely considered the primary existential threat from AGI-level technology, whether they emerge spontaneously or through deliberate design. Despite the myriad risks to our existence, highly capable agents pose a distinct danger, because many goals can be achieved more effectively by accumulating influence over the world. Whereas an asteroid moving towards earth isn't intending to harm humans and won't resist redirection, misaligned agents might be distinctly adversarial and active threats.

Second, the preservation of human agency is critical in the approaching technological transition, for both individuals and collectives. Concerns have already been raised that manipulative social media algorithms and content recommenders undermine users' ability to focus on their long-term goals. More powerful assistants could exacerbate this. And as more decision-making is delegated to AI systems, the ability of society to set its own trajectory comes into question.

Human agency can also be nurtured and protected. Helping people to help themselves is less paternalistic than directly fulfilling their desires, and fostering empowerment may be less contingent on complete alignment than direct satisfaction of individual preferences. Indeed, self-determination theory provides evidence that humans intrinsically value agency, and some human rights can be interpreted as "protections of our normative agency".

Third, artificial agents might themselves eventually constitute moral patients. A clearer understanding of agency could help us refine our moral intuitions and avoid unethical actions. Some ethical dilemmas might be possible to avoid altogether by only designing artificial systems that lack moral patienthood.

Key questions

One hope for our research is that it would build up a theory of agency. Such a theory would ideally answer questions such as: What are the possible kinds of agents that can be created, and along what dimensions can they differ?
The agents we've seen so far primarily include animals, humans, and human organisations, but the range of possible goal-directed systems is likely much larger than that.
Emergence: how are agents created? For example, when might an LLM become agentic? When does a system of agents become a "meta-agent", such as an organisation?
Disempowerment: how is agency lost? How do we preserve and nurture human agency?
What are the ethical demands posed by various types of ...
Andy opens this mega-episode by running through his top five players of the post-Tiger era. Brentley Romine then joins the show (18:50) to talk all things NCAA Golf, including the men's and women's national championships, Rose Zhang's next steps, and players to look out for in the men's tournament. For a final segment, Ryan Carey of Golden Age Auctions chats with Andy (1:00:23) about how he started and built his golf-centric auction house and some of the intriguing items and stories he has encountered along the way.
(Lander, WY) - Can they make a fifth straight championship game? That is what the Lander Lobos will be looking to do this season after winning the 2022 championship a year ago. The Lobos won last season's state tournament in the best form possible...dramatic fashion. They had a walk-off in the semi-finals to reach the chipper before winning over Cheyenne 5-3 to claim the state title. The Lobos only look to get better. Lander is led by David Rees along with multiple assistant coaches, including Shannon Stephenson and Ryan Carey, who coach the team and are looking to get better every day this season. "We are just looking forward to getting better." The Lobos are coming into the season with some new players and some returners. "We have a pretty young team. We are trying to make them better players and better people," Rees said. (Photos h/t Wyatt Burichka) This past weekend, the Lobos hosted scrimmage games against Otto and Gillette. Otto would see a 13-3 win for the Lobos, while Lander fell in both games against Gillette on Sunday. "There is no such thing as perfection, but there is nothing wrong with working towards it," Rees said about the scrimmages over the weekend. The Lobos have seen talent go on to the college level, including head coach David Rees's sons Peyton and Paxton Rees; Dominic Susenka, Ty Massey, Jace LaClair, and Keegan Stephensen are among the many other players to move on to the college level over the years. Currently, a couple of players from the Lobos are being looked at and will look to show out this season. You can listen to the full interview with David Rees below. Here is a look at the season schedule for the Lobos. All games are subject to change. (Note: Bold means games that County 10 will have coverage of this season! Otto is also scheduled to play Lander again this season. That date and time are TBD.)

May 13-14 Mother's Day Tournament in Lander (County 10 will have coverage on Sunday)
May 20 home vs Casper Crosshairs, 2 and 4 p.m.
May 26-28 at Gillette, TBD
June 3 at Rock Springs, TBD
June 10 Casper Crosshairs, TBD
June 17-18 Father's Day Tournament in Lander, TBD
June 23-25 Cheyenne Tournament, TBD
July 5-9 State in Casper
Ryan Carey is the President and Founder of Golden Age Golf Auctions, the world's leading golf collectibles and memorabilia auction house. Among the record-breaking items Ryan and his company have helped source and auction are trophies from multiple Masters winners, tournament-used items from some of the greatest golfers in history, and of course, the record-setting sale – auctioning Tiger Woods' irons from the "Tiger Slam", often referred to as the greatest stretch of golf in history, which sold for $5.15 million in the summer of 2022. In this episode, we discuss the rise and global expansion of Golden Age Auctions, as well as how Ryan first began his partnership with the Ouimet Fund. Over the last six years, Ryan's work on behalf of The Fund has helped raise more than $1 million for need-based Ouimet Scholarships, something Ryan is deeply proud of. Through this work, Ryan has helped keep the Ouimet legacy in the forefront by spreading the story of Francis and Eddie to new supporters each year.
Kate Carey and Ryan Carey are at the hospital waiting as they believe that Annie Cook is about to have a baby, but it seems the duo are out of the loop just a little bit! We are now on YouTube! Don't forget to Subscribe and enjoy the show there! Also new is Volume 49 Deluxe Edition; get yours today! Also, buy your copy of The Tales of Grasmere Valley today! Volumes 1-5 Volume 6-10 Volume 11-15 Volumes 16-20 Volumes 21-25 Volume 26-30 Volume 31-35 Volume 36-40 Volume 41-45 Acoustic/Folk Instrumental by Hyde - Free Instrumentals https://soundcloud.com/davidhydemusic Creative Commons — Attribution 3.0 Unported — CC BY 3.0 Free Download / Stream: https://bit.ly/acoustic-folk-instrume... Music promoted by Audio Library https://youtu.be/YKdXVnaHfo8 "Scheming Weasel (slower version)" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/
Today is part 4 of the Window Replacement series, and this time we're talking about the big three window manufacturers. Ryan Carey first talks about Andersen, a Minnesota company and one of the most popular brands. He discusses the types of windows that Andersen makes: the 400 and 100 series, the regular line, and the Renewal products. They then talk about Andersen's direct competitor, Marvin, and why it uses exclusive contractors to install its windows. He also discusses Marvin's window options across the flagship Infinity line and the Integrity, Elevate, and Essentials series. Then they talk about Pella, an innovative company. He highlights Pella's testing process, notes that only Pella makes a true vinyl-and-fiberglass window, and points out its lifetime warranty. Ryan discusses the differences between the brands' products, the materials used, and their advantages. He also shares which brand offers the best value for money and what to expect from the more affordable options. Visit getmy3quotes.com or send an e-mail to ryan@getmy3quotes.com.
Another week with Ryan Carey! In today's show, we're going to talk all about window replacement. Ryan discusses the two main types of window replacements and the differences between them: the insert (or retrofit) window and the full frame. He also explains the window replacement process and which type is best for each window or type of house. They talk about replacing windows to address water intrusion and energy efficiency. Ryan also explains sash and glass replacements. Then he discusses the acceptable price range. He also talks about the types of vinyl windows and products from Lindsay Windows, Hayfield Windows, Alside Windows, and more. Visit Ryan Carey at getmy3quotes.com. Send your podcast questions or suggestions to podcast@structuretalk.com.
Ryan Carey of My 3 Quotes joins us for another week to discuss which type of window is the best. Tessa asks about low-emissivity (low-E) windows and the layers of coating. Ryan explains their effect on houses and on the light and heat that come into the house, especially in the winter. He discusses the cost of double- and triple-pane windows as well as gas fills like argon and krypton. They highlight that improving the windows helps increase energy efficiency. Ryan discusses the pros and cons of wood, vinyl, fiberglass, and composite windows and their warranties. He further talks about window brands like Renewal by Andersen and Marvin Infinity, and shares why he personally chose a vinyl window for his house. Ryan can be reached through e-mail at ryan@getmy3quotes.com. Visit their website at getmy3quotes.com.
Ryan Carey discusses the sales process of window companies and the mind-game tactics they use. Reuben and Ryan also reminisce about Department Supervisor Training at Home Depot, where they learned how to deal with different types of customers. Reuben asks about the important things to understand and look for when choosing a window. Ryan discusses the U-value and EnergyStar rating, and highlights that the National Fenestration Rating Council rates windows. Ryan Carey can be reached at ryan@getmy3quotes.com. Send your podcast questions to podcast@structuretech.com.
Tad is joined by Emily and Ryan for the best in Small Press Comics 2022. Consider becoming a patron! Support the show
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Entrepreneurship ETG Might Be Better Than 80k Thought, published by Ben West on December 29, 2022 on The Effective Altruism Forum.

Summary: In 2014, Ryan Carey estimated that YCombinator-backed startup founders averaged $2.5M/year. I repeat his analysis, and find that this number is now substantially higher: $3.8-9.9M/year. When this amount is discounted by 12%/year (average S&P 500 returns), it falls to $1.9-4.3M/y, and with a 20%/year discount (a number I've heard for returns to community building) it falls to $1.1-2.9M/y. Note that these numbers include illiquid (pre-exit) valuations.

Major Results

Average founder income under different discount rates:

Discount        All companies   Excluding post-2019
0% discount     $3.77M/y        $8.17M/y
12% discount    $1.98M/y        $4.28M/y
20% discount    $1.34M/y        $2.90M/y

"All companies" includes in the denominator every company incubated by YCombinator; "excluding post-2019" excludes companies incubated after 2019 (which presumably are less likely to make it to the list of top YCombinator companies by valuation, and therefore arguably should be excluded from consideration).

Weighted per year:

Discount        All companies   Excluding post-2019
0% discount     $4.56M/y        $9.87M/y
12% discount    $1.84M/y        $3.98M/y
20% discount    $1.06M/y        $2.30M/y

This table is the same as the above except that it counts, e.g., a company which has been around for 4 years twice as much as one which has been around for 2 years. I.e., this table is the expected value of a founder-year, whereas the previous table is the expected annual value of founding a company. I'm not sure which is more intuitive.

Commentary: Background: See this 80k article for the basic case behind considering entrepreneurship for earning to give reasons. These numbers seem fairly high, and may indicate that earning to give through entrepreneurship is a good path for those who have solid personal fit (with the usual caveats about only pursuing ethical startup careers; see also my analysis of YCombinator fraud rates). With a 20% annual discount the numbers are not that far off from what I've heard as higher-end estimates of the value of direct work, and I expect that there is a fairly strong correlation between being at the higher end of entrepreneurship returns and being at the higher end of direct work, so this doesn't seem like that strong of an argument for entrepreneurship over direct work. My impression is that these numbers are roughly similar to average quantitative finance income, so I'm not sure there's much of an argument for one over the other based on this data (from an income perspective). Note that the vast majority of founders who apply to YCombinator are rejected, and this is not considered in these estimates.

Appendix A: Methods and Data. Note: if you know Python, reading the Jupyter notebook might be easier than following this document. Methods: I used this list of YCombinator top companies and tried to find public information about their most recent valuation. Importantly, note that this includes pre-exit valuations. For publicly traded companies, I used their market capitalization at the time of writing (rather than when they IPO'd). I used an estimate of 2.3 people per founding team and average equity ownership of 35% from the original 80k article. These numbers could probably use an update. The discount was calculated using a straightforward geometric discount, i.e.
receiving $N in Y years with discount rate d has a net present value of (1-d)^Y × N. I assume that everything not on that list is valued at zero. This is obviously an underestimate, but I think it's not too far off: I estimate the value of the company at the bottom of the list (Karbon Card) at $60M. If the 1,788 companies started after 2019 that are not in this list were all valued at $60M, this would increase the total valuation by $107B = 19.5%. This is a very conse...
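For readers following along, the geometric discount the author describes is a one-liner. Here is a minimal sketch in Python; the helper name and the example figures are illustrative assumptions, and the real analysis lives in the author's linked Jupyter notebook.

```python
# Minimal sketch of the geometric discounting described above.
# `net_present_value` is an illustrative helper, not taken from the author's notebook.

def net_present_value(amount: float, years: float, discount_rate: float) -> float:
    """Value today of receiving `amount` in `years` years at an annual discount rate."""
    return amount * (1 - discount_rate) ** years

# Hypothetical example: $3.77M of founder income realized 5 years out,
# discounted at 12%/year, is worth roughly $1.99M today.
print(f"${net_present_value(3.77e6, years=5, discount_rate=0.12) / 1e6:.2f}M")
```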
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX, published by Tyrone-Jay Barugh on November 12, 2022 on The Effective Altruism Forum. I think that key EA orgs (perhaps collectively) like the Center for Effective Altruism/Effective Ventures, Open Philanthropy, and Rethink Priorities should consider engaging an independent investigator (with no connection to EA) to try to identify whether key figures in those organisations knew (or can reasonably be inferred to have known, based on other things they knew) about the (likely) fraud at FTX. The investigator should also be contactable (probably confidentially?) by members of the community and others who might have relevant information. Typically a lawyer might be engaged to carry out the investigation, particularly because of professional obligations in relation to confidentiality (subject to the terms of reference of the investigation) and natural justice. But other professionals also conduct independent investigations, and there is no in principle reason why a lawyer needs to lead this work. My sense is that this should happen very promptly. If anyone did know about the (likely) fraud at FTX, then delay potentially increases the risk that any such person hides evidence or spreads an alternative account that vindicates them. I'm torn about whether to post this, as it may well be something that leadership (or lawyers) in the key EA orgs are already thinking about, and posting this prematurely might result in those orgs being pressured to launch an investigation hastily with bad terms of reference. On the other hand, I've had the concern that there is no whistleblower protection in EA for some time (raised in my March 2022 post on legal needs within EA), and others (e.g. Carla Zoe C) have made this point earlier still. I am not posting this because I have a strong belief that anyone in a key EA org did know - I have no information in this regard beyond vague speculation I have seen on Twitter. If you have a better suggestion, I would appreciate you sharing it (even if anonymously). Epistemic status: pretty uncertain, slightly anxious this will make the situation worse, but on balance think worth raising. Relevant disclosure: I received a regrant from the FTX Future Fund to investigate the legal needs of effective altruist organisations. Edit: I want to clarify that I don't think that any particular person knew. I still trust all the same community figures I trusted one week ago, other than folks in the FTX business. For each 'High Profile EA' I can think of, I would be very surprised if that person in particular knew. But even if we think there is only a 0.1% chance that any of the most influential, say, 100 EAs knew, then the chance that none of them knew is 0.999^100, which is about 90.4% (assuming we naively treat those as independent events). If we care about the top 1000 most influential EAs, then we could get that 90.4% chance with just a 0.01% chance of failure. Edit: I think Ryan Carey's comment is further in the right direction than this post (subject to my view that an independent investigation should stick to fact-finding rather than making philosophical/moral calls for EA) plus I've also had other people contact me spitballing ideas that seem sensible. 
I don't know what the terms of reference of an investigation would be, but it does seem like simply answering "did anybody know" might be the wrong approach. If you have further suggestions for the sorts of things that should be considered, it might be worth dropping those into the comments. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
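As an aside, the headline probability in the post above is reproducible in a couple of lines. A quick sketch, keeping the author's stated naive independence assumption (the function name is illustrative):

```python
# Probability that none of n people knew, treating each as an independent
# event with individual probability p -- the post's naive assumption.
def probability_none_knew(n: int, p: float) -> float:
    return (1 - p) ** n

print(f"{probability_none_knew(100, 0.001):.1%}")    # top 100 EAs at 0.1% each -> ~90.5% (the post's ~90.4%)
print(f"{probability_none_knew(1000, 0.0001):.1%}")  # top 1000 EAs at 0.01% each -> ~90.5%
```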
Ryan Carey wasn't looking to become the world's foremost golf memorabilia auctioneer. A lawyer by trade, Carey started Green Jacket Auctions as a side gig to feed his passion for the game's history. 16 years and one very expensive set of irons later, Carey and his team are riding high and traveling the world in search of the next great golf item. Carey joins Tom Coyne live in the Broken Tee Society Discord server to offer a glimpse into the career that's taken him everywhere from Tom Watson's home bar to Perry Maxwell's basement. Plus, he details the record-breaking sale of the “Tiger Slam” irons, and delivers an otherworldly prediction when asked about the Holy Grail of golf collecting.
Tyler and Tad are joined by Ryan Carey to talk about the 2022 Insert Name Comics and Zines Mini-Fest! Consider becoming a patron!
In this episode of The Sharing Experiences With Concussions/TBI podcast, Simon Kardynal welcomes May Machoun, Ryan Carey, and Blair Hennessy to talk about brain injuries and head trauma in the military and veteran community. Listen in as these veterans shed light on their experiences with concussion and TBI, the systemic healthcare problems veterans are facing in Canada, and how we can continue advocating for better treatment of veterans with TBI and improving TBI education throughout the military. Whether you're a veteran, a medical professional, or a family member of a concussion survivor, you have the power to make a positive change, so let's do this together!
The man who brokered last weekend's $5 million Tiger Woods irons sale discusses, yes, the $5 million Tiger Woods irons sale, as well as the potential value of memorabilia from Arnold Palmer, Rory McIlroy, and Charlie Woods.
On April 9, Golden Age Auctions sold a set of match-used Tiger Woods irons for over $5 million. Golden Age's Founder and President Ryan Carey gave us the full story behind the largest golf memorabilia sale in history.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A tough career decision, published by PabloAMC on April 9, 2022 on The Effective Altruism Forum. In this post, I summarize a tough career decision I have had to make over the last few weeks. Setting the stage: I am a few months from finishing my Ph.D. in quantum algorithms. During these 4 years, I have become quite involved in Effective Altruism: I attended two EA Globals, facilitated a couple of virtual intro fellowships, and helped organize the EA group in Madrid. Given my background, I have also felt closer to AI Safety than any other cause area. As such, I have also been involved in the AI Safety community, by participating in two AI Safety Camps and as a facilitator in some intro fellowships. I even did a summer internship in AI Safety with José Hernández Orallo last summer, which led to a rather lucky AAAI publication. My Ph.D. has also gone well. While it started somewhat hesitantly and I was unable to get anything published for the first two years, at that point I got my first two publications, and over the last two years I have been able to do well. It is not a superstar Ph.D., but I believe I have learned enough to make contributions to the field that actually get used, which is harder than it looks. In fact, I feel happy that, thanks to my last article, one rather serious quantum startup contacted me to collaborate, and this led to another quite high-quality paper. The options: Since I am finishing my Ph.D., I had to plan my next step. The first obvious choice was to apply for funding to do AI Safety. I cold-emailed Victor Veitch, whom I found through the super useful Future of Life AI Safety Community, and he was happy to take me as long as I could work more or less independently. The reason I opted for applying with Victor was that my research style is more about knowing the tools I am using well: not to the level of a pure mathematician, but to the level where techniques are known and can be used. Additionally, I think causality is cool, and being able to apply it in large language models is rather remarkable. I am also a big fan of Ryan Carey, who works in causality and is one of the people in the community who has helped me the most. I am really grateful to him. Apart from the Future of Life postdoc program, I also applied to the EA Long Term Future fund and Open Philanthropy with this proposal. Of these, the EA Long Term Future fund accepted funding me, most likely on the basis of a career change, while FLI declined based on the proposal and probably an interview in which, despite having prepared, I was not able to explain well why I think this perspective could be useful. This was a bit of a disappointing result, to be honest. Open Philanthropy, on the other hand, is behind schedule, and I don't know their answer yet. The alternative was to pursue, at least for now, a more standard research topic. I applied to IBM, Google, Amazon, and three startups: Zapata, PsiQuantum, and Xanadu. With the latter, I had already been working, so it was not really an application. I never heard back from any of the big companies, but got offers from the three startups. To be fair, I also applied to a couple of non-EA AI postdocs with the hope of getting them, but they were a very long shot.
For a bit of context, though: PsiQuantum is a very serious contender in building a photonic quantum computer with error correction, and has really deep pockets, while Xanadu is probably a bit behind them, but it's also quite good and has a bit more focus on ML. The situation and conditioning factors: Perhaps the main conditioning factor in all of this was the fact that my girlfriend is really a very important part of my life, and responsible for my happiness. A friend of mine called this the unsolvable two-body problem
Golden Age Auctions focuses on Golf Memorabilia and has an insane auction happening as the Masters kicks off. Ryan talks about his thoughts on why Golf Memorabilia is undervalued, why ticket stubs are legit, and much more.
Whether we like it or not, we all spend a lot of time on camera nowadays. And, if you find yourself not liking it, that's where today's guest can help! Ryan Carey is the founder of BetterOn, which provides coaching for individuals to help them become more comfortable on screen. In this episode, Jason and Ryan talk about some of those tips that can help create a comfortable video podcast.The Earfluence Podcast is a production of Earfluence Media and is hosted by Jason Gillikin and Cee Cee Huffman.
Ryan is an integral part of the Concussion Legacy Foundation Team, here in Canada. A CAF veteran, former CFL football player and part-time rock star, Ryan is out on a new mission to address the hidden injury most servicemen and women are unaware of - concussions. Find out how you can pledge your brain to the Concussion Legacy Foundation here. Get Your Copy of my book, The Nimble Warrior, here
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I prefer "Effective Altruism" to "Global Priorities", published by AllAmericanBreakfast on the Effective Altruism Forum. Edit: Jonas responds in comments that he was not intending to argue for a name change, but more a change of emphasis. I respond explaining why I took "name change" as a reasonable interpretation, but you should read his OP and decide for yourself.

In comments on Jonas Vollmer's post, Ryan Carey makes a good argument that we should change the name of our movement, or at least change its emphasis; among other options, "Global Priorities" was suggested. Tractability concerns aside, I have six key arguments for why this specific name change would not be the right move. These are my personal intuitions on the implications of names. I don't consider this a strong form of evidence. So I will keep this brief.

"Global Priorities" sounds more arrogant than "Effective Altruism." Yes, EA can sound arrogant to some people. But many people wrestle with the question of how to make real positive change, and it makes sense to build a community to support that. Saying you're part of a Global Priorities movement sounds like you're wanting to impose your views on what those priorities should be. Don't trust me, though. Run a poll, maybe on Amazon's Mechanical Turk. Give a one-paragraph description of the EA movement's mission and principles, but randomly title it the "Global Priorities" or "Effective Altruism" movement. Randomly show one or the other to respondents. Ask them which sounds more arrogant, and which they'd be more likely to support or join.

"Global Priorities" doesn't convey the moral or political basis for that prioritization. Whose priorities? The national interests of the most powerful nations? Are we advocating for world government? For a command economy? To give a voice to less powerful groups? What makes something a priority? Are those priorities supposed to be good? The meaning of the name is more open to interpretation.

"Global Priorities" implies a focus on institutions. Altruism is clearly something that individuals can do. But most individuals don't have a say in what our global priorities should be. I can be an effective altruist while working in a purely technical role. But it's not clear to me that I can be involved in a movement for global priorities doing that sort of work. GP sounds like it's all about governance.

"Global Priorities" doesn't necessarily imply a need for change. GP could easily just be about gathering inputs from a bunch of powerful interests about what they consider their priorities to be, and then coordinating to achieve them. Those priorities don't necessarily have to be transcendently important or good from a consequentialist standpoint. If I had to guess, many major powers would currently consider the free flow of oil to be a greater global priority than minimizing animal cruelty, and that sounds like a very different sort of movement from the one we've got.

"Global Priorities" doesn't necessarily imply an emphasis on neglected issues. Some of our causes may only need to comprise a tiny fraction of global spending or work hours to be sufficient. Perhaps the world only needs to sink a total of $20 billion/year into biosecurity to do a good job. That would be about half of one percent of the US government's 2019 tax revenue.
But if you asked somebody to look at a budget and rank causes by the importance they are assigned, it would be reasonable to rank them by the amount of budget that's been allocated. And also to spend the most time arguing over the biggest budget allocations. Spend 1/200th of your time arguing over 1/200th of the budget. We're trying to create a movement that inverts this. We deliberately try to spend the majority of our time arguing over the issues that get a very s...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing about my job: Research Fellow, FHI, published by rgb on the AI Alignment Forum. Following Aaron Gertler's prompt, I am writing about my job as a researcher at the Future of Humanity Institute: the path that led to me applying for it, the application itself, and what it's like to do the job. See also the 80,000 Hours guides on academic research and philosophy academia. The basics Research fellow, Future of Humanity Institute (FHI) at Oxford University October 1, 2020 - present When I started this position, I was still working on a PhD in philosophy at New York University. I am still finishing up my dissertation, while working for FHI full-time (here's my FHI page). Background and path to applying I graduated from Harvard in 2011 with a degree in Social Studies (comparable to the UK's PPE). I did a masters in philosophy at Brandeis University and started a PhD at NYU in fall 2015. EA Global 2016 My path to FHI can be directly traced back to my desire, in the summer of 2016, to get my travel to EA Global reimbursed. I got interested in EA around 2015 and took the Giving What We Can Pledge in summer 2016. Flushed with enthusiasm, I looked into going to EA Global 2016, which was in Berkeley. Michelle Hutchinson organized an academic poster session for that EAG; somehow I was on a list of people who got an email encouraging me to submit a poster. It occurred to me that NYU's Center for Mind, Brain, and Consciousness reimburses the travel expenses for PhD students who are giving talks and presentations in philosophy of mind. Driven in no small measure by this pecuniary motive, I hastily threw together a poster presentation at the intersection of EA and philosophy of mind. The most important thing about the poster is simply that it got me to the conference.[1] That's where I first met Michelle Hutchinson; I surmise that meeting Michelle, and being a presenter, got me on a list of EA academics. GPI As a result (I think), about a year later I was invited to be part of the Global Priority Institute's first group of summer fellows in the summer of 2018. For my project, I worked on applying Lara Buchak's work on risk aversion to longtermism and cause prioritization.[2] That summer I met lots of people at FHI, who we shared an office and a kitchen with - most notably for the purposes of this post, Katja Grace and Ryan Carey. AI Impacts Meeting Katja Grace in summer 2018 led to me doing research for AI Impacts in the summer of 2019. Also in summer 2019, Ryan Carey messaged me to encourage me to apply for the FHI Research Fellow role. All told, that's all three of my EA gigs - GPI, AI Impacts, FHI - that stemmed from my decision to go to EA Global 2016 and my cheeky quest to get it reimbursed.[3] PhD research Throughout this time, I was doing my PhD research. It was during my PhD that I wrote a paper on fairness measures in machine learning that I would eventually use as my writing sample for FHI. My PhD research also gave me enough familiarity with AI to work on AI-related topics at AI Impacts and eventually FHI.[4] I also ran a reading group on philosophy and AI. The application Materials and process The application required, if I recall correctly: cover letter, CV/resume, research proposal, writing sample, and two references. The process involved a 2-hour (maybe 3?) timed work test, and two rounds of interviews. 
My research proposal, inspired by issues I had been thinking about at AI Impacts, outlined ways to get evidence for or against the Prosaic AGI thesis. In an interview, the selection committee made it clear that they were not especially excited about this research direction. I also discussed my work in AI ethics. My references were Katja Grace and my dissertation supervisor, David Chalmers. My writing sample was the aforementioned paper on fairness in machine learn...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Complete archive of the Felicifia forum , published by Louis_Francini on the Effective Altruism Forum. Prior to the existence of a unified effective altruism movement, a handful of proto-EA communities and organizations were already aiming towards similar ends. These groups included the web forum LessWrong and the charity evaluator GiveWell. One lesser-known community that played an important role in the history of the EA movement is the Felicifia utilitarian forum. The name "Felicifia," a reference to Jeremy Bentham's felicific calculus, was originally used as the title of Seth Baum's personal blog which he started in September 2006. In December 2006, Baum moved to Felicifia.com, which became a community blog/forum. A minority of the posts from this site are viewable on the Wayback Machine and archive.is. (Brian Tomasik is slowly working on producing a better archive at oldfelicifia.org.) The final iteration of Felicifia, and the one I'm concerned with here, launched in 2008 as a phpBB forum. Unfortunately, for years the site has been glitchy, and for the past several months it has been completely inaccessible. Thus I thought it would be valuable to produce an archive that is more easily browsable than the Wayback Machine. Hence: felicifia.github.io The site featured some of the earliest discussions of certain cause areas, such as wild animal suffering. Common EA concepts such as the meat eater argument and s-risks were developed and refined here. Of course, the forum also delved into the more theoretical aspects of utilitarian ethics. A few of the many prominent EAs who participated in the forum include Brian Tomasik, Peter Hurford, Ryan Carey, Pablo Stafforini, Carl Shulman, and Michael Dickens. While not all of the threads contained detailed discussion, some of the content is quite high-quality. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My current framework for thinking about AGI timelines, published by Alex Zhu on the AI Alignment Forum. At the beginning of 2017, someone I deeply trusted said they thought AGI would come in 10 years, with 50% probability. I didn't take their opinion at face value, especially since so many experts seemed confident that AGI was decades away. But the possibility of imminent apocalypse seemed plausible enough and important enough that I decided to prioritize investigating AGI timelines over trying to strike gold. I left the VC-backed startup I'd cofounded, and went around talking to every smart and sensible person I could find who seemed to have opinions about when humanity would develop AGI. My biggest takeaways after 3 years might be disappointing -- I don't think the considerations currently available to us point to any decisive conclusion one way or another, and I don't think anybody really knows when AGI is coming. At the very least, the fields of knowledge that I think bear on AGI forecasting (including deep learning, predictive coding, and comparative neuroanatomy) are disparate, and I don't know of any careful and measured thinkers with all the relevant expertise. That being said, I did manage to identify a handful of background variables that consistently play significant roles in informing people's intuitive estimates of when we'll get to AGI. In other words, people would often tell me that their estimates of AGI timelines would significantly change if their views on one of these background variables changed. I've put together a framework for understanding AGI timelines based on these background variables. Among all the frameworks for AGI timelines I've encountered, it's the framework that most comprehensively enumerates crucial considerations for AGI timelines, and it's the framework that best explains how smart and sensible people might arrive at vastly different views on AGI timelines. Over the course of the next few weeks, I'll publish a series of posts about these background variables and some considerations that shed light on what their values are. I'll conclude by describing my framework for how they come together to explain various overall viewpoints on AGI timelines, depending on different prior assumptions on the values of these variables. By trade, I'm a math competition junkie, an entrepreneur, and a hippie. I am not an expert on any of the topics I'll be writing about -- my analyses will not be comprehensive, and they might contain mistakes. I'm sharing them with you anyway in the hopes that you might contribute your own expertise, correct for my epistemic shortcomings, and perhaps find them interesting. 
I'd like to thank Paul Christiano, Jessica Taylor, Carl Shulman, Anna Salamon, Katja Grace, Tegan McCaslin, Eric Drexler, Vlad Firiou, Janos Kramar, Victoria Krakovna, Jan Leike, Richard Ngo, Rohin Shah, Jacob Steinhardt, David Dalrymple, Catherine Olsson, Jelena Luketina, Alex Ray, Jack Gallagher, Ben Hoffman, Tsvi BT, Sam Eisenstat, Matthew Graves, Ryan Carey, Gary Basin, Eliana Lorch, Anand Srinivasan, Michael Webb, Ashwin Sah, Yi Sun, Mark Sellke, Alex Gunning, Paul Kreiner, David Girardo, Danit Gal, Oliver Habryka, Sarah Constantin, Alex Flint, Stag Lynn, Andis Draguns, Tristan Hume, Holden Lee, David Dohan, and Daniel Kang for enlightening conversations about AGI timelines, and I'd like to apologize to anyone whose name I ought to have included, but forgot to include. Table of contents As I post over the coming weeks, I'll update this table of contents with links to the posts, and I might update some of the titles and descriptions. How special are human brains among animal brains? Humans can perform intellectual feats that appear qualitatively different from those of other animals, but are our brains really doing anything so different? How u...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: AI alignment prize round 4 winners , published by Vladimir Slepnev on the AI Alignment Forum. We (Zvi Mowshowitz and Vladimir Slepnev) are happy to announce the results of the fourth round of the AI Alignment Prize, funded by Paul Christiano. From July 15 to December 31, 2018 we received 10 entries, and are awarding four prizes for a total of $20,000. The winners We are awarding two first prizes of $7,500 each. One of them goes to Alexander Turner for Penalizing Impact via Attainable Utility Preservation; the other goes to Abram Demski and Scott Garrabrant for the Embedded Agency sequence. We are also awarding two second prizes of $2,500 each: to Ryan Carey for Addressing three problems with counterfactual corrigibility, and to Wei Dai for Three AI Safety Related Ideas and Two Neglected Problems in Human-AI Safety. We will contact each winner by email to arrange transfer of money. Many thanks to everyone else who participated! Moving on This concludes the AI Alignment Prize for now. It has stimulated a lot of good work during its year-long run, but participation has been slowing down from round to round, and we don't think it's worth continuing in its current form. Once again, we'd like to thank everyone who sent us articles! And special thanks to Ben and Oliver from the LW2.0 team for their enthusiasm and help. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Progress on Causal Influence Diagrams, published by Tom Everitt on the AI Alignment Forum. By Tom Everitt, Ryan Carey, Lewis Hammond, James Fox, Eric Langlois, and Shane Legg. Crossposted from DeepMind Safety Research.

About 2 years ago, we released the first few papers on understanding agent incentives using causal influence diagrams. This blog post will summarize progress made since then.

What are causal influence diagrams? A key problem in AI alignment is understanding agent incentives. Concerns have been raised that agents may be incentivized to avoid correction, manipulate users, or inappropriately influence their learning. This is particularly worrying as training schemes often shape incentives in subtle and surprising ways. For these reasons, we're developing a formal theory of incentives based on causal influence diagrams (CIDs).

Here is an example of a CID for a one-step Markov decision process (MDP). The random variable S1 represents the state at time 1, A1 represents the agent's action, S2 the state at time 2, and R2 the agent's reward. The action A1 is modeled with a decision node (square) and the reward R2 is modeled as a utility node (diamond), while the states are normal chance nodes (rounded edges). Causal links specify that S1 and A1 influence S2, and that S2 determines R2. The information link S1 → A1 specifies that the agent knows the initial state S1 when choosing its action A1. In general, random variables can be chosen to represent agent decision points, objectives, and other relevant aspects of the environment.

In short, a CID specifies: agent decisions, agent objectives, causal relationships in the environment, and agent information constraints. These pieces of information are often essential when trying to figure out an agent's incentives: how an objective can be achieved depends on how it is causally related to other (influenceable) aspects in the environment, and an agent's optimization is constrained by what information it has access to. In many cases, the qualitative judgements expressed by a (non-parameterized) CID suffice to infer important aspects of incentives, with minimal assumptions about implementation details. Conversely, it has been shown that it is necessary to know the causal relationships in the environment to infer incentives, so it's often impossible to infer incentives with less information than is expressed by a CID. This makes CIDs natural representations for many types of incentive analysis. Other advantages of CIDs are that they build on well-researched topics like causality and influence diagrams, which allows us to leverage the deep thinking that's already been done in these fields.

Incentive Concepts. Having a unified language for objectives and training setups enables us to develop generally applicable concepts and results. We define four such concepts in Agent Incentives: A Causal Perspective (AAAI-21):
Value of information: what does the agent want to know before making a decision?
Response incentive: what changes in the environment do optimal agents respond to?
Value of control: what does the agent want to control?
Instrumental control incentive: what is the agent both interested in and able to control?

For example, in the one-step MDP above: For S1, an optimal agent would act differently (i.e.
respond) if S1 changed, and would value knowing and controlling S1, but it cannot influence S1 with its action. So S1 has value of information, response incentive, and value of control, but not an instrumental control incentive. For S2 and R2, an optimal agent could not respond to changes, nor know them before choosing its action, so these have neither value of information nor a response incentive. But the agent would value controlling them, and is able to influence them, so S2 and R2 have value of control and instrumental control ince...
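The one-step MDP diagram that the post describes can also be written down as a small directed graph. The sketch below is an illustrative reconstruction using networkx, not the authors' own tooling (their group's pycid library offers a fuller implementation of CIDs):

```python
# Illustrative reconstruction of the one-step MDP CID described above.
import networkx as nx

cid = nx.DiGraph()
cid.add_nodes_from([
    ("S1", {"kind": "chance"}),    # state at time 1 (rounded node)
    ("A1", {"kind": "decision"}),  # agent's action (square node)
    ("S2", {"kind": "chance"}),    # state at time 2
    ("R2", {"kind": "utility"}),   # agent's reward (diamond node)
])
cid.add_edges_from([
    ("S1", "A1"),  # information link: the agent observes S1 before acting
    ("S1", "S2"),  # causal link
    ("A1", "S2"),  # causal link
    ("S2", "R2"),  # causal link
])

# One condition from the post: a variable can only carry an instrumental
# control incentive if the agent's decision can causally influence it.
for node in ["S1", "S2", "R2"]:
    print(node, "influenceable by A1:", nx.has_path(cid, "A1", node))
# S1 False (hence no instrumental control incentive), S2 True, R2 True
```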
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-Obstruction: A Simple Concept Motivating Corrigibility, published by Alex Turner on the AI Alignment Forum. Thanks to Mathias Bonde, Tiffany Cai, Ryan Carey, Michael Cohen, Joe Collman, Andrew Critch, Abram Demski, Michael Dennis, Thomas Gilbert, Matthew Graves, Koen Holtman, Evan Hubinger, Victoria Krakovna, Amanda Ngo, Rohin Shah, Adam Shimi, Logan Smith, and Mark Xu for their thoughts. Main claim: corrigibility's benefits can be mathematically represented as a counterfactual form of alignment. Overview: I'm going to talk about a unified mathematical frame I have for understanding corrigibility's benefits, what it “is”, and what it isn't. This frame is precisely understood by graphing the human overseer's ability to achieve various goals (their attainable utility (AU) landscape). I argue that corrigibility's benefits are secretly a form of counterfactual alignment (alignment with a set of goals the human may want to pursue). A counterfactually aligned agent doesn't have to let us literally correct it. Rather, this frame theoretically motivates why we might want corrigibility anyways. This frame also motivates other AI alignment subproblems, such as intent alignment, mild optimization, and low impact. Nomenclature Corrigibility goes by a lot of concepts: “not incentivized to stop us from shutting it off”, “wants to account for its own flaws”, “doesn't take away much power from us”, etc. Named by Robert Miles, the word ‘corrigibility' means “able to be corrected [by humans]." I'm going to argue that these are correlates of a key thing we plausibly actually want from the agent design, which seems conceptually simple. In this post, I take the following common-language definitions: Corrigibility: the AI literally lets us correct it (modify its policy), and it doesn't manipulate us either. Without both of these conditions, the AI's behavior isn't sufficiently constrained for the concept to be useful. Being able to correct it is small comfort if it manipulates us into making the modifications it wants. An AI which is only non-manipulative doesn't have to give us the chance to correct it or shut it down. Impact alignment: the AI's actual impact is aligned with what we want. Deploying the AI actually makes good things happen. Intent alignment: the AI makes an honest effort to figure out what we want and to make good things happen. I think that these definitions follow what their words mean, and that the alignment community should use these (or other clear groundings) in general. Two of the more important concepts in the field (alignment and corrigibility) shouldn't have ambiguous and varied meanings. If the above definitions are unsatisfactory, I think we should settle upon better ones as soon as possible. If that would be premature due to confusion about the alignment problem, we should define as much as we can now and explicitly note what we're still confused about. We certainly shouldn't keep using 2+ definitions for both alignment and corrigibility. Some people have even stopped using ‘corrigibility' to refer to corrigibility! I think it would be better for us to define the behavioral criterion (e.g. as I defined 'corrigibility'), and then define mechanistic ways of getting that criterion (e.g. intent corrigibility). We can have lots of concepts, but they should each have different names. 
Evan Hubinger recently wrote a great FAQ on inner alignment terminology. We won't be talking about inner/outer alignment today, but I intend for my usage of "impact alignment" to roughly map onto his "alignment", and "intent alignment" to map onto his usage of "intent alignment." Similarly, my usage of "impact/intent alignment" directly aligns with the definitions from Andrew Critch's recent post, Some AI research areas and their relevance to existential safety. A Simple Concept Mo...
Episode Breakdown:
1st Half: GW11 Data Dump, League Results & Tactics
2nd Half: Twitter Q&As
Pub Quiz: Raas vs Ryan
Sports, much like the military, are often heralded for creating leaders. And this belief makes sense, because stressful moments tend to bring out the leader in us, and afterwards, during the inevitable debrief or watching of the game tape, we can take the time to reflect on what worked and what was an epic fail. But no matter what, learning has happened: we learned about our leadership, we learned about ourselves, and also, we learned about our passions. And so, in this special Remembrance Day episode, we're going to talk about learning, about ourselves, and about our passions. To do this, I've brought back Ryan Carey, the Director of Military Engagement for the Concussion Legacy Foundation, and Tim Fleiszer, the Executive Director, also from the Concussion Legacy Foundation, and we will talk about how they were able to turn their passions into their careers. Reviews are the best way for us to know what we are doing right, what we are doing wrong, and what we should talk about in the future, so please click on the links below and let us know if this episode was helpful. Relevant Links: 1. Concussion Legacy Foundation website: https://www.concussionfoundation.ca 2. Project Enlist web page: https://www.projectenlist.ca 3. Health Support Line: https://www.projectenlist.ca/support-line
Your leadership journey is yours alone. Your experiences, your triumphs, your failures, your hard times and your easy times: they've all been shaping you and how you act as a leader. The trick is to be accepting of your failures and your wins, to see how each of your experiences has shaped you, and then to ensure your evolution as a leader is in tune with the kind of leader you want to be. In this special Remembrance Day episode, we are speaking with Ryan Carey, who will discuss his leadership journey from his youth, to professional football player, to Canadian Armed Forces veteran, to advocate for increasing awareness surrounding concussions across all fields. Ryan's story is meant to remind us that each of our leadership journeys is unique and special, and once we create a path that follows our personal values, we can, and WILL, make the differences that we want for ourselves and the world. Reviews are the best way for us to know what we are doing right, what we are doing wrong, and what we should talk about in the future, so please click on the links below and let us know if this episode was helpful.
Ryan Carey from My3quotes.com joins the show to talk about different window types and manufacturers. His company provides customers with an unbiased review of different products from different stores and contractors. Reuben recalls one of Ryan's most popular blog posts, comparing Andersen vs. Pella vs. Marvin. Then, Ryan talks about the pros and cons of their window products. He also discusses his preferred materials and the differences between vinyl, wood, fiberglass, and aluminum. Moreover, he highlights the cost and price differences between these brands and materials. Tessa and Bill share tips on how to reduce moisture and condensation inside the house, ventilation strategies, and window maintenance. Ryan also talks about foam and metal spacers, window U-factors, deterioration, and warranties. Visit Ryan Carey and his team at www.getmy3quotes.com.
Today Ryan Carey, the owner of My 3 Quotes, joins the show to talk about getting the best provider for your building needs. Ryan shares how he provides customers an unbiased review of different products, with 3 quotes from 3 different contractors. They fill the space between homeowner and contractor by guiding the homeowner toward an informed decision. Moreover, they discuss the differences not only in cost but also in the advantages and disadvantages of various building methods and practices. Reuben remembers the blog post by Ryan comparing James Hardie vs. LP SmartSide. Ryan discusses the difference between Hardie and LP products as well as their warranty provisions. Tessa asks about the life expectancy of Hardie and LP products. Then, Bill clarifies the provisions in lifetime warranties. Visit Ryan Carey and the team at www.getmy3quotes.com.
Ryan Carey drops a massive pool of knowledge about the past, present, and growing future of golf memorabilia.
On this week's Fully Equipped, hosts Jonathan Wall, GOLF's Managing Editor for Equipment, Andrew Tursky, GOLF's Senior Editor for Equipment, and Kris McCormack, True Spec Golf's VP of Tour and Education talk this week's biggest gear stories including Rory McIlroy tossing his 3-wood on to the Jersey Turnpike, Collin Morikawa's cereal inspired head covers, and Cam Champ testing a PING i500 long iron. The episode then concludes with an exclusive interview featuring Golden Age Golf Auctions Founder Ryan Carey.
An agency's success or failure is reliant upon the owner's leadership skills to a certain extent. There are many elements that ultimately create an agency's legacy, but the strength of the leadership is one of the cornerstones that holds it all together. Most teams are built with people who have come up and through the agency world, so by the time they step into a leadership role, there's a risk that they're doing it the way everyone else has done it. If you want to have a next-level agency, it might be worth the effort to reexamine and sharpen your leadership approach. Ryan Carey is a Ranger-qualified Army vet who now heads up White Label IQ, our presenting sponsor here at the podcast. His background and experience have given him unique insights into leadership that allow him to come at agency life with an inspiring perspective. In this episode of Build a Better Agency, Ryan and I explore military-inspired leadership and how it can benefit agencies. These worlds might seem vastly different, but they both share a need for clarity, communication, trust, respectful disagreements, and unflappable dedication to the team. We talk about all of these things, sprinkled with Ryan's true-life stories from the front lines, with the hope of challenging and improving how you think about leadership. A big thank you to our podcast's presenting sponsor, White Label IQ. They're an amazing resource for agencies who want to outsource their design, dev, or PPC work at wholesale prices. Check out their special offer (10 free hours!) for podcast listeners here. What You Will Learn in This Episode: The need to ensure clarity on a client's definition of success Tips for avoiding micromanaging in your leadership The need for clear processes Why it's important to not get complacent, even when succeeding Absolute vs. participatory leadership How to respectfully disagree The importance of trust The challenges of splitting focus Military experience that benefits agency success "As you continue to develop a project with a client, you've got to continue to ensure it's aligning with where they're wanting to go because they're learning too in the whole process." - Ryan Carey "In leadership, it's easy to get into a micromanaging scenario, especially when it's client-facing." - Ryan Carey "I think it's very easy to become a line item on an expense report and not a benefit to the company." - Ryan Carey "The more that you build trust with people, the more that they're going to be able to say something when it matters." - Ryan Carey "People can do amazing things and when you give them leeway and trust, they can take that very far." - Ryan Carey Ways to contact Ryan Carey: Special offer: Get 10 Hours FREE on Development of any kind at www.whitelabeliq.com/ami Websites: https://www.whitelabeliq.com LinkedIn: https://www.linkedin.com/in/ryan-carey-wl/ Facebook: https://www.facebook.com/WhiteLabelIQ/ Email: ryanc@whitelabeliq.com Additional Resources: AE Bootcamp — September 14 & 15 in Chicago, IL Sell with Authority (buy Drew's book) Facebook Group for the Build a Better Agency Podcast My Future Self Mini-Course
AI safety researchers are increasingly focused on understanding what AI systems want. That may sound like an odd thing to care about: after all, aren’t we just programming AIs to want certain things by providing them with a loss function, or a number to optimize? Well, not necessarily. It turns out that AI systems can have incentives that aren’t necessarily obvious based on their initial programming. Twitter, for example, runs a recommender system whose job is nominally to figure out what tweets you’re most likely to engage with. And while that might make you think that it should be optimizing for matching tweets to people, another way Twitter can achieve its goal is by matching people to tweets — that is, making people easier to predict, by nudging them towards simplistic and partisan views of the world. Some have argued that’s a key reason that social media has had such a divisive impact on online political discourse. So the incentives of many current AIs already deviate from those of their programmers in important and significant ways — ways that are literally shaping society. But there’s a bigger reason they matter: as AI systems continue to develop more capabilities, inconsistencies between their incentives and our own will become more and more important. That’s why my guest for this episode, Ryan Carey, has focused much of his research on identifying and controlling the incentives of AIs. Ryan is a former medical doctor, now pursuing a PhD in machine learning and doing research on AI safety at Oxford University’s Future of Humanity Institute.
Ryan and I hop into a conversation immediately about the nature of cursing, drugs, reselling, religion, (etc.) and an all around theme of validation from your peers! This was definitely a more serious convo but just as entertaining as the last! Like and Subscribe as you please :) Instagram.com/ryan.carey20 Instagram.com/clout.alper
Corey Butler and Shawn Todd have an incredible chat with Ryan Carey - former CFL first-rounder, Canadian Forces Infantry Officer, and now folk guitarist and Invictus games athlete. Ryan doesn't hold back on his journey through Op Medusa in Afghanistan, his time healing with PTSD, and his strategies on being successful in pro sports, life, and resilience. A very powerful listen. Connect with Corey, Shawn, and Ecivda Financial on Instagram, FB, LinkedIn, YouTube & Twitter.
Way back in 2014, Tim and Mulele discussed the first volume of R.u.N. (Remember Ur Nature), a comic in shonen manga style about the sport of parkour. Now, at last, volume two is available, and Tim is joined by a new voice, Ryan Carey of SOLRAD, to discuss the book (by Kariofillis Chris Hatzopoulos, Rafail …
Ryan Carey is one of golf's top memorabilia experts. He is a co-founder of the highly reputable company Golden Age Golf Auctions. Since founding the company with his partner Bob Zafian in 2006, Carey has become the go-to figure for those looking to buy or sell the greatest treasures from golf history. With a Rolodex that spans the globe, he has helped bring hard-to-find items like major championship trophies, rare clubs, limited-edition books, early golf photography, autographed items, and other unique finds to market. You may remember the story of his company (originally named Green Jacket Auctions) getting sideways with the folks at Augusta National Golf Club after selling a few of their precious coats. Today, with a new name and an impressive track record, the phone is ringing more than ever. The golf boom brought on by the pandemic has only grown the market for collectors, and Golden Age Auctions continues to deliver. Carey joins me on Mid-Am Crisis to talk about his most recent auction, which was headlined by a collection of Gary Player's major trophies. We also talked about his personal collection, what items he imagines will be hot in the coming years, and a few artifacts he wishes he could get his hands on. Beyond selling the coolest stuff in golf, Ryan Carey is also a great guy and a true lover of golf. I'm hoping that we can tee it up and talk some more history soon. Every time we chat I learn something new about the game. This episode is no different. To learn more about Ryan Carey and his company, please visit https://goldenagegolfauctions.com/ To find more of my musings on golf, including my book, The Nine Virtues of Golf, check out https://jayrevell.com/
Great leaders elevate organizations and communities, while simultaneously giving back. Join Business Development Specialist of Chalmers Insurance Group, Carissa Chipman, as she interviews local leaders who champion important causes in their community. Whether you own a company or support a local passion project, this is the podcast for you. Today we hear from Ryan Carey, owner of Noble Barbecue, Fire & Company and Pizza Pie On The Fly: how social media can inspire people; the value of partnering with local businesses; how to use the pandemic as a reset button; and how being adaptive can help a business thrive.
Sports have always played a key role in the life of Retired Captain Ryan Carey, who played in the Canadian Football League before joining the Canadian Armed Forces. He joins us to talk about how returning to those competitive roots to prepare to represent his country in archery, sitting volleyball, and swimming at the Invictus Games has impacted his life.
Go ahead and name a golf memorabilia item. Chances are, Golden Age Auctions has sold it. Green jackets? Check. Masters trophies? Check. Tiger Woods' golf clubs he used during the Tiger Slam? Check! This week, we dive into the world of golf auctions — one is live right now! — and learn everything there is to know about buying and selling golf history. Eventually, co-founder Ryan Carey entertains us with his top 3 fantasy golf auction items, which might be the next great golf debate.
Which six videogames would you only play during quarantine for the rest of your life? We discuss best COD, Halo, Fifa, 2k and more in this week's episode of The Corey Walsh Podcast.
My guest is Ryan Carey, founder of BetterOn Video. Ryan was an early YouTube employee whose job it was to be a video evangelist - persuading companies to use video to tell their story. In order to do that, he had to train people on how to be great on video. What he learned by doing this is that authenticity works. This is a lesson you can use to get good at video interviews.
On this episode of the Slashenings Podcast, I chat with artist and musician Ryan Carey. We discuss his inspirations for his work, Stephen King, what makes the idea of the occult creepy, and more! Artwork by Slasher Dave. Music by Altar of Ash. Editing by JR. You can follow the show on Instagram, Facebook, Twitter
In this week's episode, Corey, Aram, and Ryan discuss which movies you would bring onto a quarantine island. With only one movie from each series allowed, there's plenty of debate about which entry in each series is best.
It's a fantasy draft where Corey, Aram, and Ryan go through an 8-round draft of the shows they would like to bring onto a corona-lifestyle-esque quarantine island.
“Never buy golf memorabilia as an investment; buy it because you love the item. But if you get your hands on the right item at the right price, you might accidentally make money on it.” - Ryan Carey, Owner, Green Jacket Auctions, Boston, Massachusetts. Ryan Carey is exactly the type of person that the Golf Yeah podcast was created to showcase. A lawyer by training, and a practicing attorney, around 13 years ago Ryan Carey and his partner Bob Zafian identified an opportunity in the world of golf memorabilia, and in 2006 they created an online business called “Green Jacket Auctions” that’s grown into the largest auction house of its kind. To give you an idea of the scope of Ryan’s business, its most recent auction featured more than 1,000 lots, with items ranging in price from $25 to more than $200,000. To appreciate those numbers, this morning I searched eBay for golf memorabilia, and found only 124 items…ranging from a Francis Ouimet trophy listed at around $20,000, to an unsigned photo of Jack Nicklaus kissing the Claret Jug for $5.55. Business partners Ryan Carey and Bob Zafian have established Green Jacket Auctions’ reputation as the world’s leading auction house for golf memorabilia. Based on their passion for golf, and for collecting other rare sports items, Ryan and Bob have created what’s become a thriving enterprise that’s expanded into related services, including appraisals, authentication, and private searches for hard-to-find collectables. And in the process, they’ve established reputations as leading authorities on just about everything related to golf…and, as their company name suggests, everything related to the Masters Tournament. I had not met nor spoken with Ryan prior to his interview, but from his footprints on the internet I learned that he’s a native of Tampa, Florida, an avid golfer, a fly fisherman, and a lover of fine wines. He currently resides in Boston, and I hoped he’d made the conversion to being a Red Sox fan. As you’ll learn in his interview, he’s a Tampa Bay Rays fan. Show Highlights: How a shared interest in golf with a stranger he met online led to the creation of Green Jacket Auctions The public showdown with Tiger Woods that served as the first major catalyst for business growth Why collectors are better off buying a few high-priced items, rather than a lot of less expensive items The rationale for keeping an online archive of every item that’s ever been sold on Green Jacket Auctions What makes most autographed items from golf professionals less valuable than other sports autographs Ryan’s “official” announcement regarding the recent dispute with Augusta National Golf Club How Green Jacket Auctions took the market for autographed golf items away from eBay Simple advice for people who are interested in starting a collection of golf memorabilia Notable Quotes: On starting the business: “For both Bob and me, this was our side gig that we were not telling our employers about.” On their dispute with Tiger Woods: “In 2010, we were still pretty small when Tiger talked some shit about us; and that was the best thing that ever happened to us.” On the golf memorabilia market dynamics: “The market is dominated by a couple dozen very large collectors around the world.
They move the bigger ticket items that drive our auction.” On limiting the number of auctions to 3 each year: “Instead of running auctions every month, we like to make them special events and create buzz around them. We’ve got a great core of followers that love to follow our auctions, and if we have them all the time they will feel less special.” On...
One of the more fascinating corners of golf history is collecting, a multi-million-dollar business that spans the globe. The biggest and most influential auction house in golf is Green Jacket Auctions, started by Bob Zafian and Ryan Carey in 2006. Bob joins Episode 7 of the podcast to talk about how he got started in collecting, some of the best (and weirdest) items he's been offered for sale, and advice on how a beginner should approach getting started in the trade. Links mentioned in this episode:
Barnbougle Dunes Course Study Tour (https://www.talkingolf.com/gca-tours)
Green Jacket Auctions (https://greenjacketauctions.com/about.htm)
Connor Lewis on Twitter (https://twitter.com/SHistorians)
Society of Golf Historians on Facebook (https://www.facebook.com/groups/722105418132700/)
Rod Morri on Twitter (https://twitter.com/Rod_Morri)
Special Guest: Bob Zafian.
Ricky Gray had a very traumatic childhood that would inevitably lead him down the path of a murderer. On New Year's Eve 2005, Ricky and his nephew, Ray, began their 7-day attack-and-murder spree that would leave one man in a coma (Ryan Carey) and seven dead (the Harvey family and the Baskerville-Tucker family). To end this episode on a high note, we finish with Patreon songs for our new members! Podcast rec of the week is Karaok Big E, so go check it out! https://itunes.apple.com/us/podcast/karaok-big-e/id1323806398?mt=2
If you have a Stranger Danger story, send it to itsaboutdamncrime@gmail.com or submit it on our website.
Website - ItsAboutDamnCrime.com
Please consider supporting our show by joining Patreon.com/iadcpodcast
Want to keep up on our Jams of the Week? You can listen to our playlist at https://open.spotify.com/user/125157452/playlist/41vsfx8vrUkKHVX9AD8A3m?si=97_uWGJCQEqvtfZtDzYTaQ
Facebook, FB Group, & Instagram - Its About Damn Crime
Twitter & Snapchat - iadcpodcast
We are thrilled to discuss and explore the mysteries of the Avocado with Ryan Carey, friend of the Prod, and Jan Delyser, our featured guest and avocado expert, on the first episode of Prodcast from the Produce Aisle.
Part 4 of 4 is finally here! Bill and Brian welcome a new set of guests to close out our discussion of our favorite songs from every Beatles album. Joining us are musician Jim McGee (thearrivist.bandcamp.com), blogger/freelance journalist Ryan Carey (The Inappropriate Thesaurus), musician Ed Pratico (bassist for Jesse Elliot's band), and blogger Beanie Zee (sirkaytheawesome.com) as we talk about the White Album, Yellow Submarine, Abbey Road, and Let It Be!
Bill and Brian welcome substitute guest Matt Pischl (Ryan Carey unfortunately had to go have Valentine's dinner; you'll be hearing Matt talk about a group with the initials DMB in a couple of weeks) to discuss "Dream of the Wild Horses" by Gary Lucas, a song that would have been worked into a collaboration between Jeff Buckley and Lucas had the singer's untimely death not halted progress on the follow-up to Buckley's Grace. Brian, Bill, and Matt discuss how this song would have fit in Buckley's catalog and how they can hear Buckley in it. We also read a listener email that gives great insight into poetic lyrics, touching on Bob Dylan, Paul Westerberg, and Kurt Cobain.
Bill and Brian welcome journalist and blogger (The Inappropriate Thesaurus) Ryan Carey to talk about Jeff Buckley's landmark, and only, album, Grace (1994, Columbia). Although critically revered, the album never became a commercial success within his lifetime (he tragically died in a drowning accident in 1997 at the age of 30), but it has since gone on to become one of the most respected and well-known albums of the '90s. Buckley was the son of folk singer Tim Buckley, who gained attention in the '70s before his own untimely death. Although he tried to distance himself from his father, Buckley ended up following in his footsteps as a skilled musician and uncanny singer. (After a slight detour into a discussion about politics) Bill, Brian, and Ryan talk about how Ryan discovered Buckley's music a little later than others, the epic nature of each song on this complex album, Buckley's start in a New York City coffee house, his perfect hair, his legacy as an artist with a single album, Gary Lucas's influence, how perfect "Hallelujah" really is, co-writes and covers, William Wordsworth, and, as always, a track-by-track review.
Ryan Carey - Photography Education and Experience
Today's featured guest is Ryan Carey. It often happens during my interviews that we find a common theme to discuss, and this interview with Ryan Carey was no different. Ryan reminds us of the importance of education, and she expands on that by noting that experience is a big part of that education. I find that many who study or self-teach photography get to a point where they are prepared to make it a business. This is one of the hardest things to do, and many new photographers fail because they were not fully prepared for the business side of things. Assisting other photographers, reaching out to your local photographers for a cup of coffee to pick their brains, attending workshops and seminars, and taking on free jobs in those early stages all help you learn more about the true practice of running a photography business. Ryan is an Orlando-based wedding and lifestyle photographer with her business, Earth Muse; however, she travels around the world working with clients such as Bank of America, Avon, and Tupperware. Ryan has an extensive list of CEOs and presidents of organizations she has worked with to fill out the commercial side of her business. Ryan studied at Daytona State College of Photography and is in her eleventh year working as a professional photographer. She has documented and photographed for national accounts such as Bank of America, Polaris, Swarovski, Jockey, Tupperware, Avon, and Pampered Chef. Ryan has shot for and worked closely with the CEOs and presidents of these organizations, as well as Patrick Duffy of the hit television show Dallas, Big Bad Voodoo Daddy, General Michael Hayden (former director of the CIA and NSA), Captain James Lovell of Apollo 13, and the creators of Proactiv, to name a few. Ryan travels around the world shooting weddings, corporate, and lifestyle photography. She is passionate about her art and enjoys working with and getting to know each one of her clients.
Before this podcast Ryan and I were not best friends. After this podcast we're still not best friends, but we're making progress.