Podcast appearances and mentions of Holly Mandel

  • 9 podcasts
  • 14 episodes
  • 52m average duration
  • Infrequent episodes
  • Latest episode: Jun 25, 2024

Popularity chart: 2017–2024



Latest podcast episodes about Holly Mandel

The Nonlinear Library
AF - Formal verification, heuristic explanations and surprise accounting by Jacob Hilton

Jun 25, 2024 · 14:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Formal verification, heuristic explanations and surprise accounting, published by Jacob Hilton on June 25, 2024 on The AI Alignment Forum.

ARC's current research focus can be thought of as trying to combine mechanistic interpretability and formal verification. If we had a deep understanding of what was going on inside a neural network, we would hope to be able to use that understanding to verify that the network was not going to behave dangerously in unforeseen situations. ARC is attempting to perform this kind of verification, but using a mathematical kind of "explanation" instead of one written in natural language. To help elucidate this connection, ARC has been supporting work on Compact Proofs of Model Performance via Mechanistic Interpretability by Jason Gross, Rajashree Agrawal, Lawrence Chan and others, which we were excited to see released along with this post. While we ultimately think that provable guarantees for large neural networks are unworkable as a long-term goal, we think that this work serves as a useful springboard towards alternatives.

In this post, we will:
• Summarize ARC's takeaways from this work and the problems we see with provable guarantees
• Explain ARC's notion of a heuristic explanation and how it is intended to overcome these problems
• Describe, with the help of a worked example, how the quality of a heuristic explanation can be quantified, using a process we have been calling surprise accounting

We are also sharing a draft by Gabriel Wu (currently visiting ARC) describing a heuristic explanation for the same model that appears in the above paper: max_of_k Heuristic Estimator. Thanks to Stephanie He for help with the diagrams in this post. Thanks to Eric Neyman, Erik Jenner, Gabriel Wu, Holly Mandel, Jason Gross, Mark Xu, and Mike Winer for comments.

Formal verification for neural networks

In Compact Proofs of Model Performance via Mechanistic Interpretability, the authors train a small transformer on an algorithmic task to high accuracy, and then construct several different formal proofs of lower bounds on the network's accuracy. Without foraying into the details, the most interesting takeaway from ARC's perspective is the following picture: in the top right of the plot is the brute-force proof, which simply checks every possible input to the network. This gives the tightest possible bound, but is very long. Meanwhile, in the bottom left is the trivial proof, which simply states that the network is at least 0% accurate. This is very short, but gives the loosest possible bound. In between these two extremes, along the orange Pareto frontier, there are proofs that exploit more structure in the network, leading to tighter bounds for a given proof length, or, put another way, shorter proofs for a given bound tightness.

It is exciting to see a clear demonstration that shorter proofs better explain why the neural network has high accuracy, paralleling a common mathematical intuition that shorter proofs offer more insight. One might therefore hope that if we understood the internals of a neural network well enough, then we would be able to provide provable guarantees for very complex behaviors, even when brute-force approaches are infeasible. However, we think that such a hope is not realistic for large neural networks, because the notion of proof is too strict.

The basic problem with provable guarantees is that they must account for every possible way in which different parts of the network interact with one another, even when those interactions are incidental to the network's behavior. These interactions manifest as error terms, which the proof must provide a worst-case bound for, leading to a looser bound overall. The above picture provides a good demonstration of this: moving towards the left of the plot, the best bound gets looser and looser. Mo...
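To make the two extremes of that proof-length versus bound-tightness trade-off concrete, here is a minimal sketch. It is not code from the paper: the toy model, vocabulary size, task, and function names are all illustrative assumptions. It enumerates every input to a tiny model to obtain the exact accuracy (the analogue of the brute-force proof) and contrasts that with the trivial 0% lower bound.

```python
# Minimal sketch of the brute-force vs. trivial accuracy lower bounds discussed
# above. The "model", vocabulary, and task are illustrative assumptions, not
# the transformer from the Compact Proofs paper.
import itertools

VOCAB = range(4)   # tiny input alphabet so full enumeration stays feasible
SEQ_LEN = 3

def true_label(xs):
    # Toy algorithmic task: output the maximum of the input sequence.
    return max(xs)

def tiny_model(xs):
    # Stand-in for a trained network: usually predicts the max, but makes a
    # mistake whenever the first and last tokens coincide.
    if xs[0] == xs[-1]:
        return (max(xs) + 1) % len(VOCAB)
    return max(xs)

def brute_force_bound():
    # "Brute-force proof": check every possible input. The bound is exact
    # (tightest possible), but the certificate is as large as the input space.
    inputs = list(itertools.product(VOCAB, repeat=SEQ_LEN))
    correct = sum(tiny_model(xs) == true_label(xs) for xs in inputs)
    return correct / len(inputs)

def trivial_bound():
    # "Trivial proof": the model is at least 0% accurate. The shortest
    # possible certificate, and the loosest possible bound.
    return 0.0

if __name__ == "__main__":
    print(f"brute-force lower bound: {brute_force_bound():.3f}")  # 0.750 here
    print(f"trivial lower bound:     {trivial_bound():.3f}")      # 0.000
```

A proof in between the two extremes, as on the paper's Pareto frontier, would exploit structure in the model (here, the fact that errors only occur when the first and last tokens match) to certify a nontrivial bound without enumerating every input.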

The Nonlinear Library
AF - Common misconceptions about OpenAI by Jacob Hilton

Aug 25, 2022 · 9:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Common misconceptions about OpenAI, published by Jacob Hilton on August 25, 2022 on The AI Alignment Forum.

I have recently encountered a number of people with misconceptions about OpenAI. Some common impressions are accurate, and others are not. This post is intended to provide clarification on some of these points, to help people know what to expect from the organization and to figure out how to engage with it. It is not intended as a full explanation or evaluation of OpenAI's strategy. The post has three sections: common accurate impressions, common misconceptions, and personal opinions. The bolded claims in the first two sections are intended to be uncontroversial, i.e., most informed people would agree with how they are labeled (correct versus incorrect). I am less sure about how commonly believed they are. The bolded claims in the last section I think are probably true, but they are more open to interpretation and I expect others to disagree with them.

Note: I am an employee of OpenAI. Sam Altman (CEO of OpenAI) and Mira Murati (CTO of OpenAI) reviewed a draft of this post, and I am also grateful to Steven Adler, Steve Dowling, Benjamin Hilton, Shantanu Jain, Daniel Kokotajlo, Jan Leike, Ryan Lowe, Holly Mandel and Cullen O'Keefe for feedback. I chose to write this post and the views expressed in it are my own.

Common accurate impressions

Correct: OpenAI is trying to directly build safe AGI. OpenAI's Charter states: "We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome." OpenAI leadership describes trying to directly build safe AGI as the best way to currently pursue OpenAI's mission, and has expressed concern about scenarios in which a bad actor is first to build AGI and chooses to misuse it.

Correct: the majority of researchers at OpenAI are working on capabilities. Researchers on different teams often work together, but it is still reasonable to loosely categorize OpenAI's researchers (around half the organization) at the time of writing as approximately:
• Capabilities research: 100
• Alignment research: 30
• Policy research: 15

Correct: the majority of OpenAI employees did not join with the primary motivation of reducing existential risk from AI specifically. My strong impressions, which are not based on survey data, are as follows. Across the company as a whole, a minority of employees would cite reducing existential risk from AI as their top reason for joining. A significantly larger number would cite reducing risk of some kind, or other principles of beneficence put forward in the OpenAI Charter, as their top reason for joining. Among people who joined to work in a safety-focused role, a larger proportion would cite reducing existential risk from AI as a substantial motivation for joining, compared to the company as a whole. Some employees have become motivated by existential risk reduction since joining OpenAI.

Correct: most interpretability research at OpenAI stopped after the Anthropic split. Chris Olah led interpretability research at OpenAI before becoming a cofounder of Anthropic. Although several members of Chris's former team still work at OpenAI, most of them are no longer working on interpretability.

Common misconceptions

Incorrect: OpenAI is not working on scalable alignment. OpenAI has teams focused both on practical alignment (trying to make OpenAI's deployed models as aligned as possible) and on scalable alignment (researching methods for aligning models that are beyond human supervision, which could potentially scale to AGI). These teams work closely with one another. Its recently released alignment research includes self-critiquing models (AF discussion), InstructGPT, WebGPT (AF discussion) and book summarization (AF discussion). OpenAI's ap...

The Nonlinear Library
AF - ELK First Round Contest Winners by Mark Xu

Jan 26, 2022 · 1:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ELK First Round Contest Winners, published by Mark Xu on January 26, 2022 on The AI Alignment Forum.

Thank you to all those who have submitted proposals to the ELK proposal competition. We have evaluated all proposals submitted before January 14th[1]. Decisions are still being made on proposals submitted after January 14th. The deadline for submissions is February 15th, after which we will release summaries of the proposals and associated counterexamples.

We evaluated 30 distinct proposals from 25 people. We awarded a total of $70,000 for proposals from the following 8 people:
• $20,000 for a proposal from Sam Marks
• $10,000 for a proposal from Dmitrii Krasheninnikov
• $5,000 for a proposal from Maria Shakhova
• $10,000 for proposals from P, who asked to remain anonymous
• $10,000 for a proposal from Scott Viteri
• $5,000 for a proposal from Jacob Hilton and Holly Mandel
• $10,000 for a proposal from Jacob Hilton

We would also have awarded $15,000 for proposals from Holden Karnofsky; however, he is on ARC's board and is ineligible to receive the prize.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - Truthful LMs as a warm-up for aligned AGI by Jacob Hilton

Jan 17, 2022 · 21:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Truthful LMs as a warm-up for aligned AGI, published by Jacob Hilton on January 17, 2022 on The AI Alignment Forum.

This post is heavily informed by prior work, most notably that of Owain Evans, Owen Cotton-Barratt and others (Truthful AI), Beth Barnes (Risks from AI persuasion), Paul Christiano (unpublished) and Dario Amodei (unpublished), but was written by me and is not necessarily endorsed by those people. I am also very grateful to Paul Christiano, Leo Gao, Beth Barnes, William Saunders, Owain Evans, Owen Cotton-Barratt, Holly Mandel and Daniel Ziegler for invaluable feedback.

In this post I propose to work on building competitive, truthful language models, or truthful LMs for short. These are AI systems that are:
• Useful for a wide range of language-based tasks
• Competitive with the best contemporaneous systems at those tasks
• Truthful, in the sense of rarely stating negligent falsehoods in deployment

Such systems will likely be fine-tuned from large language models such as GPT-3, hence the name. WebGPT is an early attempt in this direction. The purpose of this post is to explain some of the motivation for building WebGPT, and to seek feedback on this direction.

Truthful LMs are intended as a warm-up for aligned AGI. This term is used in a specific way in this post to refer to an empirical ML research direction with the following properties:
• Practical. The goal of the direction is plausibly achievable over the timescale of a few years.
• Valuable. The direction naturally leads to research projects that look helpful for AGI alignment.
• Mirrors aligned AGI. The goal is structurally similar to aligned AGI on a wide variety of axes.

The remainder of the post discusses:
• The motivation for warm-ups
• Why truthful LMs serve as a good warm-up
• The motivation for focusing on negligent falsehoods specifically
• A medium-term vision for truthful LMs
• How working on truthful LMs compares to similar alternatives
• Common objections to working on truthful LMs

Warm-ups for aligned AGI

There are currently a number of different empirical ML research projects aimed at helping with AGI alignment. A common strategy for selecting such projects is to select a research goal that naturally leads to helpful progress, such as summarizing books or rarely describing injury in fiction. Often, work on the project is output-driven, taking a no-holds-barred approach to achieving the selected goal, which has a number of advantages that aren't discussed here. On the other hand, goal selection is usually method-driven, tailored to test a particular method, such as recursive decomposition or adversarial training.

The idea of a warm-up for aligned AGI, as defined above, is to take the output-driven approach one step further. Instead of selecting projects individually, we attempt to choose a more ambitious research goal that naturally leads to helpful projects. Because it is harder to predict the course of research over multiple projects, we also try to make the goal structurally similar to aligned AGI, to make it more likely that unforeseen and auxiliary projects will also be valuable. Whether this output-driven approach to project selection is preferable to the method-driven approach depends on more specific details that will be discussed later.

But it is worth discussing first the advantages of each approach in broad strokes:
• Momentum versus focus. The output-driven approach involves having a consistent high-level goal, which allows different projects to more directly build upon and learn from one another. On the other hand, the method-driven approach involves more frequent re-evaluation of goals, posing less of a risk of being distracted from the even higher-level goal of aligned AGI.
• Testing assumptions versus testing methods. The output-driven approach makes it easie...

Living Connected - NVC
Energy Flows with Holly Mandel

Aug 24, 2021 · 62:32


Holly Mandel has inspired so many people through the improv work that she does. She created the improv school Improvolution, hosts a podcast, and wrote a book called "Good Girls Aren't Funny". Her work with women and girls focuses on perfectionism, the limits of obeying cultural 'shoulds', and liberating oneself for the sake of all women. Holly has her degree in sociology and psychology from UCLA. She was a joy to talk to, and I am super excited to share the time we had together with you listeners.

There are so many avenues that NVC can be integrated into, and we want to explore strategies that can also incorporate NVC; the more strategies we have in our tool belt, the better. Maybe this way of looking at the world works for you. Holly and I talk a lot about the energy that our thoughts produce and what we can manifest just by the energy we put towards certain things: what this energy is like, and how it can affect our everyday mood and thoughts. Our mindset and energy can be so powerful, and we can see changes with just a bit of practice. Having this awareness of our energy can bring awareness to our feelings and needs in tough moments and calm moments.

Holly's resources:
1) https://groundlings.com/people/holly-mandel
2) https://www.hollymandel.com/about
3) https://www.improvolution.org/about-us
Book: Good Girls Aren't Funny: https://www.goodgirlsarentfunny.com/

Contact information:
Email: Livingconnected.nvc@gmail.com
Instagram: livingconnectednvc
Facebook Page: https://www.facebook.com/LivingConnectedNVC/
Website: https://www.buzzsprout.com/1153175

Music is brought to you by: https://www.purple-planet.com/

Electric Priests Podcast
Episode 7 - Holly Mandel

Jul 14, 2021 · 101:04


Sean chats with Holly about her time at The Groundlings, character improv, and setting up her own improv school in New York, Improvolution.

How's Your Mother?
Holly Mandel of Improvolution and Groundlings joins Kendra

Dec 17, 2020 · 47:07


How’s Your Mother? host Kendra Cunningham is joined by Holly Mandel: founder/owner of Improvolution, performer, hustler, entrepreneur, speaker, and good person.
Holly Mandel: https://www.hollymandel.com/about
Improvolution: http://www.improvolution.org/
The How’s Your Mother? Patreon is live! You can support the production of the podcast and comedic series at https://www.patreon.com/kendracunningham
www.kendracunningham.com | Twitter/Instagram: @theotherkendra
Theme song "How’s Your Mother?" by the fabulous Rebecca Vigil (www.rebeccavigil.com) and the amazing Dan Reitz (www.danreitz.com)

Growth From Failure
Holly Mandel - Improv Specialist, Entrepreneur and Ultimate Inspirer on the Power of Yes, And...

Jun 17, 2020 · 47:16


This is the story of Holly Mandel. In this episode, Holly and I discuss how improv changed her life. Her journey took her from St. Louis, Missouri to Los Angeles. After graduating from UCLA, she worked at Walt Disney Studios and eventually became a Groundlings member. After six wonderful years with The Groundlings as a main company member, she moved to New York and opened her own improv school, Improvolution. On the back of that success, she also founded iMergence, a training program that uses the technology of improvisation to train corporate clients like Netflix, PBS, Comedy Central and dozens more. Holly helps unpack the mantra of improv, the two words "Yes, and...". This conversation is packed with sociology and psychology, and it helped me understand that improv (and "Yes, and") is a way of life. Holly inspires people to let go, trust, listen, learn, and ultimately grow. I have no doubt you'll enjoy this conversation with the wonderful Holly Mandel. Please enjoy. - Yinh

Improv Comedy Connection
COVID-19 Summit -- How the Improv World Is Responding to COVID-19

Mar 30, 2020 · 110:44


This conversation, hosted by Whit Shiller with Joe Bill, Holly Mandel, Ed Morgan, Rebecca Northan, David Razowsky, Patti Stiles, and Jay Sukow, is simply a conversation about how each of them is doing individually, and how our improv communities and the global improv community are doing in the face of COVID-19.

Improv Comedy Connection
Holly Mandel

Dec 15, 2019 · 68:28


Holly Mandel has a diverse career: she is a speaker, inspirer, creative consultant, improviser, director, teacher, writer and entrepreneur. One town isn’t big enough to contain her, so she splits her time between LA and New York. In this episode, we start with a conversation about improv, including an exploration of character-based improv, which Holly grew up in at the Groundlings and continued in Improvolution in New York and Los Angeles. This episode also goes into Holly’s highly regarded Good Girls Aren’t Funny workshops and presentations, which explore and seek to understand differences in gender experiences in improv and beyond. A great place to survey all the things that Holly does is hollymandel.com, where you can also get links to Good Girls Aren’t Funny and Improvolution.

Shezam
024 – Interview – Holly Mandel

Mar 4, 2019 · 64:51


This week’s episode contains some language; listener discretion advised! This week we have a very special guest… Holly Mandel! You can check out some of the really cool things she does at her website.

Minor Revelations with Drew Droege
Falling Off Rockets

Oct 10, 2016 · 53:54


Brian Jordan Alvarez and Holly Mandel join us this week to unveil the hidden pleasures of singledom, Adderall, and making ghastly mistakes in our youth that can maybe eventually one day propel us into the fireworks we were meant to be. Of course, we're still waiting for that moment.

Feminist Crush
Ep. 13 - Holly Mandel

Sep 16, 2016 · 61:30


Through comedy, feminist improviser and creator of goodgirlsarentfunny.com Holly Mandel empowers funny women to say HELL yes, and... "f*ck it!", to bust their inner good girls, and to "build their own sandboxes" in which to play (No boys necessary!).
