Podcasts about vnm

  • 30 podcasts
  • 62 episodes
  • 31m average episode duration
  • 1 new episode per month
  • Latest episode: Mar 14, 2025

POPULARITY

(Chart: popularity by year, 2017–2024)


Best podcasts about vnm

Latest podcast episodes about vnm

VOV - Kinh tế Tài chính
Trước giờ mở cửa - Foreign investors unexpectedly turned net buyers in yesterday afternoon's session

Mar 14, 2025 · 4:54


VOV1 - Foreign investors reversed course to net buy more than VND 83 billion in the March 13 session, ending their previous run of net selling. VIC and SSI led the net-buying list, while VCB and VNM continued to be sold off heavily. Even so, selling pressure remained on many blue-chip stocks.

The Disciplined Investor
TDI Podcast: Carson Block Rocks! (#906)

Feb 2, 2025 · 65:52


Markets get a gut punch on the wild AI ride. Big tech reporting – some interesting moves. Inflation – PCE inline. And our guest, Carson Block, Founder of Muddy Waters Research.

NEW! DOWNLOAD THIS EPISODE'S AI GENERATED SHOW NOTES (Guest Segment)

Carson Block is the Chief Investment Officer of Muddy Waters Capital LLC, an activist investment firm. Muddy Waters conducts extensive due diligence based investment research on companies around the globe. Mr. Block is also the founder of Zer0es TV (www.zer0es.tv), an online channel dedicated to short selling related video content. Bloomberg Markets Magazine named Mr. Block as one of the “50 Most Influential in Global Finance” in 2011. The following year, Muddy Waters received the prestigious Boldness in Business Award from the Financial Times. In September 2015, Mr. Block was featured in the book The Most Dangerous Trade: How Short Sellers Uncover Fraud, Keep Markets Honest, and Make and Lose Billions, by former Bloomberg writer Richard Teitelbaum. He is also featured in the 2018 documentary The China Hustle. Mr. Block appears frequently as a commentator on Bloomberg Television, CNBC and the BBC. He has written op-eds in the Wall Street Journal, Financial Times, and New York Times on various topics related to improving corporate governance and market transparency. Prior to forming Muddy Waters, Mr. Block was an entrepreneur in China and worked as a lawyer in the Shanghai office of the U.S. law firm Jones Day. In 2007, he co-authored Doing Business in China for Dummies, a primer on doing business in China. He holds a B.S. in business from the University of Southern California and a J.D. from the Chicago-Kent College of Law, where he has also served as an adjunct professor.

Follow @muddywatersre
Learn More at http://www.ibkr.com/funds
Follow @andrewhorowitz

Looking for style diversification? More information on the TDI Managed Growth Strategy - https://thedisciplinedinvestor.com/blog/tdi-strategy/
eNVESTOLOGY Info - https://envestology.com/

Stocks mentioned in this episode: (CVNA), (TSLA), (GLD), (VNM)

The Nonlinear Library
AF - The Obliqueness Thesis by Jessica Taylor

Sep 19, 2024 · 30:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Obliqueness Thesis, published by Jessica Taylor on September 19, 2024 on The AI Alignment Forum. In my Xenosystems review, I discussed the Orthogonality Thesis, concluding that it was a bad metaphor. It's a long post, though, and the comments on orthogonality build on other Xenosystems content. Therefore, I think it may be helpful to present a more concentrated discussion on Orthogonality, contrasting Orthogonality with my own view, without introducing dependencies on Land's views. (Land gets credit for inspiring many of these thoughts, of course, but I'm presenting my views as my own here.) First, let's define the Orthogonality Thesis. Quoting Superintelligence for Bostrom's formulation: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal. To me, the main ambiguity about what this is saying is the "could in principle" part; maybe, for any level of intelligence and any final goal, there exists (in the mathematical sense) an agent combining those, but some combinations are much more natural and statistically likely than others. Let's consider Yudkowsky's formulations as alternatives. Quoting Arbital: The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal. The strong form of the Orthogonality Thesis says that there's no extra difficulty or complication in the existence of an intelligent agent that pursues a goal, above and beyond the computational tractability of that goal. As an example of the computational tractability consideration, sufficiently complex goals may only be well-represented by sufficiently intelligent agents. "Complication" may be reflected in, for example, code complexity; to my mind, the strong form implies that the code complexity of an agent with a given level of intelligence and goals is approximately the code complexity of the intelligence plus the code complexity of the goal specification, plus a constant. Code complexity would influence statistical likelihood for the usual Kolmogorov/Solomonoff reasons, of course. I think, overall, it is more productive to examine Yudkowsky's formulation than Bostrom's, as he has already helpfully factored the thesis into weak and strong forms. Therefore, by criticizing Yudkowsky's formulations, I am less likely to be criticizing a strawman. I will use "Weak Orthogonality" to refer to Yudkowsky's "Orthogonality Thesis" and "Strong Orthogonality" to refer to Yudkowsky's "strong form of the Orthogonality Thesis". Land, alternatively, describes a "diagonal" between intelligence and goals as an alternative to orthogonality, but I don't see a specific formulation of a "Diagonality Thesis" on his part. Here's a possible formulation: Diagonality Thesis: Final goals tend to converge to a point as intelligence increases. The main criticism of this thesis is that formulations of ideal agency, in the form of Bayesianism and VNM utility, leave open free parameters, e.g. priors over un-testable propositions, and the utility function. Since I expect few readers to accept the Diagonality Thesis, I will not concentrate on criticizing it. What about my own view? I like Tsvi's naming of it as an "obliqueness thesis". Obliqueness Thesis: The Diagonality Thesis and the Strong Orthogonality Thesis are false. 
Agents do not tend to factorize into an Orthogonal value-like component and a Diagonal belief-like component; rather, there are Oblique components that do not factorize neatly. (Here, by Orthogonal I mean basically independent of intelligence, and by Diagonal I mean converging to a point in the limit of intelligence.) While I will address Yudkowsky's arguments for the Orthogonality Thesis, I think arguing directly for my view first will be more helpful. In general, it seems ...
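
The complexity claim quoted in the excerpt can be written out explicitly. The following is a minimal sketch in standard Kolmogorov-complexity notation; the symbols are ours, not the author's: K(·) denotes description length and A_{I,G} is a hypothetical label for an agent combining intelligence level I with goal specification G.

```latex
% Strong Orthogonality, as paraphrased in the excerpt: an agent combining
% intelligence I with goal G costs roughly the sum of the two parts,
% plus a constant.
\[
  K(A_{I,G}) \approx K(I) + K(G) + O(1)
\]
% The "usual Kolmogorov/Solomonoff reasons": a simplicity prior weights a
% description of length K roughly as 2^{-K}, so under the strong form
\[
  \Pr(A_{I,G}) \propto 2^{-K(A_{I,G})} \approx 2^{-K(I)} \cdot 2^{-K(G)}
\]
```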

The Nonlinear Library
LW - Executable philosophy as a failed totalizing meta-worldview by jessicata

Sep 5, 2024 · 7:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Executable philosophy as a failed totalizing meta-worldview, published by jessicata on September 5, 2024 on LessWrong. (this is an expanded, edited version of an x.com post) It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this goal and has little hope of achieving it. That's not a particularly interesting analysis, however. A priori, creating a machine that makes things ok forever is not a particularly plausible objective. Failure to do so is not particularly informative. So I'll focus on a different but related project of his: executable philosophy. Quoting Arbital: Two motivations of "executable philosophy" are as follows: 1. We need a philosophical analysis to be "effective" in Turing's sense: that is, the terms of the analysis must be useful in writing programs. We need ideas that we can compile and run; they must be "executable" like code is executable. 2. We need to produce adequate answers on a time scale of years or decades, not centuries. In the entrepreneurial sense of "good execution", we need a methodology we can execute on in a reasonable timeframe. There is such a thing as common sense rationality, which says the world is round, you shouldn't play the lottery, etc. Formal notions like Bayesianism, VNM utility theory, and Solomonoff induction formalize something strongly related to this common sense rationality. Yudkowsky believes further study in this tradition can supersede ordinary academic philosophy, which he believes to be conceptually weak and motivated to continue ongoing disputes for more publications. In the Sequences, Yudkowsky presents these formal ideas as the basis for a totalizing meta-worldview, of epistemic and instrumental rationality, and uses the meta-worldview to argue for his object-level worldview (which includes many-worlds, AGI foom, importance of AI alignment, etc.). While one can get totalizing (meta-)worldviews from elsewhere (such as interdisciplinary academic studies), Yudkowsky's (meta-)worldview is relatively easy to pick up for analytically strong people (who tend towards STEM), and is effective ("correct" and "winning") relative to its simplicity. Yudkowsky's source material and his own writing do not form a closed meta-worldview, however. There are open problems as to how to formalize and solve real problems. Many of the more technical sort are described in MIRI's technical agent foundations agenda. These include questions about how to parse a physically realistic problem as a set of VNM lotteries ("decision theory"), how to use something like Bayesianism to handle uncertainty about mathematics ("logical uncertainty"), how to formalize realistic human values ("value loading"), and so on. Whether or not the closure of this meta-worldview leads to creation of friendly AGI, it would certainly have practical value. It would allow real world decisions to be made by first formalizing them within a computational framework (related to Yudkowsky's notion of "executable philosophy"), whether or not the computation itself is tractable (with its tractable version being friendly AGI). The practical strategy of MIRI as a technical research institute is to go meta on these open problems by recruiting analytically strong STEM people (especially mathematicians and computer scientists) to work on them, as part of the agent foundations agenda. 
I was one of these people. While we made some progress on these problems (such as with the Logical Induction paper), we didn't come close to completing the meta-worldview, let alone building friendly AGI. With the Agent Foundations team at MIRI eliminated, MIRI's agent foundations agenda is now unambiguously a failed project. I had called MIRI technical research as likely to fail around 2017 with the increase in internal secrecy, but at thi...
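
For readers who want the formal notions named in the excerpt (Bayesianism, Solomonoff induction, VNM expected utility) in equation form, here is a sketch of the standard textbook statements; it is not taken from the episode. Below, U denotes a universal machine and ℓ(p) the length of program p.

```latex
% Bayesian updating: posterior belief in hypothesis h after evidence e.
\[
  P(h \mid e) = \frac{P(e \mid h)\, P(h)}{\sum_{h'} P(e \mid h')\, P(h')}
\]
% Solomonoff-style prior: hypotheses weighted by the lengths of the
% programs that generate them.
\[
  P(h) \propto \sum_{p \,:\, U(p) = h} 2^{-\ell(p)}
\]
% VNM-rational choice: take the action whose lottery over outcomes o
% maximizes expected utility u.
\[
  a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, u(o)
\]
```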

The Nonlinear Library
LW - How I got 3.2 million Youtube views without making a single video by Closed Limelike Curves

Sep 3, 2024 · 2:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I got 3.2 million Youtube views without making a single video, published by Closed Limelike Curves on September 3, 2024 on LessWrong. Just over a month ago, I wrote this. The Wikipedia articles on the VNM theorem, Dutch Book arguments, money pump, Decision Theory, Rational Choice Theory, etc. are all a horrific mess. They're also completely disjoint, without any kind of Wikiproject or wikiboxes for tying together all the articles on rational choice. It's worth noting that Wikipedia is the place where you - yes, you! - can actually have some kind of impact on public discourse, education, or policy. There is just no other place you can get so many views with so little barrier to entry. A typical Wikipedia article will get more hits in a day than all of your LessWrong blog posts have gotten across your entire life, unless you're @Eliezer Yudkowsky. I'm not sure if we actually "failed" to raise the sanity waterline, like people sometimes say, or if we just didn't even try. Given even some very basic low-hanging fruit interventions like "write a couple good Wikipedia articles" still haven't been done 15 years later, I'm leaning towards the latter. edit me senpai EDIT: Discord to discuss editing here. An update on this. I've been working on Wikipedia articles for just a few months, and Veritasium just put a video out on Arrow's impossibility theorem - which is almost completely based on my Wikipedia article on Arrow's impossibility theorem! Lots of lines and the whole structure/outline of the video are taken almost verbatim from what I wrote. I think there's a pretty clear reason for this: I recently rewrote the entire article to make it easy-to-read and focus heavily on the most important points. Relatedly, if anyone else knows any educational YouTubers like CGPGrey, Veritasium, Kurzgesagt, or whatever - please let me know! I'd love a chance to talk with them about any of the fields I've done work teaching or explaining (including social or rational choice, economics, math, and statistics). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

limbo
Anna Plaskota: former head of the PROSTO label. How do you get a start in the music industry, and how do you break through as an artist?

Jul 17, 2024 · 57:12


Today I had the chance to talk with Ania. I invite you to listen to the conversation, and to tune in to new episodes every Wednesday, when I'll be hosting other guests! :) Anna Plaskota was a head at the PROSTO music label. She specializes in planning and executing release strategies, digital distribution, and copyright licensing. She has worked on around 100 albums, collaborating with artists such as Sokół, VNM, PRO8L3M, Kękę, and others. If you have only a vague picture of the music industry and how it works, Ania will certainly fill it in for you. How did she end up in the industry, and how do you get into it these days? What does the job look like? What are the challenges, and what about the work burned Ania out? Ania offers advice to artists trying to break through, and also to people who want to work in a similar role. On women, connections, and other realities of the music world.

Ania's Instagram: @aniawoo

Episode outline:
00:00 - How Ania found a job at the label, and what PROSTO was like back then
4:20 - Is a Spotify hit enough? How the label chooses artists to work with, and what they think of artists like Wersow and Wieniawa
11:42 - Companies that create entire tracks for money: has anyone really broken through that way?
16:10 - Ania's work: artists, albums, awards, concerts. How a music video comes together, and the occasional difficulties of working with artists
32:00 - What caused Ania's burnout, and her resignation from PROSTO
35:55 - How Ania felt in a senior position in the industry, and what her consulting for companies and artists involves
43:44 - What should someone do if they want to start working in a role like Ania's?
48:16 - How artists and the label react to an artist's failures. What role do gender and connections play in this industry?
53:39 - What advice does Ania have for an artist who is just starting out?

Pestka Shoots
AN ANALYST IN THE KITCHEN - @JedynakGotuje - #PP 74

Apr 4, 2024 · 66:22


Today's main course is a conversation with Dawid Jedynak, @JedynakGotuje. An analyst by training who devoted himself to his biggest passion: cooking. Tune in, because in this conversation Jedynak serves up nothing but meat! We talk about his path, his inspirations, his changed approach to TikTok, and how to put an analyst's skills to work when creating content for social media.

☕️ Buy me a coffee: BuyCoffee

Thanks to this episode's supporters:
Production house - Paradox Media
Yerba mate - Bear Mate

Recommended in the conversation:
jedynak.gotuje - Instagram link, TikTok link
XY Tattoo Studio - Instagram link
VNM - album cover - Spotify link
Conversation with Igor Leśniewski - Spotify link
Słoiki z Łodzi - website
The Food Emperor - YouTube link

Chapters:
(00:00:00) This episode in brief
(00:01:13) I'm going all in on tattoos
(00:05:12) I owe a lot to my parents
(00:09:57) What counts is creativity and having no shame
(00:14:38) I don't want to force anything
(00:26:37) Taste comes first
(00:29:55) Mishaps sell well
(00:33:45) I'm part of the TikTok epidemic
(00:36:50) The international scene inspires me
(00:41:00) The key to big reach on TikTok
(00:45:40) Kanał Zero
(00:48:47) I have to delete my old videos
(00:51:39) I've sacrificed a lot to create online
(00:57:45) TikTok distorts relationships

-----------
Pestka Podcast ® We talk about creativity. Photographer Maciej 'Magic' Pestka talks about creativity with creative people.
www: http://www.pestkashoots.com/
Insta: https://www.instagram.com/pestkashoots/
#Rozmowa #Podcast #Pestka #Marta Siniło #Stylistka #Business Woman #Fashion #Moda
---
Send in a voice message: https://podcasters.spotify.com/pod/show/pestkapodcast/message

313.fm
Planet Funk Ep 516.mp3

Feb 2, 2024 · 177:43


VNM$ & Ganja Girl Planet Funk Ep 516

The Nonlinear Library
LW - 'Theories of Values' and 'Theories of Agents': confusions, musings and desiderata by Mateusz Bagiński

Nov 16, 2023 · 37:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'Theories of Values' and 'Theories of Agents': confusions, musings and desiderata, published by Mateusz Bagiński on November 16, 2023 on LessWrong. Meta: Content signposts: we talk about limits to expected utility theory; what values are (and ways in which we're confused about what values are); the need for a "generative"/developmental logic of agents (and their values); types of constraints on the "shape" of agents; relationships to FEP/active inference; and (ir)rational/(il)legitimate value change. Context: we're basically just chatting about topics of mutual interests, so the conversation is relatively free-wheeling and includes a decent amount of "creative speculation". Epistemic status: involves a bunch of "creative speculation" that we don't think is true at face value and which may or may not turn out to be useful for making progress on deconfusing our understanding of the respective territory. Expected utility theory (stated in terms of the VNM axioms or something equivalent) thinks of rational agents as composed of two "parts", i.e., beliefs and preferences. Beliefs are expressed in terms of probabilities that are being updated in the process of learning (e.g., Bayesian updating). Preferences can be expressed as an ordering over alternative states of the world or outcomes or something similar. If we assume an agent's set of preferences to satisfy the four VNM axioms (or some equivalent desiderata), then those preferences can be expressed with some real-valued utility function u and the agent will behave as if they were maximizing that u. On this account, beliefs change in response to evidence, whereas values/preferences in most cases don't. Rational behavior comes down to (behaving as if one is) ~maximizing one's preference satisfaction/expected utility. Most changes to one's preferences are detrimental to their satisfaction, so rational agents should want to keep their preferences unchanged (i.e., utility function preservation is an instrumentally convergent goal). Thus, for a preference modification to be rational, it would have to result in higher expected utility than leaving the preferences unchanged. My impression is that the most often discussed setup where this is the case involves interactions between two or more agents. For example, if you and and some other agent have somewhat conflicting preferences, you may go on a compromise where each one of you makes them preferences somewhat more similar to the preferences of the other. This costs both of you a bit of (expected subjective) utility, but less than you would lose (in expectation) if you engaged in destructive conflict. Another scenario justifying modification of one's preferences is when you realize the world is different than you expected on your priors, such that you need to abandon the old ontology and/or readjust it. If your preferences were defined in terms of (or strongly entangled with) concepts from the previous ontology, then you will also need to refactor your preferences. You think that this is a confused way to think about rationality. For example, you see self-induced/voluntary value change as something that in some cases is legitimate/rational. I'd like to elicit some of your thoughts about value change in humans. What makes a specific case of value change (il)legitimate? How is that tied to the concepts of rationality, agency, etc? 
Once we're done with that, we can talk more generally about arguments for why the values of an agent/system should not be fixed. Sounds good? On a meta note: I've been using the words "preference" and "value" more or less interchangeably, without giving much thought to it. Do you view them as interchangeable or would you rather first make some conceptual/terminological clarification? Sounds great! (And I'm happy to use "preferences" and "values" interc...
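
As an illustration of the representation claim in the excerpt (preferences satisfying the VNM axioms behave as if maximizing some utility function u), here is a minimal Python sketch; it is not from the episode, and the outcomes, probabilities, and utility values are hypothetical. The point is only that the agent's ranking of lotteries depends on nothing but expected utility.

```python
# Minimal illustration of the VNM picture sketched above: an agent whose
# preferences satisfy the VNM axioms ranks lotteries by expected utility.
# All outcomes, probabilities, and utility values below are hypothetical.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

def prefers(lottery_a, lottery_b, u):
    """True if an agent with utility function u weakly prefers A to B."""
    return expected_utility(lottery_a, u) >= expected_utility(lottery_b, u)

# Hypothetical utility function over three outcomes.
u = {"bad": 0.0, "ok": 0.5, "great": 1.0}

# Two lotteries (probability distributions over outcomes).
safe = {"ok": 1.0}
gamble = {"bad": 0.5, "great": 0.5}

print(expected_utility(safe, u))    # 0.5
print(expected_utility(gamble, u))  # 0.5
print(prefers(safe, gamble, u))     # True: equal expected utility, so weak preference holds
```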

Pigeon Hour
#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can

Oct 17, 2023 · 97:43


* Listen on Spotify or Apple Podcasts* Be sure to check out and follow Holly's Substack and org Pause AI. Blurb and summary from ClongBlurbHolly and Aaron had a wide-ranging discussion touching on effective altruism, AI alignment, genetic conflict, wild animal welfare, and the importance of public advocacy in the AI safety space. Holly spoke about her background in evolutionary biology and how she became involved in effective altruism. She discussed her reservations around wild animal welfare and her perspective on the challenges of AI alignment. They talked about the value of public opinion polls, the psychology of AI researchers, and whether certain AI labs like OpenAI might be net positive actors. Holly argued for the strategic importance of public advocacy and pushing the Overton window within EA on AI safety issues.Detailed summary* Holly's background - PhD in evolutionary biology, got into EA through New Atheism and looking for community with positive values, did EA organizing at Harvard* Worked at Rethink Priorities on wild animal welfare but had reservations about imposing values on animals and whether we're at the right margin yet* Got inspired by FLI letter to focus more on AI safety advocacy and importance of public opinion* Discussed genetic conflict and challenges of alignment even with "closest" agents* Talked about the value of public opinion polls and influencing politicians* Discussed the psychology and motives of AI researchers* Disagreed a bit on whether certain labs like OpenAI might be net positive actors* Holly argued for importance of public advocacy in AI safety, thinks we have power to shift Overton window* Talked about the dynamics between different AI researchers and competition for status* Discussed how rationalists often dismiss advocacy and politics* Holly thinks advocacy is neglected and can push the Overton window even within EA* Also discussed Holly's evolutionary biology takes, memetic drive, gradient descent vs. natural selectionFull transcript (very imperfect)AARONYou're an AI pause, Advocate. Can you remind me of your shtick before that? Did you have an EA career or something?HOLLYYeah, before that I was an academic. I got into EA when I was doing my PhD in evolutionary biology, and I had been into New Atheism before that. I had done a lot of organizing for that in college. And while the enlightenment stuff and what I think is the truth about there not being a God was very important to me, but I didn't like the lack of positive values. Half the people there were sort of people like me who are looking for community after leaving their religion that they grew up in. And sometimes as many as half of the people there were just looking for a way for it to be okay for them to upset people and take away stuff that was important to them. And I didn't love that. I didn't love organizing a space for that. And when I got to my first year at Harvard, harvard Effective Altruism was advertising for its fellowship, which became the Elite Fellowship eventually. And I was like, wow, this is like, everything I want. And it has this positive organizing value around doing good. And so I was totally made for it. And pretty much immediately I did that fellowship, even though it was for undergrad. I did that fellowship, and I was immediately doing a lot of grad school organizing, and I did that for, like, six more years. 
And yeah, by the time I got to the end of grad school, I realized I was very sick in my fifth year, and I realized the stuff I kept doing was EA organizing, and I did not want to keep doing work. And that was pretty clear. I thought, oh, because I'm really into my academic area, I'll do that, but I'll also have a component of doing good. I took giving what we can in the middle of grad school, and I thought, I actually just enjoy doing this more, so why would I do anything else? Then after grad school, I started applying for EA jobs, and pretty soon I got a job at Rethink Priorities, and they suggested that I work on wild animal welfare. And I have to say, from the beginning, it was a little bit like I don't know, I'd always had very mixed feelings about wild animal welfare as a cause area. How much do they assume the audience knows about EA?AARONA lot, I guess. I think as of right now, it's a pretty hardcore dozen people. Also. Wait, what year is any of this approximately?HOLLYSo I graduated in 2020.AARONOkay.HOLLYYeah. And then I was like, really?AARONOkay, this is not extremely distant history. Sometimes people are like, oh, yeah, like the OG days, like four or something. I'm like, oh, my God.HOLLYOh, yeah, no, I wish I had been in these circles then, but no, it wasn't until like, 2014 that I really got inducted. Yeah, which now feels old because everybody's so young. But yeah, in 2020, I finished my PhD, and I got this awesome remote job at Rethink Priorities during the Pandemic, which was great, but I was working on wild animal welfare, which I'd always had some. So wild animal welfare, just for anyone who's not familiar, is like looking at the state of the natural world and seeing if there's a way that usually the hedonic so, like, feeling pleasure, not pain sort of welfare of animals can be maximized. So that's in contrast to a lot of other ways of looking at the natural world, like conservation, which are more about preserving a state of the world the way preserving, maybe ecosystem balance, something like that. Preserving species diversity. The priority with wild animal welfare is the effect of welfare, like how it feels to be the animals. So it is very understudied, but I had a lot of reservations about it because I'm nervous about maximizing our values too hard onto animals or imposing them on other species.AARONOkay, that's interesting, just because we're so far away from the margin of I'm like a very pro wild animal animal welfare pilled person.HOLLYI'm definitely pro in theory.AARONHow many other people it's like you and formerly you and six other people or whatever seems like we're quite far away from the margin at which we're over optimizing in terms of giving heroin to all the sheep or I don't know, the bugs and stuff.HOLLYBut it's true the field is moving in more my direction and I think it's just because they're hiring more biologists and we tend to think this way or have more of this perspective. But I'm a big fan of Brian domestics work. But stuff like finding out which species have the most capacity for welfare I think is already sort of the wrong scale. I think a lot will just depend on how much. What are the conditions for that species?AARONYeah, no, there's like seven from the.HOLLYCoarseness and the abstraction, but also there's a lot of you don't want anybody to actually do stuff like that and it would be more possible to do the more simple sounding stuff. My work there just was consisted of being a huge downer. I respect that. 
I did do some work that I'm proud of. I have a whole sequence on EA forum about how we could reduce the use of rodenticide, which I think was the single most promising intervention that we came up with in the time that I was there. I mean, I didn't come up with it, but that we narrowed down. And even that just doesn't affect that many animals directly. It's really more about the impact is from what you think you'll get with moral circle expansion or setting precedents for the treatment of non human animals or wild animals, or semi wild animals, maybe like being able to be expanded into wild animals. And so it all felt not quite up to EA standards of impact. And I felt kind of uncomfortable trying to make this thing happen in EA when I wasn't sure that my tentative conclusion on wild animal welfare, after working on it and thinking about it a lot for three years, was that we're sort of waiting for transformative technology that's not here yet in order to be able to do the kinds of interventions that we want. And there are going to be other issues with the transformative technology that we have to deal with first.AARONYeah, no, I've been thinking not that seriously or in any formal way, just like once in a while I just have a thought like oh, I wonder how the field of, like, I guess wild animal sorry, not wild animal. Just like animal welfare in general and including wild animal welfare might make use of AI above and beyond. I feel like there's like a simple take which is probably mostly true, which is like, oh, I mean the phrase that everybody loves to say is make AI go well or whatever that but that's basically true. Probably you make aligned AI. I know that's like a very oversimplification and then you can have a bunch of wealth or whatever to do whatever you want. I feel like that's kind of like the standard line, but do you have any takes on, I don't know, maybe in the next couple of years or anything more specifically beyond just general purpose AI alignment, for lack of a better term, how animal welfare might put to use transformative AI.HOLLYMy last work at Rethink Priorities was like looking a sort of zoomed out look at the field and where it should go. And so we're apparently going to do a public version, but I don't know if that's going to happen. It's been a while now since I was expecting to get a call about it. But yeah, I'm trying to think of what can I scrape from that?AARONAs much as you can, don't reveal any classified information. But what was the general thing that this was about?HOLLYThere are things that I think so I sort of broke it down into a couple of categories. There's like things that we could do in a world where we don't get AGI for a long time, but we get just transformative AI. Short of that, it's just able to do a lot of parallel tasks. And I think we could do a lot we could get a lot of what we want for wild animals by doing a ton of surveillance and having the ability to make incredibly precise changes to the ecosystem. Having surveillance so we know when something is like, and the capacity to do really intense simulation of the ecosystem and know what's going to happen as a result of little things. We could do that all without AGI. You could just do that with just a lot of computational power. I think our ability to simulate the environment right now is not the best, but it's not because it's impossible. It's just like we just need a lot more observations and a lot more ability to simulate a comparison is meteorology. 
Meteorology used to be much more of an art, but it became more of a science once they started just literally taking for every block of air and they're getting smaller and smaller, the blocks. They just do Bernoulli's Law on it and figure out what's going to happen in that block. And then you just sort of add it all together and you get actually pretty good.AARONDo you know how big the blocks are?HOLLYThey get smaller all the time. That's the resolution increase, but I don't know how big the blocks are okay right now. And shockingly, that just works. That gives you a lot of the picture of what's going to happen with weather. And I think that modeling ecosystem dynamics is very similar to weather. You could say more players than ecosystems, and I think we could, with enough surveillance, get a lot better at monitoring the ecosystem and then actually have more of a chance of implementing the kinds of sweeping interventions we want. But the price would be just like never ending surveillance and having to be the stewards of the environment if we weren't automating. Depending on how much you want to automate and depending on how much you can automate without AGI or without handing it over to another intelligence.AARONYeah, I've heard this. Maybe I haven't thought enough. And for some reason, I'm just, like, intuitively. I feel like I'm more skeptical of this kind of thing relative to the actual. There's a lot of things that I feel like a person might be skeptical about superhuman AI. And I'm less skeptical of that or less skeptical of things that sound as weird as this. Maybe because it's not. One thing I'm just concerned about is I feel like there's a larger scale I can imagine, just like the choice of how much, like, ecosystem is like yeah, how much ecosystem is available for wild animals is like a pretty macro level choice that might be not at all deterministic. So you could imagine spreading or terraforming other planets and things like that, or basically continuing to remove the amount of available ecosystem and also at a much more practical level, clean meat development. I have no idea what the technical bottlenecks on that are right now, but seems kind of possible that I don't know, AI can help it in some capacity.HOLLYOh, I thought you're going to say that it would increase the amount of space available for wild animals. Is this like a big controversy within, I don't know, this part of the EA animal movement? If you advocate diet change and if you get people to be vegetarians, does that just free up more land for wild animals to suffer on? I thought this was like, guys, we just will never do anything if we don't choose sort of like a zone of influence and accomplish something there. It seemed like this could go on forever. It was like, literally, I rethink actually. A lot of discussions would end in like, okay, so this seems like really good for all of our target populations, but what about wild animals? I could just reverse everything. I don't know. The thoughts I came to on that were that it is worthwhile to try to figure out what are all of the actual direct effects, but I don't think we should let that guide our decision making. Only you have to have some kind of theory of change, of what is the direct effect going to lead to? And I just think that it's so illegible what you're trying to do. If you're, like, you should eat this kind of fish to save animals. It doesn't lead society to adopt, to understand and adopt your values. 
It's so predicated on a moment in time that might be convenient. Maybe I'm not looking hard enough at that problem, but the conclusion I ended up coming to was just like, look, I just think we have to have some idea of not just the direct impacts, but something about the indirect impacts and what's likely to facilitate other direct impacts that we want in the future.AARONYeah. I also share your I don't know. I'm not sure if we share the same or I also feel conflicted about this kind of thing. Yeah. And I don't know, at the very least, I have a very high bar for saying, actually the worst of factory farming is like, we should just like, yeah, we should be okay with that, because some particular model says that at this moment in time, it has some net positive effect on animal welfare.HOLLYWhat morality is that really compatible with? I mean, I understand our morality, but maybe but pretty much anyone else who hears that conclusion is going to think that that means that the suffering doesn't matter or something.AARONYeah, I don't know. I think maybe more than you, I'm willing to bite the bullet if somebody really could convince me that, yeah, chicken farming is actually just, in fact, good, even though it's counterintuitive, I'll be like, all right, fine.HOLLYSurely there are other ways of occupying.AARONYeah.HOLLYSame with sometimes I would get from very classical wild animal suffering people, like, comments on my rodenticide work saying, like, well, what if it's good to have more rats? I don't know. There are surely other vehicles for utility other than ones that humans are bent on destroying.AARONYeah, it's kind of neither here nor there, but I don't actually know if this is causally important, but at least psychologically. I remember seeing a mouse in a glue trap was very had an impact on me from maybe turning me, like, animal welfare pills or something. That's like, neither here nor there. It's like a random anecdote, but yeah, seems bad. All right, what came after rethink for you?HOLLYYeah. Well, after the publication of the FLI Letter and Eliezer's article in Time, I was super inspired by pause. A number of emotional changes happened to me about AI safety. Nothing intellectual changed, but just I'd always been confused at and kind of taken it as a sign that people weren't really serious about AI risk when they would say things like, I don't know, the only option is alignment. The only option is for us to do cool, nerd stuff that we love doing nothing else would. I bought the arguments, but I just wasn't there emotionally. And seeing Eliezer advocate political change because he wants to save everyone's lives and he thinks that's something that we can do. Just kind of I'm sure I didn't want to face it before because it was upsetting. Not that I haven't faced a lot of upsetting and depressing things like I worked in wild animal welfare, for God's sake, but there was something that didn't quite add up for me, or I hadn't quite grocked about AI safety until seeing Eliezer really show that his concern is about everyone dying. And he's consistent with that. He's not caught on only one way of doing it, and it just kind of got in my head and I kept wanting to talk about it at work and it sort of became clear like they weren't going to pursue that sort of intervention. But I kept thinking of all these parallels between animal advocacy stuff that I knew and what could be done in AI safety. 
And these polls kept coming out showing that there was really high support for Paws and I just thought, this is such a huge opportunity, I really would love to help out. Originally I was looking around for who was going to be leading campaigns that I could volunteer in, and then eventually I thought, it just doesn't seem like somebody else is going to do this in the Bay Area. So I just ended up quitting rethink and being an independent organizer. And that has been really I mean, honestly, it's like a tough subject. It's like a lot to deal with, but honestly, compared to wild animal welfare, it's not that bad. And I think I'm pretty used to dealing with tough and depressing low tractability causes, but I actually think this is really tractable. I've been shocked how quickly things have moved and I sort of had this sense that, okay, people are reluctant in EA and AI safety in particular, they're not used to advocacy. They kind of vaguely think that that's bad politics is a mind killer and it's a little bit of a threat to the stuff they really love doing. Maybe that's not going to be so ascendant anymore and it's just stuff they're not familiar with. But I have the feeling that if somebody just keeps making this case that people will take to it, that I could push the Oberson window with NEA and that's gone really well.AARONYeah.HOLLYAnd then of course, the public is just like pretty down. It's great.AARONYeah. I feel like it's kind of weird because being in DC and I've always been, I feel like I actually used to be more into politics, to be clear. I understand or correct me if I'm wrong, but advocacy doesn't just mean in the political system or two politicians or whatever, but I assume that's like a part of what you're thinking about or not really.HOLLYYeah. Early on was considering working on more political process type advocacy and I think that's really important. I totally would have done it. I just thought that it was more neglected in our community to do advocacy to the public and a lot of people had entanglements that prevented them from doing so. They work sort of with AI labs or it's important to their work that they not declare against AI labs or something like that or be perceived that way. And so they didn't want to do public advocacy that could threaten what else they're doing. But I didn't have anything like that. I've been around for a long time in EA and I've been keeping up on AI safety, but I've never really worked. That's not true. I did a PiBBs fellowship, but.AARONI've.HOLLYNever worked for anybody in like I was just more free than a lot of other people to do the public messaging and so I kind of felt that I should. Yeah, I'm also more willing to get into conflict than other EA's and so that seems valuable, no?AARONYeah, I respect that. Respect that a lot. Yeah. So like one thing I feel like I've seen a lot of people on Twitter, for example. Well, not for example. That's really just it, I guess, talking about polls that come out saying like, oh yeah, the public is super enthusiastic about X, Y or Z, I feel like these are almost meaningless and maybe you can convince me otherwise. It's not exactly to be clear, I'm not saying that. I guess it could always be worse, right? All things considered, like a poll showing X thing is being supported is better than the opposite result, but you can really get people to say anything. 
Maybe I'm just wondering about the degree to which the public how do you imagine the public and I'm doing air quotes to playing into policies either of, I guess, industry actors or government actors?HOLLYWell, this is something actually that I also felt that a lot of EA's were unfamiliar with. But it does matter to our representatives, like what the constituents think it matters a mean if you talk to somebody who's ever interned in a congressperson's office, one person calling and writing letters for something can have actually depending on how contested a policy is, can have a largeish impact. My ex husband was an intern for Jim Cooper and they had this whole system for scoring when calls came in versus letters. Was it a handwritten letter, a typed letter? All of those things went into how many points it got and that was something they really cared about. Politicians do pay attention to opinion polls and they pay attention to what their vocal constituents want and they pay attention to not going against what is the norm opinion. Even if nobody in particular is pushing them on it or seems to feel strongly about it. They really are trying to calibrate themselves to what is the norm. So those are always also sometimes politicians just get directly convinced by arguments of what a policy should be. So yeah, public opinion is, I think, underappreciated by ya's because it doesn't feel like mechanistic. They're looking more for what's this weird policy hack that's going to solve what's? This super clever policy that's going to solve things rather than just like what's acceptable discourse, like how far out of his comfort zone does this politician have to go to advocate for this thing? How unpopular is it going to be to say stuff that's against this thing that now has a lot of public support?AARONYeah, I guess mainly I'm like I guess I'm also I definitely could be wrong with this, but I would expect that a lot of the yeah, like for like when politicians like, get or congresspeople like, get letters and emails or whatever on a particular especially when it's relevant to a particular bill. And it's like, okay, this bill has already been filtered for the fact that it's going to get some yes votes and some no votes and it's close to or something like that. Hearing from an interested constituency is really, I don't know, I guess interesting evidence. On the other hand, I don't know, you can kind of just get Americans to say a lot of different things that I think are basically not extremely unlikely to be enacted into laws. You know what I mean? I don't know. You can just look at opinion. Sorry. No great example comes to mind right now. But I don't know, if you ask the public, should we do more safety research into, I don't know, anything. If it sounds good, then people will say yes, or am I mistaken about this?HOLLYI mean, on these polls, usually they ask the other way around as well. Do you think AI is really promising for its benefits and should be accelerated? They answer consistently. It's not just like, well now that sounds positive. Okay. I mean, a well done poll will correct for these things. Yeah. I've encountered a lot of skepticism about the polls. Most of the polls on this have been done by YouGov, which is pretty reputable. And then the ones that were replicated by rethink priorities, they found very consistent results and I very much trust Rethink priorities on polls. Yeah. 
I've had people say, well, these framings are I don't know, they object and wonder if it's like getting at the person's true beliefs. And I kind of think like, I don't know, basically this is like the kind of advocacy message that I would give and people are really receptive to it. So to me that's really promising. Whether or not if you educated them a lot more about the topic, they would think the same is I don't think the question but that's sometimes an objection that I get. Yeah, I think they're indicative. And then I also think politicians just care directly about these things. If they're able to cite that most of the public agrees with this policy, that sort of gives them a lot of what they want, regardless of whether there's some qualification to does the public really think this or are they thinking hard enough about it? And then polls are always newsworthy. Weirdly. Just any poll can be a news story and journalists love them and so it's a great chance to get exposure for the whatever thing. And politicians do care what's in the news. Actually, I think we just have more influence over the political process than EA's and less wrongers tend to believe it's true. I think a lot of people got burned in AI safety, like in the previous 20 years because it would be dismissed. It just wasn't in the overton window. But I think we have a lot of power now. Weirdly. People care what effective altruists think. People see us as having real expertise. The AI safety community does know the most about this. It's pretty wild now that's being recognized publicly and journalists and the people who influence politicians, not directly the people, but the Fourth Estate type, people pay attention to this and they influence policy. And there's many levels of I wrote if people want a more detailed explanation of this, but still high level and accessible, I hope I wrote a thing on EA forum called The Case for AI Safety Advocacy. And that kind of goes over this concept of outside versus inside game. So inside game is like working within a system to change it. Outside game is like working outside the system to put pressure on that system to change it. And I think there's many small versions of this. I think that it's helpful within EA and AI safety to be pushing the overton window of what I think that people have a wrong understanding of how hard it is to communicate this topic and how hard it is to influence governments. I want it to be more acceptable. I want it to feel more possible in EA and AI safety to go this route. And then there's the public public level of trying to make them more familiar with the issue, frame it in the way that I want, which is know, with Sam Altman's tour, the issue kind of got framed as like, well, AI is going to get built, but how are we going to do it safely? And then I would like to take that a step back and be like, should AI be built or should AGI be just if we tried, we could just not do that, or we could at least reduce the speed. And so, yeah, I want people to be exposed to that frame. I want people to not be taken in by other frames that don't include the full gamut of options. I think that's very possible. And then there's a lot of this is more of the classic thing that's been going on in AI safety for the last ten years is trying to influence AI development to be more safety conscious. And that's like another kind of dynamic. There, like trying to change sort of the general flavor, like, what's acceptable? Do we have to care about safety? What is safety? 
That's also kind of a window pushing exercise.AARONYeah. Cool. Luckily, okay, this is not actually directly responding to anything you just said, which is luck. So I pulled up this post. So I should have read that. Luckily, I did read the case for slowing down. It was like some other popular post as part of the, like, governance fundamentals series. I think this is by somebody, Zach wait, what was it called? Wait.HOLLYIs it by Zach or.AARONKatya, I think yeah, let's think about slowing down AI. That one. So that is fresh in my mind, but yours is not yet. So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.HOLLYWell, right now I'm hopeful about the UK AI summit. Pause AI and I have planned a multi city protest on the 21 October to encourage the UK AI Safety Summit to focus on safety first and to have as a topic arranging a pause or that of negotiation. There's a lot of a little bit upsetting advertising for that thing that's like, we need to keep up capabilities too. And I just think that's really a secondary objective. And that's how I wanted to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary General has been talking about this and there have been meetings about this. It's happened so much faster at the beginning of this year. Nobody thought we could talk about nobody was thinking we'd be talking about this as a mainstream topic. And then actually governments have been very receptive anyway. So right now I'm focused on other than just influencing opinion, the targets I'm focused on, or things like encouraging these international like, I have a protest on Friday, my first protest that I'm leading and kind of nervous that's against Meta. It's at the Meta building in San Francisco about their sharing of model weights. They call it open source. It's like not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret and it conveys pretty clearly what we want very quickly. And you don't have a lot of bandwidth and advocacy. You write a lot of materials for a protest, but mostly what people see is the title.AARONThat's interesting because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, pause AI is simpler, but in some sense it's not nearly as obvious. At least maybe I'm more of a tech brain person or whatever. But why that is good, as opposed to don't give extremely powerful thing to the worst people in the world. That's like a longer everyone.HOLLYMaybe I'm just weird. I've gotten the feedback from open source ML people is the number one thing is like, it's too late, there's already super powerful models. There's nothing you can do to stop us, which sounds so villainous, I don't know if that's what they mean. Well, actually the number one message is you're stupid, you're not an ML engineer. Which like, okay, number two is like, it's too late, there's nothing you can do. There's all of these other and Meta is not even the most powerful generator of models that it share of open source models. I was like, okay, fine. And I don't know, I don't think that protesting too much is really the best in these situations. I just mostly kind of let that lie. I could give my theory of change on this and why I'm focusing on Meta. 
Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco near where yeah, Meta is the biggest company that is doing this and I think there should be a norm against model weight sharing. I was hoping it would be something that other employees of other labs would be comfortable attending and that is a policy that is not shared across the labs. Obviously the biggest labs don't do it. So OpenAI is called OpenAI but very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it more clear than pause AI. Does that anybody's welcome something? I thought a one off issue like this that a lot of people could agree and form a coalition around would be good. A lot of people think that this is like a lot of the open source ML people think know this is like a secret. What I'm saying is secretly an argument for tyranny. I just want centralization of power. I just think that there are elites that are better qualified to run everything. It was even suggested I didn't mention China. It even suggested that I was racist because I didn't think that foreign people could make better AIS than Meta.AARONI'm grimacing here. The intellectual disagreeableness, if that's an appropriate term or something like that. Good on you for standing up to some pretty bad arguments.HOLLYYeah, it's not like that worth it. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt. Right. For instance, I'm kind of like sure I'm on list somewhere because of the forums I was on just because I was interested and it is something that serves me well with my adversaries. I've enjoyed some conversations with people where I kind of like because my position on all this is that look, I need to be convinced and the public needs to be convinced that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being like, can you explain like I'm five. I still don't get it. How does this work?AARONYeah, no, I was thinking actually not long ago about open source. Like the phrase has such a positive connotation and in a lot of contexts it really is good. I don't know. I'm glad that random tech I don't know, things from 2004 or whatever, like the reddit source code is like all right, seems cool that it's open source. I don't actually know if that was how that right. But yeah, I feel like maybe even just breaking down what the positive connotation comes from and why it's in people's self. This is really what I was thinking about, is like, why is it in people's self interest to open source things that they made and that might break apart the allure or sort of ethical halo that it has around it? And I was thinking it probably has something to do with, oh, this is like how if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy. Definitely can be hired in the future. 
And if you're not wealthy yet I don't mean to put things in just materialist terms, but basically it could easily be just like in a yeah, I think I'll probably take that bit out because I didn't mean to put it in strictly like monetary terms, but basically it just seems like pretty plausibly in an arbitrary tech person's self interest, broadly construed to, in fact, open source their thing, which is totally fine and normal.HOLLYI think that's like 99 it's like a way of showing magnanimity showing, but.AARONI don't make this sound so like, I think 99.9% of human behavior is like this. I'm not saying it's like, oh, it's some secret, terrible self interested thing, but just making it more mechanistic. Okay, it's like it's like a status thing. It's like an advertising thing. It's like, okay, you're not really in need of direct economic rewards, or sort of makes sense to play the long game in some sense, and this is totally normal and fine, but at the end of the day, there's reasons why it makes sense, why it's in people's self interest to open source.HOLLYLiterally, the culture of open source has been able to bully people into, like, oh, it's immoral to keep it for yourself. You have to release those. So it's just, like, set the norms in a lot of ways, I'm not the bully. Sounds bad, but I mean, it's just like there is a lot of pressure. It looks bad if something is closed source.AARONYeah, it's kind of weird that Meta I don't know, does Meta really think it's in their I don't know. Most economic take on this would be like, oh, they somehow think it's in their shareholders interest to open source.HOLLYThere are a lot of speculations on why they're doing this. One is that? Yeah, their models aren't as good as the top labs, but if it's open source, then open source quote, unquote then people will integrate it llama Two into their apps. Or People Will Use It And Become I don't know, it's a little weird because I don't know why using llama Two commits you to using llama Three or something, but it just ways for their models to get in in places where if you just had to pay for their models too, people would go for better ones. That's one thing. Another is, yeah, I guess these are too speculative. I don't want to be seen repeating them since I'm about to do this purchase. But there's speculation that it's in best interests in various ways to do this. I think it's possible also that just like so what happened with the release of Llama One is they were going to allow approved people to download the weights, but then within four days somebody had leaked Llama One on four chan and then they just were like, well, whatever, we'll just release the weights. And then they released Llama Two with the weights from the beginning. And it's not like 100% clear that they intended to do full open source or what they call Open source. And I keep saying it's not open source because this is like a little bit of a tricky point to make. So I'm not emphasizing it too much. So they say that they're open source, but they're not. The algorithms are not open source. There are open source ML models that have everything open sourced and I don't think that that's good. I think that's worse. So I don't want to criticize them for that. But they're saying it's open source because there's all this goodwill associated with open source. But actually what they're doing is releasing the product for free or like trade secrets even you could say like things that should be trade secrets. 
And yeah, they're telling people how to make it themselves. So it's like a little bit of a they're intentionally using this label that has a lot of positive connotations but probably according to Open Source Initiative, which makes the open Source license, it should be called something else or there should just be like a new category for LLMs being but I don't want things to be more open. It could easily sound like a rebuke that it should be more open to make that point. But I also don't want to call it Open source because I think Open source software should probably does deserve a lot of its positive connotation, but they're not releasing the part, that the software part because that would cut into their business. I think it would be much worse. I think they shouldn't do it. But I also am not clear on this because the Open Source ML critics say that everyone does have access to the same data set as Llama Two. But I don't know. Llama Two had 7 billion tokens and that's more than GPT Four. And I don't understand all of the details here. It's possible that the tokenization process was different or something and that's why there were more. But Meta didn't say what was in the longitude data set and usually there's some description given of what's in the data set that led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be. It's not just like the common crawl backup of the Internet. Everybody's basing their training on that and then maybe some works of literature they're not supposed to. There's like a data set there that is in question, but metas is bigger than bigger than I think well, sorry, I don't have a list in front of me. I'm not going to get stuff wrong, but it's bigger than kind of similar models and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set. But yeah, the ML people would have or the open source ML people that I've been talking to would have believed that anybody who's decent can just access all of the training sets that they've all used.AARONAside, I tried to download in case I'm guessing, I don't know, it depends how many people listen to this. But in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I don't know. I knew a little bit of R, I think. I feel like I caught on the very last boat where I could know just barely enough programming to try to learn more, I guess. Coming out of college, I don't know, a couple of months ago, I tried to do the thing where you download Llama too, but I tried it all and now I just have like it didn't work. I have like a bunch of empty folders and I forget got some error message or whatever. Then I tried to train my own tried to train my own model on my MacBook. It just printed. That's like the only thing that a language model would do because that was like the most common token in the training set. So anyway, I'm just like, sorry, this is not important whatsoever.HOLLYYeah, I feel like torn about this because I used to be a genomicist and I used to do computational biology and it was not machine learning, but I used a highly parallel GPU cluster. And so I know some stuff about it and part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. 
It's always been people who are interested in from the beginning, it was people who are interested in singularity and then realized there was this problem. And so it's always been like people really interested in tech and wanting to be close to it. And I think we've been really influenced by our direction, has been really influenced by wanting to be where the action is with AI development. And I don't know that that was right.AARONNot personal, but I guess individual level I'm not super worried about people like you and me losing the plot by learning more about ML on their personal.HOLLYYou know what I mean? But it does just feel sort of like I guess, yeah, this is maybe more of like a confession than, like a point. But it does feel a little bit like it's hard for me to enjoy in good conscience, like, the cool stuff.AARONOkay. Yeah.HOLLYI just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech because this is kind of their whole thing. And what would they do if we weren't working toward AI? This is a big fear that people express to me with they don't say it in so many words usually, but they say things like, well, I don't want AI to never get built about a pause. Which, by the way, just to clear up, my assumption is that a pause would be unless society ends for some other reason, that a pause would eventually be lifted. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so luddite in their insides that we would just never pick it up again. Or something like that. And, yeah, there's some identity stuff that's been expressed. Again, not in so many words to me about who will we be if we're just sort of like activists instead of working on.AARONMaybe one thing that we might actually disagree on. It's kind of important is whether so I think we both agree that Aipause is better than the status quo, at least broadly, whatever. I know that can mean different things, but yeah, maybe I'm not super convinced, actually, that if I could just, like what am I trying to say? Maybe at least right now, if I could just imagine the world where open eye and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. And so I don't know. I don't think that actually that's not an actual possibility. But, like, maybe, like, we have a different idea about, like, the degree to which, like, a problem is just, like, a million different not even a million, but, say, like, a thousand different actors, like, having increasingly powerful models versus, like, the actual, like like the actual, like, state of the art right now, being plausibly near a dangerous threshold or something. Does this make any sense to you?HOLLYBoth those things are yeah, and this is one thing I really like about the pause position is that unlike a lot of proposals that try to allow for alignment, it's not really close to a bad choice. It's just more safe. I mean, it might be foregoing some value if there is a way to get an aligned AI faster. But, yeah, I like the pause position because it's kind of robust to this. I can't claim to know more about alignment than OpenAI or anthropic staff. I think they know much more about it. 
But I have fundamental doubts about the concept of alignment that make me think I'm concerned about even if things go right, like, what perverse consequences go nominally right, like, what perverse consequences could follow from that. I have, I don't know, like a theory of psychology that's, like, not super compatible with alignment. Like, I think, like yeah, like humans in living in society together are aligned with each other, but the society is a big part of that. The people you're closest to are also my background in evolutionary biology has a lot to do with genetic conflict.AARONWhat is that?HOLLYGenetic conflict is so interesting. Okay, this is like the most fascinating topic in biology, but it's like, essentially that in a sexual species, you're related to your close family, you're related to your ken, but you're not the same as them. You have different interests. And mothers and fathers of the same children have largely overlapping interests, but they have slightly different interests in what happens with those children. The payoff to mom is different than the payoff to dad per child. One of the classic genetic conflict arenas and one that my advisor worked on was my advisor was David Haig, was pregnancy. So mom and dad both want an offspring that's healthy. But mom is thinking about all of her offspring into the future. When she thinks about how much.AARONWhen.HOLLYMom is giving resources to one baby, that is in some sense depleting her ability to have future children. But for dad, unless the species is.AARONPerfect, might be another father in the future.HOLLYYeah, it's in his interest to take a little more. And it's really interesting. Like the tissues that the placenta is an androgenetic tissue. This is all kind of complicated. I'm trying to gloss over some details, but it's like guided more by genes that are active in when they come from the father, which there's this thing called genomic imprinting that first, and then there's this back and forth. There's like this evolution between it's going to serve alleles that came from dad imprinted, from dad to ask for more nutrients, even if that's not good for the mother and not what the mother wants. So the mother's going to respond. And you can see sometimes alleles are pretty mismatched and you get like, mom's alleles want a pretty big baby and a small placenta. So sometimes you'll see that and then dad's alleles want a big placenta and like, a smaller baby. These are so cool, but they're so hellishly complicated to talk about because it involves a bunch of genetic concepts that nobody talks about for any other reason.AARONI'm happy to talk about that. Maybe part of that dips below or into the weeds threshold, which I've kind of lost it, but I'm super interested in this stuff.HOLLYYeah, anyway, so the basic idea is just that even the people that you're closest with and cooperate with the most, they tend to be clearly this is predicated on our genetic system. There's other and even though ML sort of evolves similarly to natural selection through gradient descent, it doesn't have the same there's no recombination, there's not genes, so there's a lot of dis analogies there. But the idea that being aligned to our psychology would just be like one thing. Our psychology is pretty conditional. I would agree that it could be one thing if we had a VNM utility function and you could give it to AGI, I would think, yes, that captures it. 
But even then, that utility function, it covers when you're in conflict with someone, it covers different scenarios. And so I just am like not when people say alignment. I think what they're imagining is like an omniscient. God, who knows what would be best? And that is different than what I think could be meant by just aligning values.AARONNo, I broadly very much agree, although I do think at least this is my perception, is that based on the right 95 to 2010 Miri corpus or whatever, alignment was like alignment meant something that was kind of not actually possible in the way that you're saying. But now that we have it seems like actually humans have been able to get ML models to understand basically human language pretty shockingly. Well, and so actually, just the concern about maybe I'm sort of losing my train of thought a little bit, but I guess maybe alignment and misalignment aren't as binary as they were initially foreseen to be or something. You can still get a language model, for example, that tries to well, I guess there's different types of misleading but be deceptive or tamper with its reward function or whatever. Or you can get one that's sort of like earnestly trying to do the thing that its user wants. And that's not an incoherent concept anymore.HOLLYNo, it's not. Yeah, so yes, there is like, I guess the point of bringing up the VNM utility function was that there was sort of in the past a way that you could mathematically I don't know, of course utility functions are still real, but that's not what we're thinking anymore. We're thinking more like training and getting the gist of what and then getting corrections when you're not doing the right thing according to our values. But yeah, sorry. So the last piece I should have said originally was that I think with humans we're already substantially unaligned, but a lot of how we work together is that we have roughly similar capabilities. And if the idea of making AGI is to have much greater capabilities than we have, that's the whole point. I just think when you scale up like that, the divisions in your psyche or are just going to be magnified as well. And this is like an informal view that I've been developing for a long time, but just that it's actually the low capabilities that allows alignment or similar capabilities that makes alignment possible. And then there are, of course, mathematical structures that could be aligned at different capabilities. So I guess I have more hope if you could find the utility function that would describe this. But if it's just a matter of acting in distribution, when you increase your capabilities, you're going to go out of distribution or you're going to go in different contexts, and then the magnitude of mismatch is going to be huge. I wish I had a more formal way of describing this, but that's like my fundamental skepticism right now that makes me just not want anyone to build it. I think that you could have very sophisticated ideas about alignment, but then still just with not when you increase capabilities enough, any little chink is going to be magnified and it could be yeah.AARONSeems largely right, I guess. You clearly have a better mechanistic understanding of ML.HOLLYI don't know. My PiBBs project was to compare natural selection and gradient descent and then compare gradient hacking to miotic drive, which is the most analogous biological this is a very cool thing, too. Meatic drive. So Meiosis, I'll start with that for everyone.AARONThat's one of the cell things.HOLLYYes. Right. 
So Mitosis is the one where cells just divide in your body to make more skin. But Meiosis is the special one where you go through two divisions to make gametes. So you go from like we normally have two sets of chromosomes in each cell, but the gametes, they recombine between the chromosomes. You get different combinations with new chromosomes and then they divide again to bring them down to one copy each. And then like that, those are your gametes. And the gametes eggs come together with sperm to make a zygote and the cycle goes on. But during Meiosis, the point of it is to I mean, I'm going to just assert some things that are not universally accepted, but I think this is by far the best explanation. But the point of it is to take this like, you have this huge collection of genes that might have individually different interests, and you recombine them so that they don't know which genes they're going to be with in the next generation. They know which genes they're going to be with, but which allele of those genes. So I'm going to maybe simplify some terminology because otherwise, what's to stop a bunch of genes from getting together and saying, like, hey, if we just hack the Meiosis system or like the division system to get into the gametes, we can get into the gametes at a higher rate than 50%. And it doesn't matter. We don't have to contribute to making this body. We can just work on that.AARONWhat is to stop that?HOLLYYeah, well, Meiosis is to stop that. Meiosis is like a government system for the genes. It makes it so that they can't plan to be with a little cabal in the next generation because they have some chance of getting separated. And so their best chance is to just focus on making a good organism. But you do see lots of examples in nature of where that cooperation is breaking down. So some group of genes has found an exploit and it is fucking up the species. Species do go extinct because of this. It's hard to witness this happening. But there are several species. There's this species of cedar that has a form of this which is, I think, maternal genome. It's maternal genome elimination. So when the zygote comes together, the maternal chromosomes are just thrown away and it's like terrible because that affects the way that the thing works and grows, that it's put them in a death spiral and they're probably going to be extinct. And they're trees, so they live a long time, but they're probably going to be extinct in the next century. There's lots of ways to hack meiosis to get temporary benefit for genes. This, by the way, I just think is like nail in the coffin. Obviously, gene centered view is the best evolutionarily. What is the best the gene centered view of evolution.AARONAs opposed to sort of standard, I guess, high school college thing would just be like organisms.HOLLYYeah, would be individuals. Not that there's not an accurate way to talk in terms of individuals or even in terms of groups, but to me, conceptually.AARONThey'Re all legit in some sense. Yeah, you could talk about any of them. Did anybody take like a quirk level? Probably not. That whatever comes below the level of a gene, like an individual.HOLLYWell, there is argument about what is a gene because there's multiple concepts of genes. You could look at what's the part that makes a protein or you can look at what is the unit that tends to stay together in recombination or something like over time.AARONI'm sorry, I feel like I cut you off. It's something interesting. 
There was meiosis. HOLLY: Meiotic drive is like the process of hacking meiosis so that a handful of genes can be more represented in the next generation. So otherwise the only way to get more represented in the next generation is to just make a better organism, like to be naturally selected. But you can just cheat and be like, well, if I'm in 90% of the sperm, I will be in the next generation. And essentially meiosis has to work for natural selection to work in large organisms with a large genome, and then, yeah, in gradient descent, we thought the analogy was going to be with gradient hacking, that there would possibly be some analogy. But I think that the recombination thing is really the key in meiotic drive. And then there's really nothing like that in. AARON: There's no selection per se. I don't know, maybe that doesn't make a whole lot of sense. HOLLY: Well, I mean, in gradient descent, there's no. AARON: Gene analog, right? HOLLY: There's no gene analog. Yeah, but there is, like... I mean, it's a hill-climbing algorithm, like natural selection. So this is especially, I think, easy to see if you're familiar with adaptive landscapes, which look very similar to... I mean, if you look at a schematic or like a model of an illustration of gradient descent, it looks very similar to adaptive landscapes. They're both, like, n-dimensional spaces, and you're looking at vectors at any given point. So the adaptive landscape concept that's usually taught for evolution is, like, on one axis you have... well, you can have a lot of things... and then you have the fitness of the population on the other axis. And what it tells you is the shape of the curve there tells you which direction evolution is going to push, or natural selection is going to push, each generation. And so with gradient descent, there's, like, finding the gradient to get to the lowest value of the cost function, to get to a local minimum at every step. And you follow that. And so that part is very similar to natural selection, but the meiosis hacking just has a different mechanism than gradient hacking would. Gradient hacking probably has to be more about... I kind of thought that there was a way for this to work. If fine-tuning creates a different compartment that doesn't... there's not full backpropagation, so there's like kind of two different compartments in the layers or something. But I don't know if that's right. My collaborator doesn't seem to think that that's very interesting. I don't know if they... AARON: I don't even know what backprop is. That's like a term I've heard like a billion times. HOLLY: It's updating all the weights in all the layers based on that iteration. AARON: All right. I mean, I can hear those words. I'll have to look it up later. HOLLY: You don't have to do full... I think there are probably things I'm not understanding about the ML process very well, but I had thought that it was something like... yeah, sorry, it's probably too tenuous. But anyway, yeah, I've been working on this a little bit for the last year, but I'm not super sharp on my arguments about that. AARON: Well, I wouldn't notice. You can kind of say whatever, and I'll nod along. HOLLY: I've got to guard my reputation, I can't just go off the cuff anymore. AARON: We'll edit it so you're correct no matter what. HOLLY: Have you ever edited the oohs and ums out of a podcast and just been like, wow, I sound so smart?
Like, even after you heard yourself the first time, you do the editing yourself, but then you listen to it and you're like, who is this person? Looks so smart. AARON: I haven't, but actually, the 80,000 Hours After Hours podcast, the first episode of theirs, I interviewed Rob and his producer Keiran Harris, and they have actual professional sound editing. And so, yeah, I went from totally incoherent... not totally incoherent, but sarcastically totally incoherent, to sounding like a normal person, because of that. HOLLY: I used to use it to take my laughter out of... I did a podcast when I was an organizer at Harvard. Like, I did the Harvard Effective Altruism podcast, and I laughed a lot more then than I do now, which is kind of like... and we even got comments about it. We got very few comments, but they were like, the girl host laughs too much. But when I take my laughter out, I would do it myself, and I was like, wow, this does sound suddenly, like, so much more serious. AARON: Yeah, I don't know. Yeah, I definitely say "like" and... too much. So maybe I will try to actually. HOLLY: Realistically, that sounds like so much effort, it's not really worth it. And nobody else really notices. But I go through periods where I say "like" a lot, and when I hear myself back in interviews, that really bugs me. AARON: Yeah. HOLLY: God, it sounds so stupid. AARON: No. Well, I'm definitely worse. Yeah. I'm sure there'll be a way to automate this. Well, not sure, but probably not too distant. HOLLY: Future people were sending around, like, transcripts of Trump to underscore how incoherent he is. I'm like, I sound like that sometimes. AARON: Oh, yeah, same. I didn't actually realize that this is especially bad. When I get this transcribed, I don't know how people... this is a good example. Like the last 10 seconds, if I get it transcribed, it'll make no sense whatsoever. But there's like a free service called AssemblyAI Playground where it does free diarised transcription, and that makes sense. But if we just get this transcribed without identifying who's speaking, it'll be even worse than that. Yeah, actually this is like a totally random thought, but I actually spent not zero amount of effort trying to figure out how to combine the highest quality transcription, like Whisper, with the slightly less good diarised transcriptions. You could get the speaker... you could infer who's speaking based on the lower quality one, but then replace incorrect words with correct words. And I never... I don't know. HOLLY: I'm sure somebody... that'd be nice. I would do transcripts if it were that easy, but I just never have. But it is annoying because I do like to give people the chance to veto certain segments, and that can get tough, because even if I talk... AARON: You have podcasts that I don't know about? HOLLY: Well, I used to have the Harvard one, which is called The Turing Test. And then, yeah, I do have... AARON: I probably listened to that and didn't know it was you. HOLLY: Okay, maybe Alish was the other host. AARON: I mean, it's been a little while since... yeah. HOLLY: And then on my... I, like, publish audio stuff sometimes, but it's called Low Effort, to underscore. AARON: Oh, yeah, I didn't actually... Okay. Great minds think alike. Low effort podcasts are the future. In fact, this is super intelligent. HOLLY: I just have them as a way to catch up with friends and stuff and talk about their lives in a way that might... recorded conversations are just better.
You're more on and you get to talk about stuff that's interesting but feels too like, well, you already know this if you're not recording it.AARONOkay, well, I feel like there's a lot of people that I interact with casually that I don't actually they have these rich online profiles and somehow I don't know about it or something. I mean, I could know about it, but I just never clicked their substack link for some reason. So I will be listening to your casual.HOLLYActually, in the 15 minutes you gave us when we pushed back the podcast, I found something like a practice talk I had given and put it on it. So that's audio that I just cool. But that's for paid subscribers. I like to give them a little something.AARONNo, I saw that. I did two minutes of research or whatever. Cool.HOLLYYeah. It's a little weird. I've always had that blog as very low effort, just whenever I feel like it. And that's why it's lasted so long. But I did start doing paid and I do feel like more responsibility to the paid subscribers now.AARONYeah. Kind of the reason that I started this is because whenever I feel so much I don't know, it's very hard for me to write a low effort blog post. Even the lowest effort one still takes at the end of the day, it's like several hours. Oh, I'm going to bang it out in half an hour and no matter what, my brain doesn't let me do that.HOLLYThat usually takes 4 hours. Yeah, I have like a four hour and an eight hour.AARONWow. I feel like some people apparently Scott Alexander said that. Oh, yeah. He just writes as fast as he talks and he just clicks send or whatever. It's like, oh, if I could do.HOLLYThat, I would have written in those paragraphs. It's crazy. Yeah, you see that when you see him in person. I've never met him, I've never talked to him, but I've been to meetups where he was and I'm at this conference or not there right now this week that he's supposed to be at.AARONOh, manifest.HOLLYYeah.AARONNice. Okay.HOLLYCool Lighthaven. They're now calling. It looks amazing. Rose Garden. And no.AARONI like, vaguely noticed. Think I've been to Berkeley, I think twice. Right? Definitely. This is weird. Definitely once.HOLLYBerkeley is awesome. Yeah.AARONI feel like sort of decided consciously not to try to, or maybe not decided forever, but had a period of time where I was like, oh, I should move there, or we'll move there. But then I was like I think being around other EA's in high and rational high concentration activates my status brain or something. It is very less personally bad. And DC is kind of sus that I was born here and also went to college here and maybe is also a good place to live. But I feel like maybe it's actually just true.HOLLYI think it's true. I mean, I always like the DCAS. I think they're very sane.AARONI think both clusters should be more like the other one a little bit.HOLLYI think so. I love Berkeley and I think I'm really enjoying it because I'm older than you. I think if you have your own personality before coming to Berkeley, that's great, but you can easily get swept. It's like Disneyland for all the people I knew on the internet, there's a physical version of them here and you can just walk it's all in walking distance. That's all pretty cool. Especially during the pandemic. I was not around almost any friends and now I see friends every day and I get to do cool stuff. And the culture is som

The Nonlinear Library
AF - Direction of Fit by Nicholas Kees Dupuis

The Nonlinear Library

Play Episode Listen Later Oct 2, 2023 5:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Direction of Fit, published by Nicholas Kees Dupuis on October 2, 2023 on The AI Alignment Forum. This concept has recently become a core part of my toolkit for thinking about the world, and I find it helps explain a lot of things that previously felt confusing to me. Here I explain how I understand "direction of fit," and give some examples of where I find the concept can be useful. Handshake Robot A friend recently returned from an artificial life conference and told me about a robot which was designed to perform a handshake. It was given a prior about handshakes, or how it expected a handshake to be. When it shook a person's hand, it then updated this prior, and the degree to which the robot would update its prior was determined by a single parameter. If the parameter was set low, the robot would refuse to update, and the handshake would be firm and forceful. If the parameter was set high, the robot would completely update, and the handshake would be passive and weak. This parameter determines the direction of fit: whether the object in its mind will adapt to match the world, or whether the robot will adapt the world to match the object in its mind. This concept is often used in philosophy of mind to distinguish between a belief, which has a mind-to-world direction of fit, and a desire, which has a world-to-mind direction of fit. In this frame, beliefs and desires are both of a similar type: they both describe ways the world could be. The practical differences only emerge through how they end up interacting with the outside world. Many objects seem not to be perfectly separable into one of these two categories, and rather appear to exist somewhere on the spectrum. For example: An instrumental goal can simultaneously be a belief about the world (that achieving the goal will help fulfill some desire) as well as behaving like a desired state of the world in its own right. Strongly held beliefs (e.g. religious beliefs) are on the surface ideas which are fit to the world, but in practice behave much more like desires, as people make the world around them fit their beliefs. You can change your mind about what you desire. For example you may dislike something at first, but after repeated exposure you may come to feel neutral about it, or even actively like it (e.g. the taste of certain foods). Furthermore, the direction of fit might be context dependent (e.g. political beliefs), beliefs could be self fulfilling (e.g. believing that a presentation will go well could make it go well), and many beliefs or desires could refer to other beliefs or desires (wanting to believe, believing that you want, etc.). Idealized Rational Agents The concept of a rational agent, in this frame, is a system which cleanly distinguishes between these two directions of fit, between objects which describe how the world actually is, and objects which prescribe how the world "should" be. This particular concept of a rational agent can itself have a varying direction of fit. You might describe a system as a rational agent to help your expectations match your observations, but the idea might also prescribe that you should develop this clean split between belief and value. When talking about AI systems, we might be interested in the behavior of systems where this distinction is especially clear. 
We might observe that many current AI systems are not well described in this way, or we could speculate about pressures which might lead them toward this kind of split. Note that this is very different from talking about VNM-rationality, which starts by assuming this clean split, and instead demonstrates why we might expect the different parts of the value model to become coherent and avoid getting in each other's way. The direction-of-fit frame highlights a separate (but equally important) question of whether...
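To make the handshake robot concrete, here is a minimal sketch under stated assumptions: the class, the 0-to-1 grip scale, and the single blending update are illustrative inventions of mine, not code from the post or the episode.

class HandshakeRobot:
    def __init__(self, expected_grip: float, fit_parameter: float):
        self.expected_grip = expected_grip    # prior: how firm a handshake "should" be, on a 0-to-1 scale
        self.fit_parameter = fit_parameter    # 0.0 = world-to-mind (desire-like), 1.0 = mind-to-world (belief-like)

    def shake(self, observed_grip: float) -> float:
        # Mind-to-world: move the internal target toward the observation, scaled by the parameter.
        self.expected_grip += self.fit_parameter * (observed_grip - self.expected_grip)
        # World-to-mind: grip at the (possibly unchanged) target, pushing the handshake toward it.
        return self.expected_grip

firm = HandshakeRobot(expected_grip=0.8, fit_parameter=0.05)     # barely updates: firm, forceful
passive = HandshakeRobot(expected_grip=0.8, fit_parameter=0.95)  # fully updates: passive, weak
print(firm.shake(observed_grip=0.2), passive.shake(observed_grip=0.2))

With the parameter near zero the stored handshake behaves like a desire; near one it behaves like a belief.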

VOV - Kinh tế Tài chính
Before the market opens - Boosting investment and upgrading technology to attract FDI

VOV - Kinh tế Tài chính

Play Episode Listen Later Jun 26, 2023 5:22


- Boosting investment and upgrading technology to attract FDI. - With the whole textile and garment industry facing many difficulties, the Vietnam National Textile and Garment Group (Vinatex) plans to spend VND 300 billion on 2022 dividends. - VNM shares set a trading record as the VN-Index nears the 1,130-point mark. Topic: boosting investment, upgrading technology, attracting FDI --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1kd/support

The Nonlinear Library
LW - Crystal Healing — or the Origins of Expected Utility Maximizers by Alexander Gietelink Oldenziel

The Nonlinear Library

Play Episode Listen Later Jun 25, 2023 10:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Crystal Healing — or the Origins of Expected Utility Maximizers, published by Alexander Gietelink Oldenziel on June 25, 2023 on LessWrong. (Note: John discusses similar ideas here. We drafted this before he published his post, so some of the concepts might jar if you read it with his framing in mind. ) Traditionally, focus in Agent Foundations has been around the characterization of ideal agents, often in terms of coherence theorems that state that, under certain conditions capturing rational decision-making, an agent satisfying these conditions must behave as if it maximizes a utility function. In this post, we are not so much interested in characterizing ideal agents — at least not directly. Rather, we are interested in how true agents and not-so true agents may be classified and taxonomized, how agents and pseudo-agents may be hierarchically aggregated and composed out of subagents and how agents with different preferences may be formed and selected for in different training regimes. First and foremost, we are concerned with how unified goal-directed agents form from a not-quite-agentic substratum. In other words, we are interested in Selection Theorems rather than Coherence Theorems. (We take the point of view that a significant part of the content of the coherence theorems is not so much in the theorems or rationality conditions themselves but in the money-pump arguments that are used to defend the conditions.) This post concerns how expected utility maximizers may form from entities with incomplete preferences. Incomplete preferences on the road to maximizing utility The classical model of a rational agent assumes it has vNM-preferences. We assume, in particular, that the agent is complete — i.e., that for any options x,y, we have x≥y or y≥x. However, in real-life agents, we often see incompleteness — i.e., a preference for default states, or maybe a path-dependent preference, or maybe a sense that a preference between two options is yet to be defined; we will leave the precise meaning of incompleteness somewhat open for the purposes of this post. The aim of this post is to understand the selection pressures that push agents towards completeness. Here are the main reasons we consider this important: Completeness is a prerequisite for applying the money-pump arguments that justify other coherence axioms. It thereby underlies the conception of agents as (rational) goal-directed optimizers. Understanding the pressures towards completeness from a plausibly incomplete state of nature can help us understand what kinds of intelligent optimization processes are likely to arise, and in which kinds of conditions / training regimes. In particular, since something like completeness seems to be an important facet of consequentialism, understanding the pressures towards completeness helps us understand the pressures towards consequentialism. Review of "Why Subagents" Let's review John Wentworth's post "Why subagents?" . John observes that inexploitability (= not taking sure losses) is not sufficient to be a vNM expected utility maximizer. An agent with incomplete preferences can be inexploitable without being an expected utility maximizer. Although these agents are inexploitable, they can have path-dependent preferences. If John had grown up in Manchester he'd be a United fan. 
If he had grown up in Liverpool he'd be cracking the skulls of Manchester United fans. We could model this as a dynamical change of preferences ("preference formation"). Alternatively, we can model this as John having incomplete preferences: if he grows up in Manchester he loves United and wouldn't take the offer to switch to Liverpool. If he grows up in Liverpool he loves whatever the team in Liverpool is and doesn't switch to United. In other words, incompleteness is a frame to look at preference for...
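As a toy illustration of the excerpt's point, the sketch below (my own construction, not code from the post) implements an agent with incomplete preferences: it only trades up a strict partial order, so it never accepts a sure loss, yet where it ends up depends on the order in which offers arrive. The option names echo the football example, but the preference pairs are made up.

# Partial preference order: each pair (a, b) means "a is strictly preferred to b".
# "United" and "Liverpool" are deliberately incomparable.
STRICT_PREFERENCES = {("United", "Default"), ("Liverpool", "Default")}

def prefers(a: str, b: str) -> bool:
    return (a, b) in STRICT_PREFERENCES

class IncompleteAgent:
    def __init__(self, initial_option: str):
        self.current = initial_option

    def consider(self, offer: str) -> None:
        # Trade only on a strict preference; on incomparable options, keep the current state.
        if prefers(offer, self.current):
            self.current = offer

# Path dependence: the same offers in a different order leave the agent in different places,
# yet it never accepts a dispreferred trade, so it cannot be money-pumped.
a = IncompleteAgent("Default")
for offer in ["United", "Liverpool"]:
    a.consider(offer)

b = IncompleteAgent("Default")
for offer in ["Liverpool", "United"]:
    b.consider(offer)

print(a.current, b.current)   # United, Liverpool

Completing the preferences, so that every pair of options is comparable, is exactly what would remove this path dependence and push the agent toward behaving like an expected utility maximiser.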


The Nonlinear Library
LW - Towards Measures of Optimisation by mattmacdermott

The Nonlinear Library

Play Episode Listen Later May 13, 2023 6:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Measures of Optimisation, published by mattmacdermott on May 12, 2023 on LessWrong. We would like a mathematical theory which characterises the intuitive notion of ‘optimisation'. Before Shannon introduced his mathematical theory of communication, the concept of ‘information' was vague and informal. Then Shannon devised several standardised measures of it, with useful properties and clear operational interpretations. It turned out the concept of information is universal, in that it can be quantified on a consistent scale (bits) across various contexts. Could something similar be true for ‘optimisation'? In this post we review a few proposed ways to measure optimisation power and play around with them a bit. Our general setup will be that we're choosing actions from a set A to achieve an outcome in a set X. We have some beliefs about how our actions will affect outcomes in the form of a probability distribution p_a∈ΔX for each a∈A. We also have some preferences, either in the form of a preference ordering over outcomes, or a utility function over outcomes. We will also assume there is some default distribution p∈ΔX, which we can interpret either as our beliefs about X if we don't act, or if we take some default action[1]. Yudkowsky. Yudkowsky's proposed definition just makes use of a preference ordering ⪰ over X. To measure the optimisation power of some outcome x, we count up all the outcomes which are at least as good as x, and divide that number by the total number of possible outcomes. It's nice to take a negative log to turn this fraction into a number of bits: OP(x) = −log(|{x′∈X ∣ x′⪰x}| / |X|). If I achieve the second best outcome out of eight, that's −log(2/8) = 2 bits of optimisation power. If the outcome space is infinite, then we can't count the number of outcomes at least as good as the one we got, so we need a measure to integrate over. If we make use of our default probability distribution here, the resulting quantity has a nice interpretation. ∫_{x′⪰x} p(x′)dx′ / ∫ p(x′)dx′ is just ∫_{x′⪰x} p(x′)dx′: the default probability of doing as well as we did. Since we're always assuming we've got a default distribution, we might as well define OP like this even in the finite-domain case. Again we'll take a log to get OP(x) = −log ∫_{x′⪰x} p(x′)dx′. Now 2 bits of optimisation power means the default probability of doing this well was 1/4. So far we've just been thinking about the optimisation power of achieving a specific outcome. We can define the optimisation power of an action as the expected optimisation power under the distribution it induces over outcomes: OP(a) = ∫ p_a(x) OP(x) dx. The above definitions just make use of a preference ordering. If we do have a utility function u: X → R then we'd like our definitions to make use of that too. Intuitively, achieving the second best outcome out of three should constitute more optimisation power in a case where it's almost as good as the first and much better than the third, compared to a case where it's only slightly better than the third and much less good than the first[2]. Analogously to how we previously asked ‘what fraction of the default probability mass is on outcomes at least as good as this one?' we could try to ask ‘what fraction of the default expected utility comes from outcomes at least as good as this one?'. But making use of utility functions in the above definition is tricky.
Recall that utility functions originate from the Von Neumann-Morgenstern theorem, which says that if an agent choosing between probabilistic mixtures of options satisfies some weak rationality criteria then it acts as if it maximises expected utility according to a utility function u: X → R. The utility function produced by the VNM-theorem is only defined up to positive affine transformations, meaning that the utility function u′ = au + b, for any a∈R with a>0 and b∈R, equally...
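To make the definitions above easy to play with, here is a small Python sketch (my own illustration, not from the post or the episode) of Yudkowsky-style optimisation power over a finite outcome space; the outcome labels, utilities and default probabilities are all invented.

import math

# Toy outcome space: a utility score stands in for the preference ordering
# (higher is better), and p_default is the distribution over outcomes under
# the default action. All numbers are invented.
utility   = {"terrible": 0, "bad": 1, "okay": 2, "good": 3, "great": 4}
p_default = {"terrible": 0.3, "bad": 0.3, "okay": 0.2, "good": 0.15, "great": 0.05}

def op_outcome(x):
    # Bits of optimisation power for outcome x: minus log2 of the default
    # probability of doing at least as well as x.
    mass = sum(p for o, p in p_default.items() if utility[o] >= utility[x])
    return -math.log2(mass)

def op_action(p_action):
    # Expected optimisation power under the distribution an action induces.
    return sum(p * op_outcome(x) for x, p in p_action.items() if p > 0)

print(round(op_outcome("good"), 2))                      # 2.32 bits: default P(at least "good") = 0.2
print(round(op_action({"good": 0.5, "great": 0.5}), 2))  # 3.32 bits for an action that induces this lottery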

The Nonlinear Library: LessWrong
LW - Towards Measures of Optimisation by mattmacdermott

The Nonlinear Library: LessWrong

Play Episode Listen Later May 13, 2023 6:45


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Measures of Optimisation, published by mattmacdermott on May 12, 2023 on LessWrong. We would like a mathematical theory which characterises the intuitive notion of ‘optimisation'. Before Shannon introduced his mathematical theory of communication, the concept of ‘information' was vague and informal. Then Shannon devised several standardised measures of it, with useful properties and clear operational interpretations. It turned out the concept of information is universal, in that it can be quantified on a consistent scale (bits) across various contexts. Could something similar be true for ‘optimisation'? In this post we review a few proposed ways to measure optimisation power and play around with them a bit. Our general setup will be that we're choosing actions from a set A to achieve an outcome in a set X. We have some beliefs about how our actions will affect outcomes in the form of a probability distribution pa∈ΔX for each a∈A. We also have some preferences, either in the form of a preference ordering over outcomes, or a utility function over outcomes. We will also assume there is some default distribution p∈ΔX, which we can interpret either as our beliefs about X if we don't act, or if we take some default action[1]. Yudkowsky Yudkowsky's proposed definition just makes use of a preference ordering ⪰ over X. To measure the optimisation power of some outcome x, we count up all the outcomes which are at least as good as x, and divide that number by the total number of possible outcomes. It's nice to take a negative log to turn this fraction into a number of bits: OP(x)=−log|{x′∈X∣x′⪰x}X|. If I achieve the second best outcome out of eight, that's −log28=2 bits of optimisation power. If the outcome space is infinite, then we can't count the number of outcomes at least as good as the one we got, so we need a measure to integrate over. If we make use of our default probability distribution here, the resulting quantity has a nice interpretation. ∫x′⪰xp(x′)dx′∫p(x′)dx′ is just ∫x′⪰xp(x′)dx′: the default probability of doing as well as we did. Since we're always assuming we've got a default distribution, we might as well define OP like this even in the finite-domain case. Again we'll take a log to get OP(x)=−log∫x′⪰xp(x′)dx′. Now 2 bits of optimisation power means the default probability of doing this well was 14. So far we've just been thinking about the optimisation power of achieving a specific outcome. We can define the optimisation power of an action as the expected optimisation power under the distribution it induces over outcomes: OP(a)=∫pa(x)OP(x)dx. The above definitions just make use of a preference ordering. If we do have a utility function u:XR then we'd like our definitions to make use of that too. Intuitively, achieving the second best outcome out of three should constitute more optimisation power in a case where it's almost as good as the first and much better than the third, compared to a case where it's only slightly better than the third and much less good than the first[2]. Analogously to how we previously asked ‘what fraction of the default probability mass is on outcomes at least as good as this one?' we could try to ask ‘what fraction of the default expected utility comes from outcomes at least as good as this one?'. But making use of utility functions in the above definition is tricky. 
Recall that utility functions originate from the Von Neumann-Morgenstern theorem, which says that if an agent choosing between probabilistic mixtures of options satisfies some weak rationality criteria then it acts as if it maximises expected utility according to a utility function u:XR. The utility function produced by the VNM-theorem is only defined up to positive affine transformations, meaning that the utility function u′=au+b, for any a∈R>0 and b∈R, equally...

The Nonlinear Library
AF - Towards Measures of Optimisation by Matt MacDermott

The Nonlinear Library

Play Episode Listen Later May 12, 2023 6:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Measures of Optimisation, published by Matt MacDermott on May 12, 2023 on The AI Alignment Forum. We would like a mathematical theory which characterises the intuitive notion of ‘optimisation'. Before Shannon introduced his mathematical theory of communication, the concept of ‘information' was vague and informal. Then Shannon devised several standardised measures of it, with useful properties and clear operational interpretations. It turned out the concept of information is universal, in that it can be quantified on a consistent scale (bits) across various contexts. Could something similar be true for ‘optimisation'? In this post we review a few proposed ways to measure optimisation power and play around with them a bit. Our general setup will be that we're choosing actions from a set A to achieve an outcome in a set X. We have some beliefs about how our actions will affect outcomes in the form of a probability distribution pa∈ΔX for each a∈A. We also have some preferences, either in the form of a preference ordering over outcomes, or a utility function over outcomes. We will also assume there is some default distribution p∈ΔX, which we can either interpret as our beliefs about X if we don't act, or if we take some default action[1]. Yudkowsky Yudkowsky's proposed definition just makes use of a preference ordering ⪰ over X. To measure the optimisation power of some outcome x, we count up all the outcomes which are at least as good as x, and divide that number by the total number of possible outcomes. It's nice to take a negative log to turn this fraction into a number of bits: OP(x)=−log|{x′∈X∣x′⪰x}X|. If I achieve the second best outcome out of eight, that's −log28=2 bits of optimisation power. If the outcome space is infinite, then we can't count the number of outcomes at least as good as the one we got, so we need a measure to integrate over. If we make use of our default probability distribution here, the resulting quantity has a nice interpretation. ∫x′⪰xp(x′)dx′∫p(x′)dx′ is just ∫x′⪰xp(x′)dx′: the default probability of doing as well as we did. Since we're always assuming we've got a default distribution, we might as well define OP like this even in the finite-domain case. Again we'll take a log to get OP(x)=−log∫x′⪰xp(x′)dx′. Now 2 bits of optimisation power means the default probability of doing this well was 14. So far we've just been thinking about the optimisation power of achieving a specific outcome. We can define the optimisation power of an action as the expected optimisation power under the distribution it induces over outcomes: OP(a)=∫pa(x)OP(x)dx. The above definitions just make use of a preference ordering. If we do have a utility function u:XR then we'd like our definitions to make use of that too. Intuitively, achieving the second best outcome out of three should constitute more optimisation power in a case where it's almost as good as the first and much better than the third, compared to a case where it's only slightly better than the third and much less good than the first[2]. Analogously to how we previously asked ‘what fraction of the default probability mass is on outcomes at least as good as this one?' we could try to ask ‘what fraction of the default expected utility comes from outcomes at least as good as this one?'. But making use of utility functions in the above definition is tricky. 
Recall that utility functions originate from the Von Neumann-Morgenstern theorem, which says that if an agent choosing between probabilistic mixtures of options satisfies some weak rationality criteria then it acts as if it maximises expected utility according to a utility function u:XR. The utility function produced by the VNM-theorem is only defined up to positive affine transformations, meaning that the utility function u′=au+b, for any a∈R>0 an...

The Nonlinear Library
LW - Concave Utility Question by Scott Garrabrant

The Nonlinear Library

Play Episode Listen Later Apr 15, 2023 4:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concave Utility Question, published by Scott Garrabrant on April 15, 2023 on LessWrong. This post will just be a concrete math question. I am interested in this question because I have recently come to reject the independence axiom of VNM, and am thus playing with some weaker versions. Let Ω be a finite set of deterministic outcomes. Let L be the space of all lotteries over these outcomes, and let ⪰ be a relation on L. We write A∼B if A ⪰ B and B ⪰ A. We write A≻B if A⪰B but not A∼B. Here are some axioms we can assume about ⪰: A1. For all A,B∈L, either A⪰B or B⪰A (or both). A2. For all A,B,C∈L, if A⪰B, and B⪰C, then A⪰C. A3. For all A,B,C∈L, if A⪰B, and B⪰C, then there exists a p∈[0,1] such that B∼pA+(1−p)C. A4. For all A,B∈L, and p∈[0,1], if A⪰B, then pA+(1−p)B⪰B. A5. For all A,B∈L, and p∈[0,1], if p>0 and B⪰pA+(1−p)B, then B⪰A. Here is one bonus axiom: B1. For all A,B,C∈L, and p∈[0,1], A⪰B if and only if pA+(1−p)C⪰pB+(1−p)C. (Note that B1 is stronger than both A4 and A5) Finally, here are some conclusions of successively increasing strength: C1. There exists a function u: L → [0,1] such that A⪰B if and only if u(A)≥u(B). C2. Further, we require u is quasi-concave. C3. Further, we require u is continuous. C4. Further, we require u is concave. C5. Further, we require u is linear. The standard VNM utility theorem can be thought of as saying A1, A2, A3, and B1 together imply C5. Here is the main question I am curious about: Q1: Do A1, A2, A3, A4, and A5 together imply C4? [ANSWER: NO] (If no, how can we salvage C4, by adding or changing some axioms?) Here are some sub-questions that would constitute significant partial progress, and that I think are interesting in their own right: Q2: Do A1, A2, A3, and A4 together imply C3? [ANSWER: NO] Q3: Do C3 and A5 together imply C4? [ANSWER: NO] (Feel free to give answers that are only partial progress, and use this space to think out loud or discuss anything else related to weaker versions of VNM.) EDIT: AlexMennen actually resolved the question in the negative as stated, but my curiosity is not resolved, since his argument is violating continuity, and I really care about concavity. My updated main question is now: Q4: Do A1, A2, A3, A4, and A5 together imply that there exists a concave function u: L → [0,1] such that A⪰B if and only if u(A)≥u(B)? [ANSWER: NO] (i.e. We do not require u to be continuous.) This modification also implies interest in the subquestion: Q5: Do A1, A2, A3, and A4 together imply C2? EDIT 2: Here is another bonus axiom: B2. For all A,B∈L, if A≻B, then there exists some C∈L such that A≻C≻B. (Really, we don't need to assume C is already in L. We just need it to be possible to add a C, and extend our preferences in a way that satisfies the other axioms, and A3 will imply that such a lottery was already in L. We might want to replace this with a cleaner axiom later.) Q6: Do A1, A2, A3, A5, and B2 together imply C4? [ANSWER: NO] EDIT 3: We now have negative answers to everything other than Q5, which I still think is pretty interesting. We could also weaken Q5 to include other axioms, like A5 and B2. Weakening the conclusion doesn't help, since it is easy to get C2 from C1 and A4. I would still really like some axioms that get us all the way to a concave function, but I doubt there will be any simple ones.
Concavity feels like it really needs more structure that does not translate well to a preference relation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
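As a small numerical aside on why conclusion C2 (quasi-concavity) is strictly weaker than C4 (concavity), here is a Python sketch of my own, not from the post, using the one-dimensional toy function u(x) = x² on [0,1]: it is monotone, hence quasi-concave, but it violates the concavity inequality.

import random

u = lambda x: x * x   # monotone on [0,1], hence quasi-concave, but convex rather than concave

random.seed(0)
quasi_concave_ok, concave_ok = True, True
for _ in range(10_000):
    a, b, t = random.random(), random.random(), random.random()
    mix = t * a + (1 - t) * b
    if u(mix) < min(u(a), u(b)) - 1e-12:            # quasi-concavity: mixture at least as good as the worse endpoint
        quasi_concave_ok = False
    if u(mix) < t * u(a) + (1 - t) * u(b) - 1e-12:  # concavity: mixture at least as good as the chord
        concave_ok = False

print(quasi_concave_ok)  # True
print(concave_ok)        # False: the chord lies strictly above u somewhere, so u is not concave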

The Nonlinear Library
AF - Concave Utility Question by Scott Garrabrant

The Nonlinear Library

Play Episode Listen Later Apr 15, 2023 4:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concave Utility Question, published by Scott Garrabrant on April 15, 2023 on The AI Alignment Forum. This post will just be a concrete math question. I am interested in this question because I have recently come tor reject the independence axiom of VNM, and am thus playing with some weaker versions. Let Ω be a finite set of deterministic outcomes. Let L be the space of all lotteries over these outcomes, and let ⪰ be a relation on L. We write A∼B if A ⪰ B and B ⪰ A. We write A≻B if A⪰B but not A∼B. Here are some axioms we can assume about ⪰: A1. For all A,B∈L, either A⪰B or B⪰A (or both). A2. For all A,B,C∈L, if A⪰B, and B⪰C, then A⪰C. A3. For all A,B,C∈L, if A⪰B, and B⪰C, then there exists a p∈[0,1] such that B∼pA+(1−p)C. A4. For all A,B∈L, and p∈[0,1] if A⪰B, then pA+(1−p)B⪰B. A5. For all A,B∈L, and p∈[0,1], if p>0 and B⪰pA+(1−p)B, then B⪰A. Here is one bonus axiom: B1. For all A,B,C∈L, and p∈[0,1], A⪰B if and only if pA+(1−p)C⪰pB+(1−p)C. (Note that B1 is stronger than both A4 and A5) Finally, here are some conclusions of successively increasing strength: C1. There exists a function u:L[0,1] such that A⪰B if and only if u(A)≥u(B). C2. Further, we require u is quasi-concave. C3. Further, we require u is continuous. C4. Further, we require u is concave. C5. Further, we require u is linear. The standard VNM utility theorem can be thought of as saying A1, A2, A3, and B1 together imply C5. Here is the main question I am curious about: Q1: Do A1, A2, A3, A4, and A5 together imply C4? [ANSWER: NO] (If no, how can we salvage C4, by adding or changing some axioms?) Here are some sub-questions that would constitute significant partial progress, and that I think are interesting in their own right: Q2: Do A1, A2, A3, and A4 together imply C3? [ANSWER: NO] Q3: Do C3 and A5 together imply C4? [ANSWER: NO] (Feel free to give answers that are only partial progress, and use this space to think out loud or discuss anything else related to weaker versions of VNM.) EDIT: AlexMennen actually resolved the question in the negative as stated, but my curiosity is not resolved, since his argument is violating continuity, and I really care about concavity. My updated main question is now: Q4: Do A1, A2, A3, A4, and A5 together imply that there exists a concave function u:L[0,1] such that A⪰B if and only if u(A)≥u(B)? [ANSWER: NO] (i.e. We do not require u to be continuous.) This modification also implies interest in the subquestion: Q5: Do A1, A2, A3, and A4 together imply C2? EDIT 2: Here is another bonus axiom: B2. For all A,B∈L, if A≻B, then there exists some C∈L such that A≻C≻B. (Really, we don't need to assume C is already in L. We just need it to be possible to add a C, and extend our preferences in a way that satisfies the other axioms, and A3 will imply that such a lottery was already in L. We might want to replace this with a cleaner axiom later.) Q6: Do A1, A2, A3, A5, and B2 together imply C4? [ANSWER: NO] EDIT 3: We now have negative answers to everything other than Q5, which I still think is pretty interesting. We could also weaken Q5 to include other axioms, like A5 and B2. Weakening the conclusion doesn't help, since it is easy to get C2 from C1 and A4. I would still really like some axioms that get us all the way to a concave function, but I doubt there will be any simple ones. 
Concavity feels like it really needs more structure that does not translate well to a preference relation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Concave Utility Question by Scott Garrabrant

The Nonlinear Library: LessWrong

Play Episode Listen Later Apr 15, 2023 4:32


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concave Utility Question, published by Scott Garrabrant on April 15, 2023 on LessWrong. This post will just be a concrete math question. I am interested in this question because I have recently come tor reject the independence axiom of VNM, and am thus playing with some weaker versions. Let Ω be a finite set of deterministic outcomes. Let L be the space of all lotteries over these outcomes, and let ⪰ be a relation on L. We write A∼B if A ⪰ B and B ⪰ A. We write A≻B if A⪰B but not A∼B. Here are some axioms we can assume about ⪰: A1. For all A,B∈L, either A⪰B or B⪰A (or both). A2. For all A,B,C∈L, if A⪰B, and B⪰C, then A⪰C. A3. For all A,B,C∈L, if A⪰B, and B⪰C, then there exists a p∈[0,1] such that B∼pA+(1−p)C. A4. For all A,B∈L, and p∈[0,1] if A⪰B, then pA+(1−p)B⪰B. A5. For all A,B∈L, and p∈[0,1], if p>0 and B⪰pA+(1−p)B, then B⪰A. Here is one bonus axiom: B1. For all A,B,C∈L, and p∈[0,1], A⪰B if and only if pA+(1−p)C⪰pB+(1−p)C. (Note that B1 is stronger than both A4 and A5) Finally, here are some conclusions of successively increasing strength: C1. There exists a function u:L[0,1] such that A⪰B if and only if u(A)≥u(B). C2. Further, we require u is quasi-concave. C3. Further, we require u is continuous. C4. Further, we require u is concave. C5. Further, we require u is linear. The standard VNM utility theorem can be thought of as saying A1, A2, A3, and B1 together imply C5. Here is the main question I am curious about: Q1: Do A1, A2, A3, A4, and A5 together imply C4? [ANSWER: NO] (If no, how can we salvage C4, by adding or changing some axioms?) Here are some sub-questions that would constitute significant partial progress, and that I think are interesting in their own right: Q2: Do A1, A2, A3, and A4 together imply C3? [ANSWER: NO] Q3: Do C3 and A5 together imply C4? [ANSWER: NO] (Feel free to give answers that are only partial progress, and use this space to think out loud or discuss anything else related to weaker versions of VNM.) EDIT: AlexMennen actually resolved the question in the negative as stated, but my curiosity is not resolved, since his argument is violating continuity, and I really care about concavity. My updated main question is now: Q4: Do A1, A2, A3, A4, and A5 together imply that there exists a concave function u:L[0,1] such that A⪰B if and only if u(A)≥u(B)? [ANSWER: NO] (i.e. We do not require u to be continuous.) This modification also implies interest in the subquestion: Q5: Do A1, A2, A3, and A4 together imply C2? EDIT 2: Here is another bonus axiom: B2. For all A,B∈L, if A≻B, then there exists some C∈L such that A≻C≻B. (Really, we don't need to assume C is already in L. We just need it to be possible to add a C, and extend our preferences in a way that satisfies the other axioms, and A3 will imply that such a lottery was already in L. We might want to replace this with a cleaner axiom later.) Q6: Do A1, A2, A3, A5, and B2 together imply C4? [ANSWER: NO] EDIT 3: We now have negative answers to everything other than Q5, which I still think is pretty interesting. We could also weaken Q5 to include other axioms, like A5 and B2. Weakening the conclusion doesn't help, since it is easy to get C2 from C1 and A4. I would still really like some axioms that get us all the way to a concave function, but I doubt there will be any simple ones. 
Concavity feels like it really needs more structure that does not translate well to a preference relation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

WINI Rozmawia
WINI x VNM - rozmowa | Prawdziwa rozmowa z muzykiem o muzyce. Tak jak to powinno być.

WINI Rozmawia

Play Episode Listen Later Apr 13, 2023 62:46


"Wini Rozmawia" z VNM-em, jednym z najlepszych technicznie raperów polskiej sceny rapowej. W trakcie ponad godzinnej rozmowy poruszyli wiele tematów, ale motyw przewodni mógł być tylko jedne - muzyka. Jak podsumował Wini, wyszła im prawdziwa, chaotyczna rozmowa z muzykiem, o muzyce. WSPIERAJ "WINI ROZMAWIA": ⁠⁠www.patronite.pl/winiwizja⁠⁠ KONTAKT - ⁠⁠blazej@winiego.com⁠⁠

The Nonlinear Library
AF - Counting-down vs. counting-up coherence by Tsvi Benson-Tilsen

The Nonlinear Library

Play Episode Listen Later Feb 27, 2023 21:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Counting-down vs. counting-up coherence, published by Tsvi Benson-Tilsen on February 27, 2023 on The AI Alignment Forum. [Metadata: crossposted from. First completed 25 October 2022.] Counting-down coherence is the coherence of a mind viewed as the absence of deviation downward in capability from ideal, perfectly efficient agency: the utility left on the table, the waste, the exploitability. Counting-up coherence is the coherence of a mind viewed as the deviation upward in capability from a rock: the elements of the mind, and how they combine to perform tasks. What determines the effects of a mind? Supranormally capable minds can have large effects. To control those effects, we'd have to understand what determines the effects of a mind. Pre-theoretically, we have the idea of "values", "aims", "wants". The more capable a mind is, the more it's that case that what the mind wants, is what will happen in the world; so the mind's wants, its values, determine the mind's effect on the world. A more precise way of describing the situation is: "Coherent decisions imply consistent utilities". A mind like that is incorrigible: if it knows it will eventually be more competent than any other mind at pushing the world towards high-utility possibilities, then it does not defer to any other mind. So to understand how a mind can be corrigible, some assumptions about minds and their values may have to be loosened. The question remains, what are values? That is, what determines the effects that a mind has on the world, besides what the mind is capable of doing or understanding? This essay does not address this question, but instead describes two complementary standpoints from which to view the behavior of a mind insofar as it has effects. Counting-down coherence Counting-down coherence is the coherence of a mind viewed as the absence of deviation downward in capability from ideal, perfectly efficient agency: the utility left on the table, the waste, the exploitability. Counting-down coherence could also be called anti-waste coherence, since it has a flavor of avoiding visible waste, or universal coherence, since it has a flavor of tracking how much a mind everywhere conforms to certain patterns of behavior. Some overlapping ways of describing counting-down incoherence: Exploitable, Dutch bookable, pumpable for resources. That is, someone could make a set of trades with the mind that leaves the mind worse off, and could do so repeatedly to pump the mind for resources. See Garrabrant induction. VNM violating. Choosing between different outcomes, or different probabilities of different outcomes, in a way that doesn't satisfy the Von Neumann–Morgenstern axioms, leaves a mind open to being exploited by Dutch books. See related LessWrong posts. Doesn't maximize expected utility. A mind that satisfies the VNM axioms behaves as though it maximizes the expected value of a fixed utility function over atomic (not probabilistic) outcomes. So deviating from that policy exposes a mind to Dutch books. Missed opportunities. Leaving possible gains on the table; failing to pick up a $20 bill lying on the sidewalk. Opposing pushes. Working at cross-purposes to oneself; starting to do X one day, and then undoing X the next day; pushing and pulling on the door handle at the same time. Internal conflict. 
At war with oneself; having elements of oneself that try to harm each other or interfere with each other's functioning. Inconsistent beliefs, non-Bayesian beliefs. Sometimes acting as though X and sometimes acting as though not-X, where X is something that is either true or false. Or some more complicated inconsistency, or more generally failing to act as though one has a Bayesian belief state and belief revisions. Any of these also open one up to being Dutch booked. Inefficient allocation. Choosing to inve...
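To make the "pumpable for resources" item concrete, here is a toy Python sketch (my own illustration, not from the post) of an agent with the cyclic preference A ≻ B ≻ C ≻ A that pays a small fee for every swap to something it prefers; walking it around the cycle drains its money while leaving its holdings unchanged.

# An agent with cyclic preferences: it prefers A to B, B to C, and C to A,
# and will pay a small fee for any swap to something it prefers.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (preferred, over)

held, money, fee = "A", 10.0, 1.0
for offered in ["C", "B", "A", "C", "B", "A"]:   # walk the preference cycle twice
    if (offered, held) in prefers:
        held, money = offered, money - fee

print(held, money)   # A 4.0: same holding as at the start, six fees poorer, i.e. a money pump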

The Dive - A League of Legends Esports Podcast
FREE RP GIVEAWAY, NO SCAM, REAL | The Dive

The Dive - A League of Legends Esports Podcast

Play Episode Listen Later Feb 7, 2023 66:03


Greetings, Dive fans! We're cruising right along this Spring Split and have another eventful episode for you. The gang breaks down which teams are hot and which are not, discusses how YOU could win 2k RP, and interviews CLG General Manager Jonathon McDaniel about the reality of scrims and scheduling in the LCS. Also, The Dive might be haunted… Keep submitting those Twitter questions, and keep taking those hot takes! You miss 100% of the hot takes you don't take. More importantly, make sure those mics are hot when doing Anchor.fm questions. The Spring Split continues this Thursday at 2pm PT/5pm ET - see you there! P.S., the RP GIVEAWAY IS NOT A SCAM!! Conditions for the Giveaway: all qualifying social posts must have #TheDiveLol, and not all regions qualify. RP is not available for those on the RU, VNM, CHINA, or KR regions. Winners chosen by Azael, Mark, & Kobe. --- Send in a voice message: https://anchor.fm/the-dive-esports-podcast/message

The Nonlinear Library
LW - Consequentialists: One-Way Pattern Traps by David Udell

The Nonlinear Library

Play Episode Listen Later Jan 17, 2023 24:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consequentialists: One-Way Pattern Traps, published by David Udell on January 16, 2023 on LessWrong. Generated during MATS 2.1. A distillation of my understanding of Eliezer-consequentialism. Thanks to Jeremy Gillen, Ben Goodman, Paul Colognese, Daniel Kokotajlo, Scott Viteri, Peter Barnett, Garrett Baker, and Olivia Jimenez for discussion and/or feedback; to Eliezer Yudkowsky for briefly chatting about relevant bits in planecrash; to Quintin Pope for causally significant conversation; and to many others that I've bounced my thoughts on this topic off of. Introduction What is Eliezer-consequentialism? In a nutshell, I think it's the way that some physical structures monotonically accumulate patterns in the world. Some of these patterns afford influence over other patterns, and some physical structures monotonically accumulate patterns-that-matter in particular -- resources. We call such a resource accumulator a consequentialist -- or, equivalently, an "agent," an "intelligence," etc. A consequentialist understood in this way is (1) a coherent profile of reflexes (a set of behavioral reflexes that together monotonically take in resources) plus (2) an inventory (some place where accumulated resources can be stored with better than background-chance reliability.) Note that an Eliezer-consequentialist is not necessarily a consequentialist in the normative ethics sense of the term. By consequentialists we'll just mean agents, including wholly amoral agents. I'll freely use the terms 'consequentialism' and 'consequentialist' henceforth with this meaning, without fretting any more about this confusion. Path to Impact I noticed hanging around the MATS London office that even full-time alignment researchers disagree quite a bit about what consequentialism involves. I'm betting here that my Eliezer-model is good enough that I've understood his ideas on the topic better than many others have, and can concisely communicate this better understanding. Since most of the possible positive impact of this effort lives in the fat tail of outcomes where it makes a lot of Eliezerisms click for a lot of alignment workers, I'll make this an effortpost. The Ideas to be Clarified I've noticed that Eliezer seems to think the von Neumann-Morgenstern (VNM) theorem is obviously far reaching in a way that few others do. Understand the concept of VNM rationality, which I recommend learning from the Wikipedia article... Von Neumann and Morgenstern showed that any agent obeying a few simple consistency axioms acts with preferences characterizable by a utility function. MIRI Research Guide (2015) Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...? No, it's like when you don't, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple. I have still not figured out the homework exercises to convey to somebody the Word of Power which is "coherence" by which they will be able to look at the water, and see "coherence" in places like a cat walking across the room without tripping over itself. 
When you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is... still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet? And one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identifie...
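The apples-and-oranges passage quoted above is a money pump in miniature; the following Python sketch (my own gloss, with invented starting holdings) simply tallies the three trades to show the agent ends up strictly poorer while holding exactly what it started with.

# Tally of the quoted trades, with invented starting holdings. The agent's own
# willingness to make each trade implies widget >= 5 apples, 2 oranges >= widget,
# and 1 apple >= 1 orange, which are jointly inconsistent, so the loop loses apples.
apples, oranges, widgets = 10, 0, 0

apples -= 5; widgets += 1      # Monday: pay five apples for the widget
oranges += 2; widgets -= 1     # Tuesday: sell the widget for two oranges
apples += 2; oranges -= 2      # later: trade each orange back for an apple

print(apples, oranges, widgets)   # 7 0 0: same goods as at the start, three apples poorer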

The Nonlinear Library: LessWrong
LW - Consequentialists: One-Way Pattern Traps by David Udell

The Nonlinear Library: LessWrong

Play Episode Listen Later Jan 17, 2023 24:23


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consequentialists: One-Way Pattern Traps, published by David Udell on January 16, 2023 on LessWrong. Generated during MATS 2.1. A distillation of my understanding of Eliezer-consequentialism. Thanks to Jeremy Gillen, Ben Goodman, Paul Colognese, Daniel Kokotajlo, Scott Viteri, Peter Barnett, Garrett Baker, and Olivia Jimenez for discussion and/or feedback; to Eliezer Yudkowsky for briefly chatting about relevant bits in planecrash; to Quintin Pope for causally significant conversation; and to many others that I've bounced my thoughts on this topic off of. Introduction What is Eliezer-consequentialism? In a nutshell, I think it's the way that some physical structures monotonically accumulate patterns in the world. Some of these patterns afford influence over other patterns, and some physical structures monotonically accumulate patterns-that-matter in particular -- resources. We call such a resource accumulator a consequentialist -- or, equivalently, an "agent," an "intelligence," etc. A consequentialist understood in this way is (1) a coherent profile of reflexes (a set of behavioral reflexes that together monotonically take in resources) plus (2) an inventory (some place where accumulated resources can be stored with better than background-chance reliability.) Note that an Eliezer-consequentialist is not necessarily a consequentialist in the normative ethics sense of the term. By consequentialists we'll just mean agents, including wholly amoral agents. I'll freely use the terms 'consequentialism' and 'consequentialist' henceforth with this meaning, without fretting any more about this confusion. Path to Impact I noticed hanging around the MATS London office that even full-time alignment researchers disagree quite a bit about what consequentialism involves. I'm betting here that my Eliezer-model is good enough that I've understood his ideas on the topic better than many others have, and can concisely communicate this better understanding. Since most of the possible positive impact of this effort lives in the fat tail of outcomes where it makes a lot of Eliezerisms click for a lot of alignment workers, I'll make this an effortpost. The Ideas to be Clarified I've noticed that Eliezer seems to think the von Neumann-Morgenstern (VNM) theorem is obviously far reaching in a way that few others do. Understand the concept of VNM rationality, which I recommend learning from the Wikipedia article... Von Neumann and Morgenstern showed that any agent obeying a few simple consistency axioms acts with preferences characterizable by a utility function. MIRI Research Guide (2015) Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...? No, it's like when you don't, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple. I have still not figured out the homework exercises to convey to somebody the Word of Power which is "coherence" by which they will be able to look at the water, and see "coherence" in places like a cat walking across the room without tripping over itself. 
When you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is... still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet? And one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identifie...

Hello APGD
Richard Minino of Rock 'Em Socks and the artist behind "Horsebites"

Hello APGD

Play Episode Listen Later Jan 6, 2023 80:44


Meet Richard (a.k.a. Horsebites), co-founder of The VNM, founding member of The Black Axe, professional artist, drummer, Audubon Park resident, and all-around swell guy. This episode coincides with two January Orlando Music History events, with an art show (featuring Horsebites) which is on display at Stardust Video & Coffee for the month of January. https://linktr.ee/helloapgdpod https://www.helloapgd.com/hello-apgd-podcast https://instagram.com/helloapgdpod?igshid=NzNkNDdiOGI=

The Nonlinear Library
LW - Why The Focus on Expected Utility Maximisers? by DragonGod

The Nonlinear Library

Play Episode Listen Later Dec 27, 2022 7:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why The Focus on Expected Utility Maximisers?, published by DragonGod on December 27, 2022 on LessWrong. Epistemic Status Unsure, partially noticing my own confusion. Hoping Cunningham's Law can help resolve it. Confusions About Arguments From Expected Utility Maximisation Some MIRI people (e.g. Rob Bensinger) still highlight EU maximisers as the paradigm case for existentially dangerous AI systems. I'm confused by this for a few reasons: Not all consequentialist/goal directed systems are expected utility maximisers E.g. humans Some recent developments make me sceptical that VNM expected utility are a natural form of generally intelligent systems Wentworth's subagents provide a model for inexploitable agents that don't maximise a simple unitary utility function The main requirement for subagents to be a better model than unitary agents is path dependent preferences or hidden state variables Alternatively, subagents natively admit partial orders over preferences If I'm not mistaken, utility functions seem to require a (static) total order over preferences This might be a very unreasonable ask; it does not seem to describe humans, animals, or even existing sophisticated AI systems I think the strongest implication of Wentworth's subagents is that expected utility maximisation is not the limit or idealised form of agency Shard Theory suggests that trained agents (via reinforcement learning) form value "shards" Values are inherently "contextual influences on decision making" Hence agents do not have a static total order over preferences (what a utility function implies) as what preferences are active depends on the context Preferences are dynamic (change over time), and the ordering of them is not necessarily total This explains many of the observed inconsistencies in human decision making A multitude of value shards do not admit analysis as a simple unitary utility function Reward is not the optimisation target Reinforcement learning does not select for reward maximising agents in general Reward "upweight certain kinds of actions in certain kinds of situations, and therefore reward chisels cognitive grooves into agents" I'm thus very sceptical that systems optimised via reinforcement learning to be capable in a wide variety of domains/tasks converge towards maximising a simple expected utility function I am not aware that humanity actually knows training paradigms that select for expected utility maximisers Our most capable/economically transformative AI systems are not agents and are definitely not expected utility maximisers Such systems might converge towards general intelligence under sufficiently strong selection pressure but do not become expected utility maximisers in the limit. They do not become agents in the limit, and expected utility maximisation is a particular kind of agency I am seriously entertaining the hypothesis that expected utility maximisation is anti-natural to selection for general intelligence I'm not under the impression that systems optimised by stochastic gradient descent to be generally capable optimisers converge towards expected utility maximisers The generally capable optimisers produced by evolution aren't expected utility maximisers I'm starting to suspect that "search like" optimisation processes for general intelligence do not in general converge towards expected utility maximisers I.e.
it may end up being the case that the only way to create a generally capable expected utility maximiser is to explicitly design one And we do not know how to design capable optimisers for rich environments We can't even design an image classifier I currently disbelieve the strong orthogonality thesis translated to practice While it may be in theory feasible to design systems at any intelligence level with any final goal In practice, we cannot design capab...
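As a concrete gloss on the claim that subagents admit partial orders over preferences, here is a toy Python sketch of my own (not Wentworth's formalism and not from the post): a composite agent that prefers one option to another only when both subagents weakly agree, which leaves some pairs incomparable and so cannot be summarised by a single total order or utility function.

from itertools import combinations

# Two subagents with different utility functions over the same options.
# The composite agent prefers x to y only when both subagents weakly agree
# (and at least one strictly prefers x), giving a partial, not total, order.
u1 = {"A": 2, "B": 1, "C": 3}
u2 = {"A": 1, "B": 2, "C": 3}

def composite_prefers(x, y):
    return u1[x] >= u1[y] and u2[x] >= u2[y] and (u1[x] > u1[y] or u2[x] > u2[y])

for x, y in combinations(u1, 2):
    if composite_prefers(x, y):
        print(x, "preferred to", y)
    elif composite_prefers(y, x):
        print(y, "preferred to", x)
    else:
        print(x, "and", y, "are incomparable")   # no single utility function ranks them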

The Nonlinear Library: LessWrong
LW - Why The Focus on Expected Utility Maximisers? by DragonGod

The Nonlinear Library: LessWrong

Play Episode Listen Later Dec 27, 2022 7:12


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why The Focus on Expected Utility Maximisers?, published by DragonGod on December 27, 2022 on LessWrong. Epistemic Status Unsure, partially noticing my own confusion. Hoping Cunningham's Law can help resolve it. Confusions About Arguments From Expected Utility Maximisation Some MIRI people (e.g. Rob Bensinger) still highlight EU maximisers as the paradigm case for existentially dangerous AI systems. I'm confused by this for a few reasons: Not all consequentialist/goal directed systems are expected utility maximisers E.g. humans Some recent developments make me sceptical that VNM expected utility are a natural form of generally intelligent systems Wentworth's subagents provide a model for inexploitable agents that don't maximise a simple unitary utility function The main requirement for subagents to be a better model than unitary agents is path dependent preferences or hidden state variables Alternatively, subagents natively admit partial orders over preferences If I'm not mistaken, utility functions seem to require a (static) total order over preferences This might be a very unreasonable ask; it does not seem to describe humans, animals, or even existing sophisticated AI systems I think the strongest implication of Wentworth's subagents is that expected utility maximisation is not the limit or idealised form of agency Shard Theory suggests that trained agents (via reinforcement learning) form value "shards" Values are inherently "contextual influences on decision making" Hence agents do not have a static total order over preferences (what a utility function implies) as what preferences are active depends on the context Preferences are dynamic (change over time), and the ordering of them is not necessarily total This explains many of the observed inconsistencies in human decision making A multitude of value shards do not admit analysis as a simple unitary utility function Reward is not the optimisation target Reinforcement learning does not select for reward maximising agents in general Reward "upweight certain kinds of actions in certain kinds of situations, and therefore reward chisels cognitive grooves into agents" I'm thus very sceptical that systems optimised via reinforcement learning to be capable in a wide variety of domains/tasks converge towards maximising a simple expected utility function I am not aware that humanity actually knows training paradigms that select for expected utility maximisers Our most capable/economically transformative AI systems are not agents and are definitely not expected utility maximisers Such systems might converge towards general intelligence under sufficiently strong selection pressure but do not become expected utility maximisers in the limit The do not become agents in the limit and expected utility maximisation is a particular kind of agency I am seriously entertaining the hypothesis that expected utility maximisation is anti-natural to selection for general intelligence I'm not under the impression that systems optimised by stochastic gradient descent to be generally capable optimisers converge towards expected utility maximisers The generally capable optimisers produced by evolution aren't expected utility maximisers I'm starting to suspect that "search like" optimisation processes for general intelligence do not in general converge towards expected utility 
maximisers I.e. it may end up being the case that the only way to create a generally capable expected utility maximiser is to explicitly design one And we do not know how to design capable optimisers for rich environments We can't even design an image classifier I currently disbelieve the strong orthogonality thesis translated to practice While it may be in theory feasible to design systems at any intelligence level with any final goal In practice, we cannot design capab...

The Nonlinear Library
LW - Geometric Rationality is Not VNM Rational by Scott Garrabrant

The Nonlinear Library

Play Episode Listen Later Nov 27, 2022 5:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geometric Rationality is Not VNM Rational, published by Scott Garrabrant on November 27, 2022 on LessWrong. One elephant in the room throughout my geometric rationality sequence is that it is sometimes advocating for randomizing between actions, and so geometrically rational agents cannot possibly satisfy the Von Neumann–Morgenstern axioms. That is correct: I am rejecting the VNM axioms. In this post, I will say more about why I am making such a bold move. A Model of Geometric Rationality. I have been rather vague on what I mean by geometric rationality. I still want to be vague in general, but for the purposes of this post, I will give a concrete definition, and I will use the type signature of the VNM utility theorem. (I do not think this definition is good enough, and want it to restrict its scope to this post.) A preference ordering on lotteries over outcomes is called geometrically rational if there exists some probability distribution P over interval-valued utility functions on outcomes such that L⪯M if and only if G_{U∼P} E_{O∼L} U(O) ≤ G_{U∼P} E_{O∼M} U(O) (where G denotes a geometric expectation). For comparison, an agent is VNM rational if there exists a single utility function U, such that L⪯M if and only if E_{O∼L} U(O) ≤ E_{O∼M} U(O). Geometric Rationality is weaker than VNM rationality, since under reasonable assumptions, we can assume the utility function of a VNM rational agent is interval-valued, and then we can always take the probability distribution that assigns probability 1 to this utility function. Geometric Rationality is strictly weaker, because it sometimes strictly prefers lotteries over any of the deterministic outcomes, and VNM rational agents never do this. The VNM utility theorem says that any preference ordering on lotteries that satisfies some simple axioms must be VNM rational (i.e. have a utility function as above). Since I am advocating for a weaker notion of rationality, I must reject some of these axioms. Against Independence. The VNM axiom that I am rejecting is the independence axiom. It states that given lotteries A, B, and C, and probability p, A⪯B if and only if pC+(1−p)A⪯pC+(1−p)B. Thus, mixing in a probability p of C will not change my preference between A and B. Let us go through an example. Alice and Bob are a married couple. They are trying to decide where to move, buy a house, and live for the rest of their lives. Alice prefers Atlanta, Bob prefers Boston. The agent I am modeling here is the married couple consisting of Alice and Bob. Bob's preference for Boston is sufficiently stronger than Alice's preference for Atlanta, that given only these options, they would move to Boston (A≺B). Bob is presented with a unique job opportunity, where he (and Alice) can move to California, and try to save the world. However, he does not actually have a job offer yet. They estimate an 80 percent chance that he will get a job offer next week. Otherwise, they will move to Atlanta or Boston. California is a substantial improvement for Bob's preferences over either of the other options. For Alice, it is comparable to Boston. Alice and Bob are currently deciding on a policy of what to do conditional on getting and not getting the offer. It is clear that if they get the offer, they will move to California.
However, they figure that since Bob's preferences are in expectation being greatly satisfied in the 80 percent of worlds where they are in California, they should move to Atlanta if they do not get the offer (pC+(1−p)B≺pC+(1−p)A). Alice and Bob are collectively violating the independence axiom, and are not VNM rational. Are they making a mistake? Should we not model them as irrational due to their weird obsession with fairness? Dutch Books and Updatelessness. You might claim that abandoning the independence axiom opens Alice and Bob up to getting Dutch booked. The argument would go as follows. First, you offer ...
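To see the independence violation numerically, here is a small Python sketch of the Alice-and-Bob example with invented utilities in [0,1]; it scores a lottery by the geometric mean of the two spouses' expected utilities, which is the definition above with P uniform over Alice's and Bob's utility functions.

from math import prod

# Invented utilities in [0,1] for a stylised version of the example above.
u_alice = {"atlanta": 1.0, "boston": 0.4, "california": 0.4}
u_bob   = {"atlanta": 0.1, "boston": 0.6, "california": 1.0}
people  = [u_alice, u_bob]

def expected_u(u, lottery):
    return sum(p * u[o] for o, p in lottery.items())

def geometric_score(lottery):
    # Geometric mean (P uniform over the two spouses) of each person's expected utility.
    return prod(expected_u(u, lottery) for u in people) ** (1 / len(people))

head_to_head = [{"boston": 1.0}, {"atlanta": 1.0}]
policies     = [{"california": 0.8, "boston": 0.2}, {"california": 0.8, "atlanta": 0.2}]

print([round(geometric_score(l), 2) for l in head_to_head])  # [0.49, 0.32]: Boston beats Atlanta outright
print([round(geometric_score(l), 2) for l in policies])      # [0.61, 0.65]: yet the Atlanta fallback wins,
                                                             # violating the independence axiom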

The Nonlinear Library: LessWrong
LW - Geometric Rationality is Not VNM Rational by Scott Garrabrant

The Nonlinear Library: LessWrong

Play Episode Listen Later Nov 27, 2022 5:39


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geometric Rationality is Not VNM Rational, published by Scott Garrabrant on November 27, 2022 on LessWrong. One elephant in the room throughout my geometric rationality sequence is that it is sometimes advocating for randomizing between actions, and so geometrically rational agents cannot possibly satisfy the Von Neumann–Morgenstern axioms. That is correct: I am rejecting the VNM axioms. In this post, I will say more about why I am making such a bold move. A Model of Geometric Rationality. I have been rather vague on what I mean by geometric rationality. I still want to be vague in general, but for the purposes of this post, I will give a concrete definition, and I will use the type signature of the VNM utility theorem. (I do not think this definition is good enough, and want it to restrict its scope to this post.) A preference ordering on lotteries over outcomes is called geometrically rational if there exists some probability distribution P over interval-valued utility functions on outcomes such that L⪯M if and only if G_{U∼P} E_{O∼L} U(O) ≤ G_{U∼P} E_{O∼M} U(O) (where G denotes a geometric expectation). For comparison, an agent is VNM rational if there exists a single utility function U, such that L⪯M if and only if E_{O∼L} U(O) ≤ E_{O∼M} U(O). Geometric Rationality is weaker than VNM rationality, since under reasonable assumptions, we can assume the utility function of a VNM rational agent is interval-valued, and then we can always take the probability distribution that assigns probability 1 to this utility function. Geometric Rationality is strictly weaker, because it sometimes strictly prefers lotteries over any of the deterministic outcomes, and VNM rational agents never do this. The VNM utility theorem says that any preference ordering on lotteries that satisfies some simple axioms must be VNM rational (i.e. have a utility function as above). Since I am advocating for a weaker notion of rationality, I must reject some of these axioms. Against Independence. The VNM axiom that I am rejecting is the independence axiom. It states that given lotteries A, B, and C, and probability p, A⪯B if and only if pC+(1−p)A⪯pC+(1−p)B. Thus, mixing in a probability p of C will not change my preference between A and B. Let us go through an example. Alice and Bob are a married couple. They are trying to decide where to move, buy a house, and live for the rest of their lives. Alice prefers Atlanta, Bob prefers Boston. The agent I am modeling here is the married couple consisting of Alice and Bob. Bob's preference for Boston is sufficiently stronger than Alice's preference for Atlanta, that given only these options, they would move to Boston (A≺B). Bob is presented with a unique job opportunity, where he (and Alice) can move to California, and try to save the world. However, he does not actually have a job offer yet. They estimate an 80 percent chance that he will get a job offer next week. Otherwise, they will move to Atlanta or Boston. California is a substantial improvement for Bob's preferences over either of the other options. For Alice, it is comparable to Boston. Alice and Bob are currently deciding on a policy of what to do conditional on getting and not getting the offer. It is clear that if they get the offer, they will move to California.
However, they figure that since Bob's preferences are in expectation being greatly satisfied in the 80 percent of worlds where they are in California, they should move to Atlanta if they do not get the offer (pC+(1−p)B≺pC+(1−p)A). Alice and Bob are collectively violating the independence axiom, and are not VNM rational. Are they making a mistake? Should we not model them as irrational due to their weird obsession with fairness? Dutch Books and Updatelessness. You might claim that abandoning the independence axiom opens Alice and Bob up to getting Dutch booked. The argument would go as follows. First, you offer ...

The Nonlinear Library: LessWrong Daily
LW - Geometric Rationality is Not VNM Rational by Scott Garrabrant

The Nonlinear Library: LessWrong Daily

Play Episode Listen Later Nov 27, 2022 5:39


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geometric Rationality is Not VNM Rational, published by Scott Garrabrant on November 27, 2022 on LessWrong. One elephant in the room throughout my geometric rationality sequence is that it is sometimes advocating for randomizing between actions, and so geometrically rational agents cannot possibly satisfy the Von Neumann–Morgenstern axioms. That is correct: I am rejecting the VNM axioms. In this post, I will say more about why I am making such a bold move. A Model of Geometric Rationality. I have been rather vague on what I mean by geometric rationality. I still want to be vague in general, but for the purposes of this post, I will give a concrete definition, and I will use the type signature of the VNM utility theorem. (I do not think this definition is good enough, and want it to restrict its scope to this post.) A preference ordering on lotteries over outcomes is called geometrically rational if there exists some probability distribution P over interval-valued utility functions on outcomes such that L⪯M if and only if G_{U∼P} E_{O∼L} U(O) ≤ G_{U∼P} E_{O∼M} U(O) (where G denotes a geometric expectation). For comparison, an agent is VNM rational if there exists a single utility function U, such that L⪯M if and only if E_{O∼L} U(O) ≤ E_{O∼M} U(O). Geometric Rationality is weaker than VNM rationality, since under reasonable assumptions, we can assume the utility function of a VNM rational agent is interval-valued, and then we can always take the probability distribution that assigns probability 1 to this utility function. Geometric Rationality is strictly weaker, because it sometimes strictly prefers lotteries over any of the deterministic outcomes, and VNM rational agents never do this. The VNM utility theorem says that any preference ordering on lotteries that satisfies some simple axioms must be VNM rational (i.e. have a utility function as above). Since I am advocating for a weaker notion of rationality, I must reject some of these axioms. Against Independence. The VNM axiom that I am rejecting is the independence axiom. It states that given lotteries A, B, and C, and probability p, A⪯B if and only if pC+(1−p)A⪯pC+(1−p)B. Thus, mixing in a probability p of C will not change my preference between A and B. Let us go through an example. Alice and Bob are a married couple. They are trying to decide where to move, buy a house, and live for the rest of their lives. Alice prefers Atlanta, Bob prefers Boston. The agent I am modeling here is the married couple consisting of Alice and Bob. Bob's preference for Boston is sufficiently stronger than Alice's preference for Atlanta, that given only these options, they would move to Boston (A≺B). Bob is presented with a unique job opportunity, where he (and Alice) can move to California, and try to save the world. However, he does not actually have a job offer yet. They estimate an 80 percent chance that he will get a job offer next week. Otherwise, they will move to Atlanta or Boston. California is a substantial improvement for Bob's preferences over either of the other options. For Alice, it is comparable to Boston. Alice and Bob are currently deciding on a policy of what to do conditional on getting and not getting the offer. It is clear that if they get the offer, they will move to California.
However, they figure that since Bob's preferences are in expectation being greatly satisfied in the 80 percent of worlds where they are in California, they should move to Atlanta if they do not get the offer (pC+(1−p)B≺pC+(1−p)A). Alice and Bob are collectively violating the independence axiom, and are not VNM rational. Are they making a mistake? Should we not model them as irrational due to their weird obsession with fairness? Dutch Books and Updatelessness. You might claim that abandoning the independence axiom opens Alice and Bob up to getting Dutch booked. The argument would go as follows. First, you offer ...

The Nonlinear Library
LW - Utilitarianism Meets Egalitarianism by Scott Garrabrant

The Nonlinear Library

Play Episode Listen Later Nov 22, 2022 10:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Utilitarianism Meets Egalitarianism, published by Scott Garrabrant on November 21, 2022 on LessWrong. This post is mostly propaganda for the Nash Bargaining solution, but also sets up some useful philosophical orientation. This post is also the first post in my geometric rationality sequence. Utilitarianism Let's pretend that you are a utilitarian. You want to satisfy everyone's goals, and so you go behind the veil of ignorance. You forget who you are. Now, you could be anybody. You now want to maximize expected expected utility. The outer (first) expectation is over your uncertainty about who you are. The inner (second) expectation is over your uncertainty about the world, as well as any probabilities that comes from you choosing to include randomness in your action. There is a problem. Actually, there are two problems, but they disguise themselves as one problem. The first problem is that it is not clear where you should get your distribution over your identity from. It does not make sense to just take the uniform distribution; there are many people you can be, and they exist to different extents, especially if you include potential future people whose existences are uncertain. The second problem is that interpersonal utility comparisons don't make sense. Utility functions are not a real thing. Instead, there are preferences over uncertain worlds. If a person's preferences satisfy the VNM axioms, then we can treat that person as having a utility function, but the real thing is more like their preference ordering. When we get utility functions this way, they are only defined up to affine transformation. If you add a constant to a utility function, or multiply a utility function by a positive constant, you get the same preferences. Before you can talk about maximizing the expectation over your uncertainty about who you are, you need to put all the different possible utility functions into comparable units. This involves making a two dimensional choice. You have to choose a zero point for each person, together with a scaling factor for how much their utility goes up as their preferences are satisfied. Luckily, to implement the procedure of maximizing expected expected utility, you don't actually need to know the zero points, since these only shift expected expected utility by a constant. You do, however need to know the scaling factors. This is not an easy task. You cannot just say something like "Make all the scaling factors 1." You don't actually start with utility functions, you start with equivalence classes of utility functions. Thus, to implement utilitarianism, we need to know two things: What is the distribution on people, and how do you scale each person's utilities? This gets disguised as one problem, since the thing you do with these numbers is just multiply them together to get a single weight, but it is actually two things you need to decide. What can we do? Egalitarianism Now, let's pretend you are an egalitarian. You still want to satisfy everyone's goals, and so you go behind the veil of ignorance, and forget who you are. The difference is that now you are not trying to maximize expected expected utility, and instead are trying to maximize worst-case expected utility. Again, the expectation contains uncertainty about the world as well as any randomness in your action. 
The "worst-case" part is about your uncertainty about who you are. You would like to have reasonably high expected utility, regardless of who you might be. When I say maximize worst-case expected utility, I am sweeping some details under the rug about what to do if you manage to max out someone's utility. The actual proposal is to maximize the minimum utility over all people. Then if there are multiple ways to do this, consider the set of all people for which it is still possible to incre...

RAP KONTENER
Pierwszy sezon Rap Kontenera za nami I RAP KONTENER odcinek #11

RAP KONTENER

Play Episode Listen Later Aug 30, 2022 60:24


You had the chance to hear classics live, including Tede's WIELKIE JOŁ, Pono's TEMAT ZAKAZANY and Molesta's WIEDZIAŁEM, ŻE TAK BĘDZIE. So let's step behind the microphones once more: recall with Sokół how the legendary track W AUCIE came to be, share a glass of lemon vodka with VNM, see what WdoWA thinks of young female rappers, and find out who came up with the name PROCEDER. That's only part of what comes up in this special BEST OF episode. We're back in September with a new season, batteries recharged after the holidays. Are you looking forward to it?

The Nonlinear Library
AF - Vingean Agency by Abram Demski

The Nonlinear Library

Play Episode Listen Later Aug 24, 2022 4:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vingean Agency, published by Abram Demski on August 24, 2022 on The AI Alignment Forum. I've been involved with several discussions about different notions of agency (and their importance/relationships) lately, especially with the PIBBSS group including myself, Daniel, Josiah, and Ramana; see here. There's one notion of agency (not necessarily "The" notion of agency, but a coherent and significant notion) which vanishes if you examine it too closely. Imagine that Alice is "smarter than Bob in every way" -- that is, Bob believes that Alice knows everything Bob knows, and possibly more. Bob doesn't necessarily agree with Alice's goals, but Bob expects Alice to pursue them effectively. In particular, Bob expects Alice's actions to be at least as effective as the best plan Bob can think of. Because Bob can't predict what Alice will do, the only way Bob can further constrain his expectations is to figure out what's good/bad for Alice's objectives. In some sense this seems like a best-case for Bob modeling Alice as an agent: Bob understands Alice purely by understanding her as a goal-seeking force. I'll call this Vingean agency, since Vinge talked about the difficulty of predicting agents who are smarter than you. and since this usage is consistent with other uses of the term "Vingean" in relation to decision theory. However Vingean agency might seem hard to reconcile with other notions of agency. We typically think of "modeling X as an agent" as involving attribution of beliefs to X, not just goals. Agents have probabilities and utilities. Bob has minimal use for attributing beliefs to Alice, because Bob doesn't think Alice is mistaken about anything -- the best he can do is to use his own beliefs as a proxy, and try to figure out what Alice will do based on that. When I say Vingean agency "disappears when we look at it too closely", I mean that if Bob becomes smarter than Alice (understands more about the world, or has a greater ability to calculate the consequences of his beliefs), Alice's Vingean agency will vanish. We can imagine a spectrum. At one extreme is an Alice who knows everything Bob knows and more, like we've been considering so far. At the other extreme is an Alice whose behavior is so simple that Bob can predict it completely. In between these two extremes are Alices who know some things that Bob doesn't know, while also lacking some information which Bob has. (Arguably, Eliezer's notion of optimization power is one formalization of Vingean agency, while Alex Flint's attraction-basin notion of optimization defines a notion of agency at the opposite extreme of the spectrum, where we know everything about the whole system and can predict its trajectories through time.) I think this spectrum may be important to keep in mind when modeling different notions of agency. Sometimes we analyze agents from a logically omniscient perspective. In representation theorems (such as Savage or Jeffrey-Bolker, or their lesser sibling, VNM) we tend to take on a perspective where we can predict all the decisions of an agent (including hypothetical decisions which the agent will never face in reality). From this omniscient perspective, we then seek to represent the agent's behavior by ascribing it beliefs and real-valued preferences (ie, probabilities and expected utilities). 
However, this omniscient perspective eliminates Vingean agency from the picture. Thus, we might lose contact with one of the important pieces of the "agent" phenomenon, which can only be understood from a more bounded perspective. On the other hand, if Bob knows Alice wants cheese, then as soon as Alice starts moving in a given direction, Bob might usefully conclude "Alice probably thinks cheese is in that direction". So modeling Alice as having beliefs is certainly not useless for Bob. Still, because Bob ...
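
One way to see the "disappears when you look too closely" point in the description is to write down what Bob's Vingean prediction actually is: not a pointwise guess at Alice's action, but a lower bound derived from the best plan Bob himself can evaluate. The sketch below is not from the post; the actions and values are invented.

```python
# What each action is worth for Alice's goal (Alice may see options Bob can't
# think of himself, but Bob can still score actions against her goal).
value_for_alice = {"walk": 1.0, "drive": 3.0, "teleport": 7.0}

# The plans Bob is able to come up with and evaluate on his own.
plans_bob_can_think_of = ["walk", "drive"]

# Vingean constraint: Alice does at least as well as Bob's best plan.
bound = max(value_for_alice[p] for p in plans_bob_can_think_of)  # 3.0

# Bob's "prediction": the set of actions consistent with that bound.
consistent = [a for a, v in value_for_alice.items() if v >= bound]
print(consistent)  # ['drive', 'teleport'] -- 'walk' is ruled out, nothing more
```

If Bob's plan set grows until it covers everything Alice can do, here the bound would pin down her action exactly and the purely goal-based prediction would have nothing left to add - mirroring the claim that this notion of agency vanishes from a sufficiently informed perspective.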

RAP KONTENER
VNM | RAP KONTENER odcinek #2

RAP KONTENER

Play Episode Listen Later Jul 6, 2022 61:01


This time RAP KONTENER is visited by VNM, representing the DE NEKST BEST label. The conversation covers V's entire musical journey - from his beginnings in Elbląg, through gold records at PROSTO, to his latest mixtape bringing together three generations of Polish rappers. Heavier topics come up too - illness, depression, rap as therapy - but there's no shortage of lighter moments: a lemon vodka tasting, the "1 of 2" questionnaire, live raps, and memories from the studio and the stage.

The Nonlinear Library
LW - What is ambitious value learning? by rohinmshah from Value Learning

The Nonlinear Library

Play Episode Listen Later Dec 24, 2021 4:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Value Learning, Part 2: What is ambitious value learning?, published by rohinmshah. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. I think of ambitious value learning as a proposed solution to the specification problem, which I define as the problem of defining the behavior that we would want to see from our AI system. I italicize “defining” to emphasize that this is not the problem of actually computing behavior that we want to see -- that's the full AI safety problem. Here we are allowed to use hopelessly impractical schemes, as long as the resulting definition would allow us to in theory compute the behavior that an AI system would take, perhaps with assumptions like infinite computing power or arbitrarily many queries to a human. (Although we do prefer specifications that seem like they could admit an efficient implementation.) In terms of DeepMind's classification, we are looking for a design specification that exactly matches the ideal specification. HCH and indirect normativity are examples of attempts at such specifications. We will consider a model in which our AI system is maximizing the expected utility of some explicitly represented utility function that can depend on history. (It does not matter materially whether we consider utility functions or reward functions, as long as they can depend on history.) The utility function may be learned from data, or designed by hand, but it must be an explicit part of the AI that is then maximized. I will not justify this model for now, but simply assume it by fiat and see where it takes us. I'll note briefly that this model is often justified by the VNM utility theorem and AIXI, and as the natural idealization of reinforcement learning, which aims to maximize the expected sum of rewards, although typically rewards in RL depend only on states. A lot of conceptual arguments, as well as experiences with specification gaming, suggest that we are unlikely to be able to simply think hard and write down a good specification, since even small errors in specifications can lead to bad results. However, machine learning is particularly good at narrowing down on the correct hypothesis among a vast space of possibilities using data, so perhaps we could determine a good specification from some suitably chosen source of data? This leads to the idea of ambitious value learning, where we learn an explicit utility function from human behavior for the AI to maximize. This is very related to inverse reinforcement learning (IRL) in the machine learning literature, though not all work on IRL is relevant to ambitious value learning. For example, much work on IRL is aimed at imitation learning, which would in the best case allow you to match human performance, but not to exceed it. Ambitious value learning is, well, more ambitious -- it aims to learn a utility function that captures “what humans care about”, so that an AI system that optimizes this utility function more capably can exceed human performance, making the world better for humans than they could have done themselves. It may sound like we would have solved the entire AI safety problem if we could do ambitious value learning -- surely if we have a good utility function we would be done. Why then do I think of it as a solution to just the specification problem? 
This is because ambitious value learning by itself would not be enough for safety, except under the assumption of as much compute and data as desired. These are really powerful assumptions -- for example, I'm assuming you can get data where you put a human in an arbitrarily complicated simulated environment with fake memories of their life so far and see what they do. This allows us to ignore many things that would likely be a problem in practice, such as: Attempting to use the utility ...
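
The core move the description names - treating observed human behaviour as evidence about an explicit utility function - can be shown in miniature. The sketch below is a generic textbook-style simplification rather than anything from the post: it assumes a small fixed set of candidate utility functions and Boltzmann-rational (noisily rational) choices, and every number in it is invented.

```python
import math

options = ["apple", "cake", "kale"]

# Hypothetical candidate utility functions the learner considers.
candidates = {
    "health-lover": {"apple": 1.0, "cake": -1.0, "kale": 2.0},
    "sweet-tooth":  {"apple": 0.5, "cake": 2.0,  "kale": -1.0},
}

observed_choices = ["cake", "cake", "apple"]  # what the human was seen to pick


def choice_prob(utility, choice):
    # Boltzmann rationality: choose options in proportion to exp(utility)
    z = sum(math.exp(utility[o]) for o in options)
    return math.exp(utility[choice]) / z


# Bayesian update over the candidates, starting from a uniform prior.
posterior = {name: 1.0 / len(candidates) for name in candidates}
for c in observed_choices:
    for name, u in candidates.items():
        posterior[name] *= choice_prob(u, c)

total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}
print(posterior)  # most of the mass lands on "sweet-tooth"
```

Everything hard in ambitious value learning is what this sketch assumes away: where the candidate space comes from, whether human choices actually fit the noise model, and whether the inferred function is safe to optimize far beyond the observed situations.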

The Nonlinear Library
LW - Coherence arguments do not entail goal-directed behavior by rohinmshah from Value Learning

The Nonlinear Library

Play Episode Listen Later Dec 24, 2021 11:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Value Learning, Part 9: Coherence arguments do not entail goal-directed behavior, published by rohinmshah. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. One of the most pleasing things about probability and expected utility theory is that there are many coherence arguments that suggest that these are the “correct” ways to reason. If you deviate from what the theory prescribes, then you must be executing a dominated strategy. There must be some other strategy that never does any worse than your strategy, but does strictly better than your strategy with certainty in at least one situation. There's a good explanation of these arguments here. We shouldn't expect mere humans to be able to notice any failures of coherence in a superintelligent agent, since if we could notice these failures, so could the agent. So we should expect that powerful agents appear coherent to us. (Note that it is possible that the agent doesn't fix the failures because it would not be worth it -- in this case, the argument says that we will not be able to notice any exploitable failures.) Taken together, these arguments suggest that we should model an agent much smarter than us as an expected utility (EU) maximizer. And many people agree that EU maximizers are dangerous. So does this mean we're doomed? I don't think so: it seems to me that the problems about EU maximizers that we've identified are actually about goal-directed behavior or explicit reward maximizers. The coherence theorems say nothing about whether an AI system must look like one of these categories. This suggests that we could try building an AI system that can be modeled as an EU maximizer, yet doesn't fall into one of these two categories, and so doesn't have all of the problems that we worry about. Note that there are two different flavors of arguments that the AI systems we build will be goal-directed agents (which are dangerous if the goal is even slightly wrong): Simply knowing that an agent is intelligent lets us infer that it is goal-directed. (EDIT: See these comments for more details on this argument.) Humans are particularly likely to build goal-directed agents. I will only be arguing against the first claim in this post, and will talk about the second claim in the next post. All behavior can be rationalized as EU maximization Suppose we have access to the entire policy of an agent, that is, given any universe-history, we know what action the agent will take. Can we tell whether the agent is an EU maximizer? Actually, no matter what the policy is, we can view the agent as an EU maximizer. The construction is simple: the agent can be thought as optimizing the utility function U, where U(h, a) = 1 if the policy would take action a given history h, else 0. Here I'm assuming that U is defined over histories that are composed of states/observations and actions. The actual policy gets 1 utility at every timestep; any other policy gets less than this, so the given policy perfectly maximizes this utility function. This construction has been given before, eg. at the bottom of page 6 of this paper. (I think I've seen it before too, but I can't remember where.) But wouldn't this suggest that the VNM theorem has no content? Well, we assumed that we were looking at the policy of the agent, which led to a universe-history deterministically. 
We didn't have access to any probabilities. Given a particular action, we knew exactly what the next state would be. Most of the axioms of the VNM theorem make reference to lotteries and probabilities -- if the world is deterministic, then the axioms simply say that the agent must have transitive preferences over outcomes. Given that we can only observe the agent choose one history over another, we can trivially construct a transitive preference ordering by ...
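
The construction the description sketches - U(h, a) = 1 if the policy would take action a at history h, else 0 - is easy to make concrete. Only the construction itself comes from the post; the toy policy and the two-action world below are invented.

```python
def make_indicator_utility(policy):
    # U(history, action) = 1 exactly when the policy takes that action
    return lambda history, action: 1.0 if policy(history) == action else 0.0


def some_policy(history):
    # an arbitrary deterministic policy over histories (tuples of past actions)
    return "left" if len(history) % 2 == 0 else "right"


U = make_indicator_utility(some_policy)

history = ()
for _ in range(3):
    best = max(["left", "right"], key=lambda a: U(history, a))
    assert best == some_policy(history)  # maximizing U just replays the policy
    history += (best,)

print(history)  # ('left', 'right', 'left')
```

Because every policy trivially maximizes its own indicator utility, "can be modeled as an EU maximizer" places no constraint on behaviour by itself - which is the point the post builds on.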

Magazyn Muzyczny
De Nekst Best Mixtape 2

Magazyn Muzyczny

Play Episode Listen Later Dec 23, 2021 25:53


De Nekst Best Mixtape 2: VNM and DJ Hubson visited the Radio Kampus studio!

The Nonlinear Library: Alignment Forum Top Posts
Coherence arguments do not imply goal-directed behavior

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 4, 2021 11:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Coherence arguments do not imply goal-directed behavior, published by Rohin Shah on the AI Alignment Forum. One of the most pleasing things about probability and expected utility theory is that there are many coherence arguments that suggest that these are the “correct” ways to reason. If you deviate from what the theory prescribes, then you must be executing a dominated strategy. There must be some other strategy that never does any worse than your strategy, but does strictly better than your strategy with certainty in at least one situation. There's a good explanation of these arguments here. We shouldn't expect mere humans to be able to notice any failures of coherence in a superintelligent agent, since if we could notice these failures, so could the agent. So we should expect that powerful agents appear coherent to us. (Note that it is possible that the agent doesn't fix the failures because it would not be worth it -- in this case, the argument says that we will not be able to notice any exploitable failures.) Taken together, these arguments suggest that we should model an agent much smarter than us as an expected utility (EU) maximizer. And many people agree that EU maximizers are dangerous. So does this mean we're doomed? I don't think so: it seems to me that the problems about EU maximizers that we've identified are actually about goal-directed behavior or explicit reward maximizers. The coherence theorems say nothing about whether an AI system must look like one of these categories. This suggests that we could try building an AI system that can be modeled as an EU maximizer, yet doesn't fall into one of these two categories, and so doesn't have all of the problems that we worry about. Note that there are two different flavors of arguments that the AI systems we build will be goal-directed agents (which are dangerous if the goal is even slightly wrong): Simply knowing that an agent is intelligent lets us infer that it is goal-directed. (EDIT: See these comments for more details on this argument.) Humans are particularly likely to build goal-directed agents. I will only be arguing against the first claim in this post, and will talk about the second claim in the next post. All behavior can be rationalized as EU maximization Suppose we have access to the entire policy of an agent, that is, given any universe-history, we know what action the agent will take. Can we tell whether the agent is an EU maximizer? Actually, no matter what the policy is, we can view the agent as an EU maximizer. The construction is simple: the agent can be thought as optimizing the utility function U, where U(h, a) = 1 if the policy would take action a given history h, else 0. Here I'm assuming that U is defined over histories that are composed of states/observations and actions. The actual policy gets 1 utility at every timestep; any other policy gets less than this, so the given policy perfectly maximizes this utility function. This construction has been given before, eg. at the bottom of page 6 of this paper. (I think I've seen it before too, but I can't remember where.) But wouldn't this suggest that the VNM theorem has no content? Well, we assumed that we were looking at the policy of the agent, which led to a universe-history deterministically. We didn't have access to any probabilities. 
Given a particular action, we knew exactly what the next state would be. Most of the axioms of the VNM theorem make reference to lotteries and probabilities -- if the world is deterministic, then the axioms simply say that the agent must have transitive preferences over outcomes. Given that we can only observe the agent choose one history over another, we can trivially construct a transitive preference ordering by saying that the chosen history is higher in the preference ordering than the one that...

MONEY360
Chứng khoán 14/5: thép đã tăng thế đấy!

MONEY360

Play Episode Listen Later May 14, 2021 40:06


Commentary on the latest highlights of the stock market. Where are steel stocks headed? Has the steel sector's downcycle arrived? Which sectors will investors rotate into? Could consumer-staples stocks such as VNM be the next trend? And what to do when many brokerages have hit their margin-lending ceilings? If you want to participate in the Vietnamese stock market, click the link below to open an account at SSI Securities and get support from broker Nguyễn Cường. Referral ID 2318, Nguyễn Anh Cường. If you are interested in Cường's stock-investing class, please fill in your details here to receive special support reserved for Money360 viewers: https://docs.google.com/forms/d/e/1FAIpQLScGAG6CJWLc9jDG9_VqdHcrcxpBZo9pShEMV5Wiq5bTQPYjiA/viewform

Vaarplezier Podcast
Vaarplezier podcast afl 03 - Herbert Schoenmakers (VNM)

Vaarplezier Podcast

Play Episode Listen Later May 12, 2021 42:49


We talk, among other things, about the launch of the Wajer 77, good news about the Kustzeilers 24-hour race (www.24uurszeilrace.nl), and good news about the boat trade with England. Our Vaarplezier podcast guest is Herbert Schoenmakers of the Verbond Nederlandse Motorbootsport (VNM). We discuss the VNM's role in advocacy and support for the member clubs, the holding-tank issue, the fact that the Doerak comes from Nederhemert, and tractor tours with sloops, and we will cover electric boating as a special in the Vaarplezier Podcast in October and November. Herbert calls on everyone to take part in the Motorboot Varen Prijs: https://motorbootvarenprijs.nl

Thai Pham
THỊ TRƯỜNG CHỨNG KHOÁN LƯỠNG LỰ CHỜ CHỐT QUỸ ETFs

Thai Pham

Play Episode Listen Later Mar 16, 2021 41:52


THE STOCK MARKET HESITATES AHEAD OF THE ETF REBALANCING. The FTSE and VNM ETFs will announce their review results next Friday, so which scenarios could play out for the Vietnamese stock market? Stay tuned, friends! Don't hold back on liking and sharing this podcast so I have more motivation to serve you. Thank you! ⭕ Join my Stock-Market Kungfu course: thaipham.live/khoa-hoc/ ⭕ Join my Designing a Prosperous Life course here: thaipham.live/thiet-ke-cuoc-doi/

Jaja w kuchni
Włoski Strajk i VNM

Jaja w kuchni

Play Episode Listen Later Feb 9, 2021 97:43


The guys from Włoski Strajk talk about pizza and hard times in the restaurant business, while VNM eats their pizza and talks about his latest album.

Half In, Half Out
Episode 15: "There's a split difference." -- Interview Tienna Nguyen (UNC, VNM) - (TW: slurs)

Half In, Half Out

Play Episode Listen Later Oct 14, 2020 61:36


TW: slurs. Tienna Nguyen (UNC, VNM) talks about competing for Vietnam, doing the gymnastics that best works for her, and what it means to be a biromantic asexual.
Connect with us (link tree): https://t.co/bFODrdDitL?amp=1
Resources:
https://blacklivesmatter.carrd.co/
https://www.change.org/p/justice-for-tia-kiaku-justic-for?_pxhc=1592315236506
https://www.change.org/p/honda-100-historically-black-colleges-universities-0-have-gymnastics-it-s-time-for-change
https://www.huffpost.com/entry/16-books-about-race-that-every-white-person-should-read_n_565f37e8e4b08e945fedaf49

dRatschKathl
trifft Franz

dRatschKathl

Play Episode Listen Later Aug 12, 2020 67:20


Equality is on everyone's lips - but how equal are fathers, really, when it comes to separation? Still far from equal, says the VäterNetzwerk München, which is meant to serve as an umbrella organisation for all the associations in Munich that campaign for fathers' rights. Franz, a board member of the VNM, talks in this episode about the association's work and his personal experience as a separated father. https://www.vaeternetzwerk-muenchen.de/

Tutorial
VNM i DJ Hubson, PS5, Call of Duty sezon 4

Tutorial

Play Episode Listen Later Jun 13, 2020 45:28


Another full-fledged episode of Tutorial! Today our guests are VNM and DJ Hubson. Why should VNM's fans play Final Fantasy, and what did DJ Hubson's parents sell to buy him his first console? All will be revealed today! Of course we also cover PlayStation 5 news and take a look at season 4 of Call of Duty. Hosted by Kamil Michałowski and Marcin Osiadacz!

Let's Talk ETFs
Diversifying Away From China: The Long Case For Vietnam (VNM)

Let's Talk ETFs

Play Episode Listen Later Jun 2, 2020 43:27 Transcription Available


With the U.S.-China trade war showing no sign of abating and ongoing COVID-19 imposed shutdowns, the need to diversify global supply chains has never been more acute. WingCapital Investments' Vincent Yip believes Vietnam is the best positioned of the world's emerging economies to be the beneficiary of the inevitable move away from China. Vincent is playing his thesis via the VanEck Vectors Vietnam ETF (VNM). He joins Let's Talk ETFs to walk listeners through the long case for the world's 15th most populous country.
Show Notes
3:00 - A continuing look at the on-the-ground economic situation resulting from COVID-19: Tokyo, Japan
8:30 - Why have Vietnamese equities performed so poorly given the continued strength of the Vietnamese economy?
13:00 - A stronger Vietnamese dong should lift the country's stocks
16:00 - Countries that have gone from net importers to net exporters: The cases of Malaysia and Thailand
22:00 - As supply chains diversify away from China, Vietnam is poised to benefit
25:00 - Beyond Apple: Other multinationals establishing footholds in Vietnam
27:00 - Going under the hood of VNM's top holdings
32:45 - Is VNM's 25% allocation to real estate a good or a bad thing?
35:30 - Is Vietnam's financial sector up to the task of powering its continued growth?
37:00 - Understanding the time horizon for the long case for Vietnam and VNM

Bolesne Poranki ft. Piotr Kędzierski, Arek Ras Sitarz & Jan-Rapowanie

VNM i "Hope for the best". O tym, czy fani wolą go smutnego i jak radzić sobie z promocją płyty w dobie pandemii. Czterdzieści to nowe dwadzieścia, a V i tak na kocu wygrywa. Sprawdźcie sami.

Rap na chacie
Rap na chacie #1: Timbaland czy Pharrell?

Rap na chacie

Play Episode Listen Later May 12, 2019 83:35


There was a prologue called a prequel (or the other way around); now it's time for the first real episode of "Rap na chacie". Ready? This time we talk, among other things, about albums by Donguralesko and Matheo, VNM, KPSN, Metro, Prykson Fisk and Emikae. We reminisce about Pezet and Noon's "Muzyka Klasyczna", Timbaland, Pharrell Williams and 50 Cent. And that's not all - we also have a few albums to give away, including "Joy Division" by Proceente and Mały Esz.

Ras w tygodniu
Ras w tygodniu # Gość: VNM

Ras w tygodniu

Play Episode Listen Later Apr 24, 2019 39:46


We rappers… VNM and Ras ponder whether it's worth trying to help your listeners through your music. [27.03.2019]

Danger Dan's Talk Shop
#150 Richard Minino of THE VNM

Danger Dan's Talk Shop

Play Episode Listen Later Nov 29, 2018


Richard is a badass!!! That's why I had him design next month's T for MCshopTs.com!!! I hope you're signed up 'cause it rules!!! He also has an amazing fleet of Florida cars and a TC chopper. His company VNM has some badass shit and you should go buy it! Meet him in person at the Boogie East Chopper Show next year in DAYTONA! Sign up at MCshopTs.com before Dec 1st so you don't miss out on his t-shirt!!! Damn, I forgot to ask him wtf VNM stands for???
thevnm.com
mooscraft.com
Sportster Giveaway Details: MULTIPLE WAYS TO ENTER: For every month you are a Patron at $5 a month you will get a number for every $5. Every month you are a subscriber at MCshopTs.com you will also get a chance. I will bring in a third party to help me draw the winner next Thanksgiving so you can get your bike before Christmas. FREE delivery in the lower 48 and FREE delivery to the port in Houston for people out of the country.
https://www.patreon.com/DangerDansTalkShop ^^^^^^^^^^^^ Patreon! Giveaways from Knives By Nick, JP Rodman, and No School Choppers!!!
NittyGrittyChopperCity.com
Harley-Stunts.com
anchorscreenprinting.com
ratrodtober.com
TheStagMag.com
https://www.patreon.com/DangerDansTalkShop
DangerDansTalkShop.com
MCshopTs.com
KnivesMadeByNick.com
MCshopTs.com - Your T-shirt of the month club. OLD SHOPS, NEW ART, and FRESH T's EVERY MONTH!!!! Only $23 a month, sign up at MCshopTs.com - don't miss another month!!!
SUPPORT EVERY LOCAL MOTORCYCLE SHOP
Go to DangerDansTalkShop.com and become a Patreon supporter for your chance to win next month. You could win a knife by Knives By Nick or a custom painted tank by JP Rodman!!!
DangerDansTalkShop.com
MCshopTs.com
timokeefe.bigcartel.com
ChemicalCandyCustoms.com
DCChoppers.com
ShowClassMag.com

St. Paul's Church, East Ham

Bible Sunday

The 80% with Esther O'Moore Donohoe

EPISODE 50 AND FEELIN' NIFTY! I couldn't have timed this week's episode better if I tried, and I didn't. Later this evening, the final of Ireland's premier Lovely Girl competition, The Rose of Tralee, takes place, and the 2018 winner will be crowned with a load of Newbridge Silver forks and carriage clocks on her curly blow-dried head. If you're not Irish and have no idea what the Rose of Tralee competition is, this clip from Father Ted will give you an idea: https://youtu.be/89RwsGe-fSw You may also not know, then, that Daithi O'Se has been its presenter for almost ten years (a decade of the Rose-ary, if you will - a bit of word play there from your hero a.k.a. me) and he was born to do it. But what else can I say about Daithi O'Se? Firstly, his mum calls him David. Secondly, he is a VNM (very nice man) and was a pleasure to talk to. He told me that he planned on being a teacher but life happened and he went with the flow. Instead he has gone on to have an almost 20-year career in television, and am I micro jealous? You're God darn right I am. We talked over frothy coffees in town a few weeks ago, where he told me about his start on TG4 with Sile Seoige, his family, and how he doesn't let the wormhole of social media and opinion affect him. As long as he's got Rita and his son Micheal, nothing else matters. DAITHI O'SE FOR PRESIDENT! But only once Miggledy gets a second term. Then VIVA DO'S, A VNM! Until next Tuesday my cuties... Peace and love, peace and love, EO'MD. Now, if you have enjoyed this podcast you can repay me by giving me your email address for my newsletter/Esther-zine. Sign up here: http://estheromd.com/newsletter/ I WRITE FUNNY THINGS LIKE ABOUT GETTING MY PASSPORT RENEWED. SO GAS. Email: 80percentpodcast@gmail.com I REPLY PROMPTLY. Treat yourself and follow me on Twitter: twitter.com/estheromd?lang=en I'M SO FUNNY! Instagram: www.instagram.com/80percentpodcast/ GET YOUR PICS OUT FOR THE LADS. Support this show: http://supporter.acast.com/the-80-with-esther-omoore-donohoe. See acast.com/privacy for privacy and opt-out information.

Business Owner's Freedom Formula | Actionable Advice for Small Business Owners
57: The journey of a serial entrepreneur from the ninth grade to today with QuHarrison Terry

Business Owner's Freedom Formula | Actionable Advice for Small Business Owners

Play Episode Listen Later Oct 24, 2017 46:19


QuHarrison Terry is a serial entrepreneur and self-starter. He is the co-founder and president of VNM, USA, the co-founder and CEO of 23VIVI, an online digital marketplace, and is known for his marketing work at EatStreet. He is a frequent writer on LinkedIn and was named one of LinkedIn's Top Voices in Technology. He is currently the marketing director of Redox. With the adoption of electronic records, healthcare has been digitized. Redox was started to eradicate the technical barriers to data access and usher forth the future of technology-enabled healthcare. In my conversation with Qu, we go all the way back to when he was in 9th grade (only about 8 years ago!). At the ripe old age of 21, Qu has done more and accomplished more in the business world than most people have in their lifetime. He started his first business in 9th grade, V-Neck Mafia, where he designed and sold v-neck shirts. Taking that experience, and everything he learned, he went on to start several additional businesses throughout college. Now, he simply works 18-hour days, splitting his time between Redox and still running VNM.