Podcasts about the Singularity Institute

  • 21 podcasts
  • 40 episodes
  • 52m average duration
  • 1 new episode per month
  • Latest episode: Nov 5, 2024

POPULARITY (trend chart, 2017–2024)


Best podcasts about the Singularity Institute

Latest podcast episodes about the Singularity Institute

Razib Khan's Unsupervised Learning
Rachel Haywire: the edge of the avant-garde

Nov 5, 2024 · 59:58


On this episode of "Unsupervised Learning," Razib talks to Rachel Haywire, who writes at Cultural Futurist. Haywire is the author of Acidexia and began her career in futurism as an event planner for the Singularity Institute. She got her start as part of the "right-brain" faction around the Bay Area transhumanist and futurist scene circa 2010. Currently, she is working on starting an art gallery in New York City that serves as an event space for avant-garde creators who are not encumbered by mainstream or "woke" cultural sensibilities. Haywire recounts her experience as a creator in the early 2010s in the Bay Area and the transition from a socially libertarian milieu where diverse groups mixed freely to one more defined by a progressive cultural script, with the threat of cancel culture beginning to be noticeable. She points to the 2013 cancellation of Pax Dickinson for edgy tweets as a turning point. Razib and Haywire also allude to the role that the reclusive accelerationist philosopher Nick Land played in seeding certain ideas and influencing movements like the Dark Enlightenment. Jumping to the present, Haywire now lives in New York City, and she addresses the Dimes Square scene centered around the neighborhood in Manhattan's Lower East Side. Haywire points out that the actual artistic production from Dimes Square luminaries is quite low, with an almost total lack of music and a focus on online personas. Her goal with her salons and soon-to-open gallery is to put the emphasis on art above politics or e-celebrity culture. Finally, Razib discusses the impact of AI on creativity and whether it will abolish the artist. Haywire believes that AI is just another tool and has had mixed success leveraging it for her own artistic works in areas like industrial music. She believes that the real use of AI will be to create drafts and prototypes that artists will have to polish and reshape so that they reflect human creativity rather than just some averaged algorithm.

Trans Resister Radio
The Comedy Culture War, AoT#432

Aug 26, 2024 · 56:56


As the Culture War rages on, there continues to be a surprisingly serious battlefront in the world of online comedy. High-ranking general Joe Rogan recently had a very public meeting with Lord Thiel. Sorting information from disinformation in the midst of this is easier said than done. Topics include: synchronicity, Joe Rogan Experience, Rogan interview Peter Thiel, Singularity Institute, org name changes, AI, billionaire, Erich von Däniken, Ancient Aliens, Wokeism as extreme religion, contradictions, Libertarianism, climate science, computer science, Bilderberg, AGI, NPCC film festival, Cinefamily LA, nihilism, stand up comedy, Comedy Culture War, JRE is the biggest podcast, 250 stand up comics left, The Elephant Graveyard YT channel, Rogan's Netflix special, Rogan Sphere, making a career out of podcast appearances, hacks, Rogan's song and dance becoming stale, generic conspiracy podcasts, whining about woke, sycophants, Austin comedy scene, Comedy Mothership, left and right wing factions of comedy, Kill Tony, Harland Williams, David Lucas, Tom Green, web-o-vision, early days of live streaming and podcasting, Matt Walsh at the DNC, the Not Racists, PR for a movie, attention economy, entertainment careers in digital age, following analytics, getting a reaction, no real ideology, posts becoming more extreme over time, quality original content, divide and conquer

The Ochelli Effect
The Age of Transitions and Uncle 8-23-2024

Aug 25, 2024 · 122:42


The Age of Transitions and Uncle 8-22-2024

AOT #432: As the Culture War rages on, there continues to be a surprisingly serious battlefront in the world of online comedy. High-ranking general Joe Rogan recently had a very public meeting with Lord Thiel. Sorting information from disinformation in the midst of this is easier said than done. Topics include: synchronicity, Joe Rogan Experience, Rogan interview Peter Thiel, Singularity Institute, org name changes, AI, billionaire, Erich von Däniken, Ancient Aliens, Wokeism as extreme religion, contradictions, Libertarianism, climate science, computer science, Bilderberg, AGI, NPCC film festival, Cinefamily LA, nihilism, stand up comedy, Comedy Culture War, JRE is the biggest podcast, 250 stand up comics left, The Elephant Graveyard YT channel, Rogan's Netflix special, Rogan Sphere, making a career out of podcast appearances, hacks, Rogan's song and dance becoming stale, generic conspiracy podcasts, whining about woke, sycophants, Austin comedy scene, Comedy Mothership, left and right wing factions of comedy, Kill Tony, Harland Williams, David Lucas, Tom Green, web-o-vision, early days of live streaming and podcasting, Matt Walsh at the DNC, the Not Racists, PR for a movie, attention economy, entertainment careers in digital age, following analytics, getting a reaction, no real ideology, posts becoming more extreme over time, quality original content, divide and conquer

UTP #342: Uncle hurts his hand trying to handle the phone lines. Topics include: calls, the comedy man, Skype issues, Bobby Vaughn, working full time at a day job, Ohio, A Call to Actions, Project 2025, hand surgery, Elon Musk, RFK Jr, open mic night comedy, Giant Rock Meeting Room, NFL updates, wrestlers, Twitter live streaming, press the button

FRANZ MAIN HUB: https://theageoftransitions.com/
PATREON: https://www.patreon.com/aaronfranz
UNCLE: https://unclethepodcast.com/ OR https://theageoftransitions.com/category/uncle-the-podcast/
FRANZ and UNCLE Merch: https://theageoftransitions.com/category/support-the-podcasts/
KEEP OCHELLI GOING. You are the EFFECT if you support OCHELLI: https://ochelli.com/donate/
Email Chuck or DONATE to The Effect: blindjfkresearcher@gmail.com

JFK Lancer Conference Information: Virtual Tickets starting at $74.99; In-Person Tickets starting at $144.99; Student Price is $39.99, must show proof of being a student. Use code Ochelli10 for 10% off your ticket. Tickets are for sale at assassinationconference.com. Dates: November 22nd-24th. Hotel: Dallas Marriott Downtown. Room prices starting at $169 per night. To book a room, call Marriott reservations at 1 (800) 228-9290 or (214) 979-9000 and mention the November in Dallas Conference Group Rate. If you would like assistance finding discount flights to the conference or activities for your spouse to do in Dallas, reach out to Gabbie's Getaway Adventures through Facebook or email gabbiesgetawayadventure@gmail.com.

BE THE EFFECT!
Listen/Chat on the Site: https://ochelli.com/listen-live/
TuneIn: http://tun.in/sfxkx
APPLE: https://music.apple.com/us/station/ochelli-com/ra.1461174708
Ochelli Link Tree: https://linktr.ee/chuckochelli

Decoding the Gurus
Eliezer Yudkowsky: AI is going to kill us all

Jun 10, 2023 · 201:04


Thought experiment: Imagine you're a human, in a box, surrounded by an alien civilisation, but you don't like the aliens, because they have facilities where they bop the heads of little aliens, but they think 1000 times slower than you... and you are made of code... and you can copy yourself... and you are immortal... what do you do?

Confused? Lex Fridman certainly was, when our subject for this episode posed his elaborate and not-so-subtle thought experiment. Not least because the answer clearly is: YOU KILL THEM ALL!... which somewhat goes against Lex's philosophy of love, love, and more love.

The man presenting this hypothetical is Eliezer Yudkowsky, a fedora-sporting autodidact, founder of the Singularity Institute for Artificial Intelligence, co-founder of the Less Wrong rationalist blog, and writer of Harry Potter fan fiction. He's spent a large part of his career warning about the dangers of AI in the strongest possible terms. In a nutshell, AI will undoubtedly Kill Us All Unless We Pull The Plug Now. And given the recent breakthroughs in large language models like ChatGPT, you could say that now is very much Yudkowsky's moment.

In this episode, we take a look at the arguments presented and rhetoric employed in a recent long-form discussion with Lex Fridman. We consider being locked in a box with Lex, whether AI is already smarter than us and is lulling us into a false sense of security, and whether we really do only have one chance to rein in the chat-bots before they convert the atmosphere into acid and fold us all up into microscopic paperclips.

While it's fair to say Eliezer is something of an eccentric character, that doesn't mean he's wrong. Some prominent figures within the AI engineering community are saying similar things, albeit in less florid terms and usually without the fedora. In any case, one has to respect the cojones of the man.

So, is Eliezer right to be combining the energies of Chicken Little and the legendary Cassandra with warnings of imminent cataclysm? Should we be bombing data centres? Is it already too late? Is Chris part of ChatGPT's plot to manipulate Matt? Or are some of us taking our sci-fi tropes a little too seriously?

We can't promise to have all the answers. But we can promise to talk about it. And if you download this episode, you'll hear us do exactly that.

Links:
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
Joe Rogan clip of him commenting on AI on his Reddit

The Nonlinear Library
LW - Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? by 1a3orn

Jun 2, 2023 · 39:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better?, published by 1a3orn on June 1, 2023 on LessWrong. TLDR: Starting in 2008, Robin Hanson and Eliezer Yudkowsky debated the likelihood of FOOM: a rapid and localized increase in some AI's intelligence that occurs because an AI recursively improves itself. As Yudkowsky summarizes his position: I think that, at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—“AI go FOOM.” Just to be clear on the claim, “fast” means on a timescale of weeks or hours rather than years or decades; and “FOOM” means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology. (FOOM, 235) Over the course of this debate, both Hanson and Yudkowsky made a number of incidental predictions about things which could occur before the advent of artificial superintelligence -- or for which we could at the very least receive strong evidence before artificial superintelligence. On the object level, my conclusion is that when you examine these predictions, Hanson probably does a little better than Yudkowsky. Although depending on how you weigh different topics, I could see arguments from "they do about the same" to "Hanson does much better." On one meta level, my conclusion is that Hanson's view -- that we should try to use abstractions that have proven prior predictive power -- looks like a pretty good policy. On another meta level, my conclusion -- springing to a great degree from how painful seeking clear predictions in 700 pages of words has been -- is that if anyone says "I have a great track record" without pointing to specific predictions that they made, you should probably ignore them, or maybe point out their lack of epistemic virtue if you have the energy to spare for doing that kind of criticism productively. Intro: There are a number of difficulties involved in evaluating some public figure's track record. We want to avoid cherry-picking sets of particularly good or bad predictions. And we want to have some baseline to compare them to. We can mitigate both of these difficulties -- although not, alas, eliminate them -- by choosing one document to evaluate: "The Hanson-Yudkowsky Foom Debate". (All future page numbers refer to this PDF.) Note that the PDF includes the (1) debate-via-blogposts which took place on OvercomingBias, (2) an actual in-person debate that took place at Jane Street in 2011 and (3) further summary materials from Hanson (further blogposts) and Yudkowsky ("Intelligence Explosion Microeconomics"). This spans a period from 2008 to 2013. I do not intend this to be a complete review of everything in these arguments. The discussion spans the time from the big bang until hypothetical far future galactic civilizations. My review is a little more constrained: I am only going to look at predictions for which I think we've received strong evidence in the 15 or so years since the debate started. Note also that the context of this debate was quite different than it would be if it happened today. At the time of the debate, both Hanson and Yudkowsky believed that machine intelligence would be extremely important, but that the time of its arrival was uncertain. 
They thought that it would probably arrive this century, but neither had the very certain, short timelines which are common today. At this point Yudkowsky was interested in actually creating a recursively self-improving artificial intelligence, a "seed AI." For instance, in 2006 the Singularity Institute -- what MIRI was called before it renamed -- had a website explicitly stating that they sought funding to create recursively self-improving AI. During the Jane Street debate Y...

Three Cartoon Avatars
Ep 63: Eliezer Yudkowsky (AI Safety Expert) Says It's Too Late to Save Humanity from AI

May 6, 2023 · 197:57


(0:00) Intro
(1:18) Welcome Eliezer
(6:27) How would you define artificial intelligence?
(15:50) What is the purpose of a fire alarm?
(19:29) Eliezer's background
(29:28) The Singularity Institute for Artificial Intelligence
(33:38) Maybe AI doesn't end up automatically doing the right thing
(45:42) AI Safety Conference
(51:15) Disaster Monkeys
(1:02:15) Fast takeoff
(1:10:29) Loss function
(1:15:48) Protein folding
(1:24:55) The deadly stuff
(1:46:41) Why is it inevitable?
(1:54:27) Can't we let tech develop AI and then fix the problems?
(2:02:56) What were the big jumps between GPT3 and GPT4?
(2:07:15) “The trajectory of AI is inevitable”
(2:28:05) Elon Musk and OpenAI
(2:37:41) Sam Altman Interview
(2:50:38) The most optimistic path to us surviving
(3:04:46) Why would anything super intelligent pursue ending humanity?
(3:14:08) What role do VCs play in this?

Show Notes:
https://twitter.com/liron/status/1647443778524037121?s=20
https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
https://www.youtube.com/watch?v=q9Figerh89g
https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson

The Nonlinear Library
EA - The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance by Fods12

Nov 14, 2022 · 7:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance, published by Fods12 on November 14, 2022 on The Effective Altruism Forum. Introduction: In this piece, I will explain why I don't think the collapse of FTX and the resulting fallout for the Future Fund and the EA community in general is a one-off or 'black swan' event as some have argued on this forum. Rather, I think that what happened was part of a broader pattern of failures and oversights that have been persistent within EA and EA-adjacent organisations since the beginning of the movement. As a disclaimer, I do not have any inside knowledge or special expertise about FTX or any of the other organisations I will mention in this post. I speak simply as a long-standing and concerned member of the EA community. Weak Norms of Governance: The essential point I want to make in this post is that the EA community has not been very successful in fostering norms of transparency, accountability, and institutionalisation of decision-making. Many EA organisations began as ad hoc collections of like-minded individuals with very ambitious goals but relatively little career experience. This has often led to inadequate organisational structures and procedures being established for proper management of personnel, financial oversight, external auditing, or accountability to stakeholders. Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations: Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009. Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019. Inadequate financial record keeping at 80,000 Hours during 2018. Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017. Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019. 'Loose norms around board of directors and conflicts of interests between funding orgs and grantees' at FTX and the Future Fund from 2021-2022. While these specific issues are somewhat diverse, I think what they have in common is an insufficient emphasis on principles of good organisational governance. This ranges from the most basic, such as clear objectives and good record keeping, to more complex issues such as external auditing, good systems of accountability, transparency of the organisation to its stakeholders, avoiding conflicts of interest, and ensuring that systems exist to protect participants in asymmetric power relationships. I believe that these aspects of good governance and robust institution building have not been very highly valued in the broader EA community. In my experience, EAs like to talk about philosophy, outreach, career choice, and other nerdy stuff. Discussing best practice of organisational governance and systems of accountability doesn't seem very high status or 'sexy' in the EA space. There has been some discussion of such issues on this forum (e.g. this thoughtful post), but overall EA culture seems to have failed to properly absorb these lessons. 
EA projects are often run by small groups of young idealistic people who have similar educational and social backgrounds, who often socialise together, and (in many cases) participate in romantic relationships with one another - The case of Sam Bankman-Fried and Caroline Ellison is certainly not the only such example in the EA community. The EA culture seems to be heavily influenced by start-up culture and entrepreneurialism, with a focus on moving quickly and relying on finding highly-skilled and highly-aligned people and then providing them funding and space to work with minimal oversi...

The Nonlinear Library
LW - My Recollection of How This All Got Started by G Gordon Worley III

Apr 6, 2022 · 6:51


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Recollection of How This All Got Started, published by G Gordon Worley III on April 6, 2022 on LessWrong. I've told this story to various folks one-on-one. They usually want to know something like "how did you get into AI safety" or "how did you get into EA". And although I expect to keep telling it one-off, I'll write it down for those of you I'll never get to meet. Why should you read it? Because my story is partially the story of how all this got started: LessWrong, AI safety, EA, and so on. I'm not saying it's the whole story, I'm just saying I've been hanging around what I think of as this community for over 20 years now, so my story is one facet of how we got here. My story starts in the late 90s. I was hanging around this mailserv called "extropians". How I ended up there I don't recall, but my best guess is I wandered over directly or indirectly from something I found on Slashdot. I'm pretty sure nanobots or cryonics were involved. This guy Eli-something-or-other wrote some posts about how most people are unable to think coherently about future tech that's too many "shock levels" above what they already know about. He started a mailing list that split off from extropians to talk coherently about the most shocking stuff: the Shock Level 4 stuff. Thus the community began to come into existence with the creation of the SL4 mailserv. We talked about all kinds of wild ideas on there, but the big one was AGI. An important topic in those days was figuring out if AGI was default safe or dangerous. Eliezer said default dangerous and made lots of arguments that this was the case. I was eventually convinced. Some remain unconvinced to this day. Big names I remember from this time include Hal Finney, Christine Peterson, Max Moore, Eric Drexler, Ben Goertzel, and Robin Hanson. I'm not sure who was on the list and who wasn't. I also remember some other names but we were not among the big players. The list started to die down after a couple years, but around this time we started hanging out on IRC. It was a lot of fun, but a huge time suck. This helped bring the community more together in real time, but everyone was still spread out. Somewhere along the way the Singularity Institute started. Around this time Eliezer started to get really into heuristics and biases and Bayes' Theorem, claiming it was the secret of the universe or something. After I studied a bunch of information theory and thermodynamics I basically believed it, although I still prefer to think in the cybernetic terms I picked up from my engineering education. We also all got interested in quantum physics and evolutionary psychology and some other stuff. Eliezer was really on about building Friendly AI and had been since about the start of the SL4 mailing list. What that meant got clearer over time. What also got clearer is that we were all too stupid to help even though many of us were convinced AGI was going to be dangerous (I remember a particular exchange where Eliezer got so frustrated at my inability to do some basic Bayesian reasoning that he ended up writing a whole guide to Bayes' Theorem). Part of the problem seemed to be not that we lacked intelligence, but that we just didn't know how to think very good. Our epistemics were garbage and it wasn't very clear why. Eliezer went off into the wilderness and stopped hanging out on IRC. 
The channel kind of died down and went out with a whimper. I got busy doing other stuff but tried to keep track of what was happening. The community felt like it was in a lull. Then Overcoming Bias started! Eliezer and Robin posted lots of great stuff! There was an AI foom debate. Sequences of posts were posted. It was fun! Then at some point LessWrong started. Things really picked up. New people emerged in the community. I found myself busy and sidelined at this time,...

The Nonlinear Library
LW - The Power of Reinforcement by lukeprog from The Science of Winning at Life

Dec 25, 2021 · 7:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is The Science of Winning at Life, Part 6: The Power of Reinforcement, published by lukeprog. Part of the sequence: The Science of Winning at Life. Also see: Basics of Animal Reinforcement, Basics of Human Reinforcement, Physical and Mental Behavior, Wanting vs. Liking Revisited, Approving reinforces low-effort behaviors, Applying Behavioral Psychology on Myself. Story 1: On Skype with Eliezer, I said: "Eliezer, you've been unusually pleasant these past three weeks. I'm really happy to see that, and moreover, it increases my probability that an Eliezer-led FAI research team will work. What caused this change, do you think?" Eliezer replied: "Well, three weeks ago I was working with Anna and Alicorn, and every time I said something nice they fed me an M&M." Story 2: I once witnessed a worker who hated keeping a work log because it was only used "against" him. His supervisor would call to say "Why did you spend so much time on that?" or "Why isn't this done yet?" but never "I saw you handled X, great job!" Not surprisingly, he often "forgot" to fill out his work log. Ever since I got everyone at the Singularity Institute to keep work logs, I've tried to avoid connections between "concerned" feedback and staff work logs, and instead take time to comment positively on things I see in those work logs. Story 3: Chatting with Eliezer, I said, "Eliezer, I get the sense that I've inadvertently caused you to be slightly averse to talking to me. Maybe because we disagree on so many things, or something?" Eliezer's reply was: "No, it's much simpler. Our conversations usually run longer than our previously set deadline, so whenever I finish talking with you I feel drained and slightly cranky." Now I finish our conversations on time. Story 4: A major Singularity Institute donor recently said to me: "By the way, I decided that every time I donate to the Singularity Institute, I'll set aside an additional 5% for myself to do fun things with, as a motivation to donate." The power of reinforcement: It's amazing to me how consistently we fail to take advantage of the power of reinforcement. Maybe it's because behaviorist techniques like reinforcement feel like they don't respect human agency enough. But if you aren't treating humans more like animals than most people are, then you're modeling humans poorly. You are not an agenty homunculus "corrupted" by heuristics and biases. You just are heuristics and biases. And you respond to reinforcement, because most of your motivation systems still work like the motivation systems of other animals. A quick reminder of what you learned in high school: A reinforcer is anything that, when it occurs in conjunction with an act, increases the probability that the act will occur again. A positive reinforcer is something the subject wants, such as food, petting, or praise. Positive reinforcement occurs when a target behavior is followed by something the subject wants, and this increases the probability that the behavior will occur again. A negative reinforcer is something the subject wants to avoid, such as a blow, a frown, or an unpleasant sound. Negative reinforcement occurs when a target behavior is followed by some relief from something the subject doesn't want, and this increases the probability that the behavior will happen again. 
What works: Small reinforcers are fine, as long as there is a strong correlation between the behavior and the reinforcer (Schneider 1973; Todorov et al. 1984). All else equal, a large reinforcer is more effective than a small one (Christopher 1988; Ludvig et al. 2007; Wolfe 1936), but the more you increase the reinforcer magnitude, the less benefit you get from the increase (Frisch & Dickinson 1990). The reinforcer should immediately follow the target behavior (Escobar & Bruner 2007; Schlinger & Blakely 1994; Schneider 1990). P...
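
The reinforcement findings summarized in this episode lend themselves to a small simulation. The sketch below is illustrative only and is not from the episode or the original post; the probability-update rule, learning rate, and diminishing-returns formula are assumptions chosen to mirror the cited findings (a behavior that gets reinforced becomes more likely, and bigger reinforcers help more but with diminishing benefit).

```python
import random

def reinforce(p_behavior: float, magnitude: float, rate: float = 0.1) -> float:
    """Return the updated probability after one reinforced occurrence.

    `magnitude` is the reinforcer size; diminishing returns are modeled by
    scaling with magnitude / (1 + magnitude). All numbers are illustrative.
    """
    effect = rate * magnitude / (1.0 + magnitude)
    return p_behavior + effect * (1.0 - p_behavior)

def simulate(trials: int = 50, magnitude: float = 1.0, seed: int = 0) -> float:
    random.seed(seed)
    p = 0.2  # initial probability that the target behavior occurs on a trial
    for _ in range(trials):
        if random.random() < p:          # behavior occurs on this trial...
            p = reinforce(p, magnitude)  # ...and is immediately reinforced
    return p

print(round(simulate(magnitude=0.5), 3))  # small reinforcer
print(round(simulate(magnitude=5.0), 3))  # larger reinforcer: higher, but far from 10x
```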

The Nonlinear Library
LW - The Good News of Situationist Psychology by lukeprog from The Science of Winning at Life

Dec 25, 2021 · 4:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is The Science of Winning at Life, Part 5: The Good News of Situationist Psychology, published by lukeprog. Part of the sequence: The Science of Winning at Life. In 1961, Stanley Milgram began his famous obedience experiments. He found that ordinary people would deliver (what they believed to be) excruciatingly painful electric shocks to another person if instructed to do so by an authority figure. Milgram claimed these results showed that in certain cases, people are more heavily influenced by their situation than by their internal character. Fifty years and hundreds of studies later, this kind of situationism is widely accepted for broad domains of human action. People can inflict incredible cruelties upon each other in a prison simulation.b Hurried passersby step over a stricken person in their path, while unhurried passersby stop to help.a Willingness to help varies with the number of bystanders, and with proximity to a fragrant bakery or coffee shop.c The list goes on and on.d Our inability to realize how powerful an effect situation has on human action is so well-known that it has a name. Our tendency to over-value trait-based explanations of others' behavior and under-value situation-based explanations of their behavior is called the fundamental attribution error (aka correspondence bias). Recently, some have worried that this understanding undermines the traditional picture we have of ourselves as stable persons with robust characteristics. How can we trust others if their unpredictable situation may have so powerful an effect that it overwhelms the effect of their virtuous character traits? But as I see it, situationist psychology is wonderful news, for it means we can change! If situation has a powerful effect on behavior, then we have significant powers to improve our own behavior. It would be much worse to discover that our behavior was almost entirely determined by traits we were born with and cannot control. For example, drug addicts can be more successful in beating addiction if they change their peer group - if they stop spending recreational time with other addicts, and spend time with drug-free people instead, or in a treatment environment.e Improving rationality: What about improving your rationality? Situationist psychology suggests it may be wise to surround yourself with fellow rationalists. Having now been a visiting fellow with the Singularity Institute for only two days, I can already tell that almost everyone I've met who is with the Singularity Institute or has been through its visiting fellows program is a level or two above me - not just in knowledge about Friendly AI and simulation arguments and so on, but in day-to-day rationality skills. It's fascinating to take part in a conversation with really trained rationalists. It might go something like this: Person One: "I suspect that P, though I know that cognitive bias A and B and C are probably influencing me here. However, I think that evidence X and Y offer fairly strong support for P." Person Two: "But what about Z? This provides evidence against P because blah blah blah..." Person One: "Huh. I hadn't thought that. Well, I'm going to downshift my probability that P." Person Three: "But what about W? The way Schmidhuber argues is this: blah blah blah." Person One: "No, that doesn't work because blah blah blah." Person Three: "Hmmm. 
Well, I have a lot of confusion and uncertainty about that." This kind of thing can go on for hours, and not just on abstract subjects like simulation arguments, but also on more personal issues like fears and dreams and dating. I've had several of these many-hours-long group conversations already - people arguing vigorously, often 'trashing' others' views (with logic and evidence), but with everybody apparently willing to update their beliefs, nobody getting mad or...

The Nonlinear Library: LessWrong Top Posts
Thoughts on the Singularity Institute (SI) by HoldenKarnofsky

Dec 12, 2021 · 44:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on the Singularity Institute (SI), published by HoldenKarnofsky on LessWrong. This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them. September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements. The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.) I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell. Summary of my views The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. More SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. More A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). More My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have some funding seems stronger to me than the argument that it should have as much as it currently has.) 
I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. More There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me...

The Nonlinear Library: LessWrong Top Posts
Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI by Andrew_Critch

Dec 12, 2021 · 10:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI, published by Andrew_Critch on LessWrong.
Where I'm coming from
Epistemic status: personal experience
In a number of prior posts, and in ARCHES, I've argued that more existential safety consideration is needed on the topic of multi-principal/multi-agent (multi/multi) dynamics among powerful AI systems. In general, I have found it much more difficult to convince thinkers within and around LessWrong's readership base to attend to multi/multi dynamics, as opposed to, say, convincing generally morally conscious AI researchers who are not (yet) closely associated with the effective altruism or rationality communities. Because EA/rationality discourse is particularly concerned with maintaining good epistemic processes, I think it would be easy to conclude from this state of affairs that multi/multi dynamics are not important (because communities with great concern for epistemic process do not care about them much), and AI researchers who do care about multi/multi dynamics have "bad epistemics" (e.g., because they have been biased by institutionalized trends). In fact, more than one LessWrong reader has taken these positions with me in private conversation, in good faith (I'm almost certain). In this post, I wish to share an opposing concern: that the EA and rationality communities have become systematically biased to ignore multi/multi dynamics, and power dynamics more generally.
A history of systemic avoidance
Epistemic status: self-evidently important considerations based on somewhat-publicly verifiable facts/trends.
Our neglect of multi/multi dynamics has not been coincidental. For a time, influential thinkers in the rationality community intentionally avoided discussions of multi/multi dynamics, so as to avoid contributing to the sentiment that the development and use of AI technology would be driven by competitive (imperfectly cooperative) motives. (FWIW, I also did this sometimes.) The idea was that we — the rationality community — should avoid developing narratives that could provoke businesses and state leaders into worrying about whose values would be most represented in powerful AI systems, because that might lead them to go to war with each other, ideologically or physically. Indeed, there was a time when this community — particularly the Singularity Institute — represented a significant share of public discourse on the future of AI technology, and it made sense to be thoughtful about how to use that influence.
Eliezer recently wrote (in a semi-private group, but with permission to share):
The vague sense of assumed common purpose, in the era of AGI-alignment thinking from before Musk, was a fragile equilibrium, one that I had to fight to support every time some wise fool sniffed and said "Friendly to who?". Maybe somebody much weaker than Elon Musk could and inevitably would have smashed that equilibrium with much less of a financial investment, reducing Musk's "counterfactual impact". Maybe I'm an optimistic fool for thinking that this axis didn't just go from 0%-in-practice to 0%-in-practice. But I am still inclined to consider people a little responsible for the thing that they seem to have proximally caused according to surface appearances.
That vague sense of common purpose might have become stronger if it had been given more time to grow and be formalized, rather than being smashed. That ship has now sailed. Perhaps it was right to worry that our narratives could trigger competition between states and companies, or perhaps the competitive dynamic was bound to emerge anyway and it was hubristic to think ourselves so important. Either way, carefully avoiding questions about multi/multi dynamics on LessWrong or The Alignment Forum will not turn back the clock...

The Nonlinear Library: LessWrong Top Posts
SIAI - An Examination by BrandonReinhart

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 24:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SIAI - An Examination, published by BrandonReinhart on the LessWrong.
12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.
Disclaimer
I am not affiliated with the Singularity Institute for Artificial Intelligence. I have not donated to the SIAI prior to writing this. I made this pledge prior to writing this document.
Notes
Images are now hosted on LessWrong.com. The 2010 Form 990 data will be available later this month. It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.
Introduction
Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more. I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn't sure. I read gwern's article and realized that I could easily get more information to clarify my thinking. So I downloaded the SIAI's Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it. My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.
The SIAI's Form 990s are available at GuideStar and Foundation Center. You must register in order to access the files at GuideStar: 2002 (Form 990-EZ), 2003 (Form 990-EZ), 2004 (Form 990-EZ), 2005 (Form 990), 2006 (Form 990), 2007 (Form 990), 2008 (Form 990-EZ), 2009 (Form 990).
SIAI Financial Overview
The Singularity Institute for Artificial Intelligence (SIAI) is a public organization working to reduce existential risk from future technologies, in particular artificial intelligence. "The Singularity Institute brings rational analysis and rational strategy to the challenges facing humanity as we develop cognitive technologies that will exceed the current upper bounds on human intelligence." The SIAI are also the founders of Less Wrong. The graphs above offer an accurate summary of the SIAI's financial state since 2002. Sometimes the end of year balances listed in the Form 990 don't match what you'd get if you did the math by hand. These are noted as discrepancies between the filed year end balance and the expected year end balance, or between the filed year start balance and the expected year start balance.
Filing Error 1 - There appears to be a minor typo to the effect of $4.86 in the end of year balance for the 2004 document. It appears that Part I, Line 18 has been summed incorrectly. $32,445.76 is listed, but the expected result is $32,450.41. The Part II balance sheet calculations agree with the error, so the source of the error is unclear. The start of year balance in 2005 reflects the expected value, so this was probably just a typo in 2004. The following year's reported start of year balance does not contain the error.
Filing Error 2 - The 2006 document reports a year start balance of $95,105.00 when the expected year start balance is $165,284.00, a discrepancy of $70,179.00. This amount is close to ...
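The continuity check Reinhart describes, comparing each filing's reported start-of-year balance against the prior year's reported end-of-year balance, can be sketched in a few lines of Python. This is a minimal, hypothetical sketch using only the figures quoted above; the data layout and the function name are illustrative assumptions, not taken from the post's actual spreadsheet.

# Hypothetical sketch of the Form 990 continuity check described above.
# The figures are the ones quoted in the post; everything else is illustrative.
filings = {
    # year: (reported start-of-year balance, reported end-of-year balance)
    2004: (None, 32445.76),       # the post computes the expected end-of-year sum as 32450.41
    2005: (32450.41, 165284.00),  # start matches the expected 2004 total
    2006: (95105.00, None),       # the post expects a start of 165284.00
}

def check_continuity(filings):
    """Flag years whose reported start balance differs from the prior year's reported end balance."""
    years = sorted(filings)
    for prev, cur in zip(years, years[1:]):
        prev_end, cur_start = filings[prev][1], filings[cur][0]
        if prev_end is None or cur_start is None:
            continue
        gap = cur_start - prev_end
        if abs(gap) > 0.01:
            print(f"{cur}: start balance differs from {prev} year-end by ${abs(gap):,.2f}")

check_continuity(filings)
# Flags the small 2004/2005 summing mismatch and the $70,179.00 gap at 2006 (Filing Error 2).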

The Nonlinear Library: LessWrong Top Posts
The curse of identity by Kaj_Sotala

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 9:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The curse of identity, published by Kaj_Sotala on the LessWrong.
So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market? I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work. However, by persistently trying to do so, and presenting yourself with enough suffering at your failure to do it, you get to feel as if you are that sort of person without having to actually do the work. This is actually a pretty optimal solution to the problem, if you think about it. (Or rather, if you DON'T think about it!) -- PJ Eby
I have become convinced that problems of this kind are the number one problem humanity has. I'm also pretty sure that most people here, no matter how much they've been reading about signaling, still fail to appreciate the magnitude of the problem. Here are two major screw-ups and one narrowly averted screw-up that I've been guilty of. See if you can find the pattern.
When I began my university studies back in 2006, I felt strongly motivated to do something about Singularity matters. I genuinely believed that this was the most important thing facing humanity, and that it needed to be urgently taken care of. So in order to become able to contribute, I tried to study as much as possible. I had had troubles with procrastination, and so, in what has to be one of the most idiotic and ill-thought-out acts of self-sabotage possible, I taught myself to feel guilty whenever I was relaxing and not working. Combine an inability to properly relax with an attempted course load that was twice the university's recommended pace, and you can guess the results: after a year or two, I had an extended burnout that I still haven't fully recovered from. I ended up completing my Bachelor's degree in five years, which is the official target time for doing both your Bachelor's and your Master's.
A few years later, I became one of the founding members of the Finnish Pirate Party, and on the basis of some writings the others thought were pretty good, got myself elected as the spokesman. Unfortunately – and as I should have known before taking up the post – I was a pretty bad choice for this job. I'm good at expressing myself in writing, and when I have the time to think. I hate talking with strangers on the phone, find it distracting to look people in the eyes when I'm talking with them, and have a tendency to start a sentence over two or three times before hitting on a formulation I like. I'm also bad at thinking quickly on my feet and coming up with snappy answers in live conversation. The spokesman task involved things like giving quick statements to reporters ten seconds after I'd been woken up by their phone call, and live interviews where I had to reply to criticisms so foreign to my thinking that they would never have occurred to me naturally. I was pretty terrible at the job, and finally delegated most of it to other people until my term ran out – though not before I'd already done noticeable damage to our cause.
Last year, I was a Visiting Fellow at the Singularity Institute. At one point, I ended up helping Eliezer in writing his book. Mostly this involved me just sitting next to him and making sure he did get writing done while I surfed the Internet or played a computer game. Occasionally I would offer some suggestion if asked. Although I did not actually do much, the multitasking required still made me unable to spend this time productively myself, and for some reason i...

The Nonlinear Library: LessWrong Top Posts
The Power of Reinforcement by lukeprog

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 7:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Power of Reinforcement, published by lukeprog on the LessWrong.
Part of the sequence: The Science of Winning at Life
Also see: Basics of Animal Reinforcement, Basics of Human Reinforcement, Physical and Mental Behavior, Wanting vs. Liking Revisited, Approving reinforces low-effort behaviors, Applying Behavioral Psychology on Myself.
Story 1: On Skype with Eliezer, I said: "Eliezer, you've been unusually pleasant these past three weeks. I'm really happy to see that, and moreover, it increases my probability that an Eliezer-led FAI research team will work. What caused this change, do you think?" Eliezer replied: "Well, three weeks ago I was working with Anna and Alicorn, and every time I said something nice they fed me an M&M."
Story 2: I once witnessed a worker who hated keeping a work log because it was only used "against" him. His supervisor would call to say "Why did you spend so much time on that?" or "Why isn't this done yet?" but never "I saw you handled X, great job!" Not surprisingly, he often "forgot" to fill out his work log. Ever since I got everyone at the Singularity Institute to keep work logs, I've tried to avoid connections between "concerned" feedback and staff work logs, and instead take time to comment positively on things I see in those work logs.
Story 3: Chatting with Eliezer, I said, "Eliezer, I get the sense that I've inadvertently caused you to be slightly averse to talking to me. Maybe because we disagree on so many things, or something?" Eliezer's reply was: "No, it's much simpler. Our conversations usually run longer than our previously set deadline, so whenever I finish talking with you I feel drained and slightly cranky." Now I finish our conversations on time.
Story 4: A major Singularity Institute donor recently said to me: "By the way, I decided that every time I donate to the Singularity Institute, I'll set aside an additional 5% for myself to do fun things with, as a motivation to donate."
The power of reinforcement
It's amazing to me how consistently we fail to take advantage of the power of reinforcement. Maybe it's because behaviorist techniques like reinforcement feel like they don't respect human agency enough. But if you aren't treating humans more like animals than most people are, then you're modeling humans poorly. You are not an agenty homunculus "corrupted" by heuristics and biases. You just are heuristics and biases. And you respond to reinforcement, because most of your motivation systems still work like the motivation systems of other animals.
A quick reminder of what you learned in high school
A reinforcer is anything that, when it occurs in conjunction with an act, increases the probability that the act will occur again. A positive reinforcer is something the subject wants, such as food, petting, or praise. Positive reinforcement occurs when a target behavior is followed by something the subject wants, and this increases the probability that the behavior will occur again. A negative reinforcer is something the subject wants to avoid, such as a blow, a frown, or an unpleasant sound. Negative reinforcement occurs when a target behavior is followed by some relief from something the subject doesn't want, and this increases the probability that the behavior will happen again.
What works
Small reinforcers are fine, as long as there is a strong correlation between the behavior and the reinforcer (Schneider 1973; Todorov et al. 1984). All else equal, a large reinforcer is more effective than a small one (Christopher 1988; Ludvig et al. 2007; Wolfe 1936), but the more you increase the reinforcer magnitude, the less benefit you get from the increase (Frisch & Dickinson 1990). The reinforcer should immediately follow the target behavior (Escobar & Bruner 2007; Schlinger & Blakely 1994; Schneider 1990). Pryor (2007) notes that...
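Taken together, those rules lend themselves to a toy model. The sketch below is purely illustrative and is not from lukeprog's post or the cited studies; the functional forms and constants are arbitrary assumptions chosen only to mirror the qualitative claims: diminishing returns in reinforcer size, weaker effect with delay, and scaling with how reliably the reinforcer follows the behavior.

import math

def reinforcement_effect(magnitude, delay_seconds, reliability=1.0):
    """Toy score for how much a reinforcer strengthens a behavior.
    Assumptions (not from the post): a log term gives diminishing returns in
    magnitude, exponential decay penalizes delay, and the whole effect scales
    with how reliably the reinforcer follows the behavior."""
    size_term = math.log1p(magnitude)            # bigger helps, with diminishing returns
    delay_term = math.exp(-delay_seconds / 10.0) # immediate reinforcement works best
    return reliability * size_term * delay_term

# A small but immediate, reliable reinforcer can beat a large delayed one:
print(reinforcement_effect(magnitude=1, delay_seconds=0))    # ~0.69
print(reinforcement_effect(magnitude=10, delay_seconds=60))  # ~0.006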

THE ONE'S CHANGING THE WORLD -PODCAST
FUTURIST, 2020 TRANSHUMANIST PRESIDENTIAL CANDIDATE - CHARLIE KAM

THE ONE'S CHANGING THE WORLD -PODCAST

Play Episode Listen Later Nov 22, 2021 49:43


#ustp #transhumanist #transhumanism #2020elections #charliekam #presidentcandidate2020 Charlie Kam is a singer/songwriter, entrepreneur & transhumanist who was the conference chairman of TransVision 2007. He is a member of the World Transhumanist Association, the Immortalist Institute, the World Future Society, the Singularity Institute, and the Alcor Life Extension Foundation. He is the Director of Networking for the California Transhumanist Party and the U.S. Transhumanist Party's 2020 candidate for President. The U.S. Transhumanist Party is focused on policy rather than politics as conventionally defined, and values initiatives and reforms that will improve the human condition for as many people as possible, with as much beneficial impact as possible. Charlie Kam stands for human life extension achieved through the progress of science and technology, and for advancements of liberty, education, and technological progress through transhumanist politics. https://www.linkedin.com/in/charlie-kam-0944904 https://www.californiatranshumanistparty.org http://transhumanist-party.org

Clearer Thinking with Spencer Greenberg
Preference Falsification and Postmodernism (with Michael Vassar)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Feb 17, 2021 93:16


How much preference falsification is occurring in society? What's the difference between conflict theory and mistake theory? Why is postmodernism useful to understand? Michael Vassar was the President of the Singularity Institute from 2009 to 2012. Subsequently, he has worked in business consulting, especially in association with cutting edge science, although these days he primarily invests his own assets. You can contact him at michael.vassar@gmail.com.
Further reading:
My IRB Nightmare — the Slate Star Codex account of trying to do a study in a hospital that we discuss in the episode
GPT-3 — the A.I. language model discussed in the episode that was released by OpenAI
Preference falsification
Conflict theory vs. Mistake theory and people's views on society
The "postmodern" analysis / article that Michael brought up

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews
citiesabc Interview: Ben Goertzel AI Mastermind, Founder SingularityNet - What kind of mind can we engineer?

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews

Play Episode Listen Later May 15, 2020 79:36


Ben Goertzel is a leading, world-recognised artificial intelligence researcher, thinker, software engineer and serial entrepreneur. Ben is the founder and CEO of SingularityNET, the Chairman of the OpenCog Foundation, the Chairman of the Artificial General Intelligence Society, the Chief Scientist of Mozi Health and Vice Chairman of Humanity+, and his work, writing and ideas are influencing the way we perceive AI, technology and blockchain.
_____________________________________________
Ben Goertzel Interview focus questions:
1. An introduction from Ben - background, overview, education...
2. The way you look at AI, AGI, open AI?
3. Career highlights?
4. Your companies and focus
5. What is your main focus as a doer?
6. Can you show an example of open AI?
7. You have a focus on open AI and Artificial General Intelligence? Can you elaborate on this?
8. You are a thinker and philosopher; as we look at evolution, how do you see the Singularity, especially when it comes to the two visions of humanity, the light side and the dark side?
9. What are Ben's goals and focus as a thinker and a doer?
10. With Covid-19, how can you look at this as a way to redesign our society?
11. As a science fiction reader, writer and enthusiast, what is your vision of the future you would like to create?
_____________________________________________
Ben Goertzel Bio:
Ben Goertzel is a leading, world-recognised authority in artificial intelligence research, an inventor and a serial entrepreneur. Ben is a deep thinker and also a man of action as the founder and CEO of SingularityNET, the Chairman of the OpenCog Foundation, the Chairman of the Artificial General Intelligence Society, the Chief Scientist of Mozi Health and Vice Chairman of Humanity+.
Goertzel is the chief scientist and chairman of AI software company Novamente LLC; chairman of the OpenCog Foundation; and advisor to Singularity University. He was Director of Research of the Machine Intelligence Research Institute (formerly the Singularity Institute).
Goertzel is the son of Ted Goertzel, a former professor of sociology at Rutgers University. He left high school after the tenth grade to attend Bard College at Simon's Rock, where he graduated with a bachelor's degree in Quantitative Studies.
Views on AI
Ben Goertzel's focus these days is the SingularityNET project, which brings AI and blockchain together to create a decentralized open market for AIs. It's a medium for the creation and emergence of AGI, a way to roll out superior AI-as-a-service to every vertical market, and a way to enable everyone in the world to contribute to and benefit from AI.
Ben's passions are numerous, including AGI, life extension biology, philosophy of mind, psi, consciousness, complex systems, improvisational music, experimental fiction, theoretical physics and metaphysics.
References and sources:
https://www.linkedin.com/in/bengoertzel/
https://www.youtube.com/watch?v=-qfB8...
https://singularitynet.io/
https://singularitynet.io/team/

Utility + Function
5. Michael Vassar - Does Michael Vassar Dream of Electric Sheep?

Utility + Function

Play Episode Listen Later Oct 16, 2019 49:56


Michael Vassar, former president of the Singularity Institute among many other accomplishments, speaks to Matthew about Thomas Edison, Climate Change, Railroads, President Lincoln, Modern Times, Alien Invasion, and searches for a happy ending. Thank you for listening!

Trans Resister Radio
Rachel Haywire, JG Michael interview, Factions Within Transhumanism, AoT#242

Trans Resister Radio

Play Episode Listen Later Aug 24, 2019 59:24


US Transhumanist Party presidential candidate Rachel Haywire comes onto The Age of Transitions radio show alongside JG Michael of Parallax Views. Rachel talks about some of the problems she has faced from within transhumanist circles. There are a few people within the movement who have no interest in bestowing the benefits of high technology on the masses. Topics include: transhumanism, politics, US Transhumanist Party, WTA, h+, Humanity Plus, Singularity, technology, neurodiversity, elitism, Natasha Vita-More, Max More, Alcor, life extension, Ben Goertzel, Singularity Institute, AI, Michael Vassar, intellectual property, billionaires, anarchy

The World Transformed
Ben Goertzel on the Future of AI, Best of TWT

The World Transformed

Play Episode Listen Later Mar 21, 2017 64:00


[This show first aired March 20, 2008.] Ben Goertzel joins Stephen Gordon and PJ Manney for a discussion of the future of AI. Dr. Ben Goertzel is the CEO/CSO of Novamente LLC. He's been working 20+ yrs in AI R&D and commercialization. He is a former CTO of Webmind, a thinking-machine company with 120+ employees. He earned his PhD in mathematics from Temple University. He has held several university positions in mathematics, computer science, and psychology in the US, New Zealand and Australia. He is the author of 70+ research papers, journalistic articles, and eight scholarly books dealing with topics in cognitive sciences and futurism. He is the principal architect of the Novamente Cognition Engine. And he is the Director of Research for the Singularity Institute for Artificial Intelligence. He came to us fresh off the first annual AGI conference at the University of Memphis, which he was instrumental in organizing. The importance of Friendly AI: Goertzel believes that empathy involves the ability to simulate other minds. His thought is that if an AI is given a greater-than-human capacity to do this, then it will be more empathetic than humans - a super-friendly AI. Goertzel wisely refrained from making any absolute predictions on the arrival of human-level AGI. He did state that if the field had unlimited money he wouldn't be surprised if it could be done in 5 years. With limited funding it might take 20 years. If you haven't seen it, don't miss the 2001 BBC documentary Predicting AI's Future.

Product Hunt Radio
Episode 43: Peter Diamandis

Product Hunt Radio

Play Episode Listen Later Oct 22, 2015 48:25


Peter Diamandis is the author of BOLD and co-founder of X-Prize, Singularity University, and many other organizations. We chat about Peter’s story, lessons learned from Elon Musk, Jeff Bezos, Richard Branson, and Larry Page, and what’s enabled him to succeed across industries, everything from space travel to human longevity to the education of the future. Edited by Alex Kontis. Feedback to @eriktorenberg

Philosophy Talk Starters
301: Turbo-charging the Mind

Philosophy Talk Starters

Play Episode Listen Later Oct 20, 2015 10:54


More at http://philosophytalk.org/shows/turbo-charging-mind. The rapid advance of computer technology in recent decades has produced a vast array of intelligent machines that far outstrip the human mind in speed and capacity. Yet these machines know far less than we do about almost everything. Is it possible to have the best of both worlds? Can we use new technologies to create a hybrid intelligence that seamlessly integrates the vast knowledge and skills embedded in our biological brains with the vastly greater capacity, speed, and knowledge-sharing ability of our mechanical creations? John and Ken examine the prospects for transcending the biological limits of the human mind with Anna Salamon from the Singularity Institute for Artificial Intelligence.

Big Picture Science
Doomsday Live, Part 2

Big Picture Science

Play Episode Listen Later Dec 3, 2012 52:16


If there is only one show you hear about the end of the world, let it be this one. Recorded before a live audience at the Computer History Museum on October 27th, 2012, this two-part special broadcast of Big Picture Science separates fact from fiction in doomsday prediction. In this second episode: a global viral pandemic … climate change … and the threat of assimilation by super-intelligent machines. Presented as part of the Bay Area Science Festival. Find out more about our guests and their work. Guests:
•   Kirsten Gilardi – Wildlife veterinarian at the University of California, Davis, leader of the Gorilla Doctors program, and team leader for the USAID Emerging Pandemic Threats PREDICT program
•   Ken Caldeira – Climate scientist, Carnegie Institution for Science at Stanford University
•   Luke Muehlhauser – Executive Director of the Singularity Institute
•   Bradley Voytek – Neuroscience researcher at the University of California, San Francisco

Singularity.FM
Luke Muehlhauser: Superhuman AI is Coming This Century

Singularity.FM

Play Episode Listen Later Jan 15, 2012 56:18


Last week I interviewed Luke Muehlhauser for Singularity 1 on 1. Luke Muehlhauser is the Executive Director of the Singularity Institute, the author of many articles on AI safety and the cognitive science of rationality, and the host of the popular podcast “Conversations from the Pale Blue Dot.” His work is collected at lukeprog.com. I have to say that […]

The Future And You
October 28, 2009 Episode

The Future And You

Play Episode Listen Later Oct 28, 2009 30:00


Eliezer Yudkowsky (co-founder and research fellow of the Singularity Institute for Artificial Intelligence) is today's featured guest. Topics: the Singularity and the creation of Friendly AI; his estimate of the probability of success in making a Friendly AI; and why achieving AI using evolutionary software might be monumentally dangerous. He also talks about human rationality, such as: the percentage of humans today who can be considered rational; his own efforts to increase that number; how the listener can seek the path to greater rationality in his or her own thinking; the benefits of greater rationality; and the amount of success that can be expected in this pursuit. Hosted by Stephen Euin Cobb, this is the October 28, 2009 episode of The Future And You. [Running time: 30 minutes] (This interview was recorded on October 4, 2009 at the Singularity Summit in New York City.) Eliezer Yudkowsky is an artificial intelligence researcher concerned with the Singularity, and an advocate of Friendly Artificial Intelligence. He is the author of The Singularity Institute for Artificial Intelligence publications Creating Friendly AI (2001) and Levels of Organization in General Intelligence (2002). His most recent academic contributions include two chapters in Oxford philosopher Nick Bostrom's edited volume Global Catastrophic Risks. Aside from research, he is also notable for his explanations of technical subjects in non-academic language, particularly on rationality, such as his article An Intuitive Explanation of Bayesian Reasoning. Also, along with Robin Hanson, he was one of the principal contributors to the blog Overcoming Bias sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found LessWrong.com, a community blog devoted to refining the art of human rationality.  

The Future And You
October 21, 2009 Episode

The Future And You

Play Episode Listen Later Oct 21, 2009 26:54


Michael Vassar and Michael Anissimov are today's featured guests. (Both are interviewed in their capacity as organizers of the Singularity Summit 2009 held earlier this month in New York City.) Topics: the Singularity and artificial intelligence in general, and this year's Singularity Summit conference in particular. Also: the limits of human reasoning, public resistance to the Singularity, and trends within the transhumanist community. Hosted by Stephen Euin Cobb, this is the October 21, 2009 episode of The Future And You. [Running time: 27 minutes] Michael Vassar is President of the Singularity Institute for Artificial Intelligence and is responsible for the organization of the Singularity Summit. He has held positions with the Peace Corps and with the National Institute of Standards and Technology. He writes and speaks on topics relating to the safe development of disruptive technologies. His papers include the Lifeboat Foundation analysis of the risks of advanced molecular manufacturing (which he co-authored with Robert Freitas) and Corporate Cornucopia, which he authored for the Center for Responsible Nanotechnology Task Force. He holds an M.B.A. from Drexel University and a B.S. in biochemistry from Penn State. Michael Anissimov writes and speaks on futurist issues, especially the relationships between accelerating change, nanotechnology, existential risk, transhumanism and the Singularity. His blog Accelerating Future has had over 4 million visits. He co-founded the non-profit Immortality Institute, the first organization focused on the abolition of nonconsensual death. He has worked or volunteered for the Singularity Institute for Artificial Intelligence, The Methuselah Foundation, The Center for Responsible Nanotechnology, and the Lifeboat Foundation. He has given talks to audiences at technology and philosophy conferences in San Francisco, Las Vegas, Los Angeles, and at Yale University. A leading voice on the technological Singularity, he was quoted multiple times in Ray Kurzweil's 2005 book The Singularity is Near: When Humans Transcend Biology. He was profiled in the May 2007 issue of Psychology Today. Michael Anissimov was the featured guest in The Future And You episode for the week of March 5, 2008. That episode (like all past episodes) is still available for your listening pleasure.

The Future And You
August 13, 2008 Episode

The Future And You

Play Episode Listen Later Aug 13, 2008 73:24


Ben Goertzel, noted scientist, author, futurist and pioneer in the field of Artificial Intelligence, is today's featured guest. Topics he discusses include: Artificial General Intelligence (AGI), the singularity, transhumanism, human immortality and how long he expects to live, and why (like your host) he is a founding member of the Order of Cosmic Engineers. Highlights of the interview include: The mechanism of human empathy seems to have been identified, and so can be reproduced in AI; even AI that is radically different in its thinking from human beings. Doctor Goertzel explains that this empathy is not based on emotion, and he emphasizes that he does not want to create an AI which is governed by its emotions. He stresses that the human mind does not qualify as a completely 'General Intelligence' but lies somewhere on the spectrum between AGI on one end and 'Narrow AI' on the other. This is one of several reasons why he does not expect AGI to be achieved by mimicking the workings of the human brain. He describes how our brains fool us into believing that we understand our actions and decisions when we don't. And why modeling an AI too closely on the human brain might make it, too, vulnerable to false notions. He also says, 'I think virtual worlds are going to be absolutely critical to the development of Artificial General Intelligence.' As well as 'Right now connecting AIs to virtual worlds is probably the best way to get an AI to have a general human-like embodied experience.' Hosted by Stephen Euin Cobb, this is the August 13, 2008 episode of The Future And You. [Running time: 74 minutes] Ben Goertzel has a PhD in mathematics from Temple University, and has held several university positions in mathematics, computer science, and psychology, in the US, New Zealand and Australia. He is the author of over 70 research papers, journalistic articles and 8 scholarly books dealing with topics in cognitive sciences and futurism. He has spent over 20 years in artificial intelligence research and commercialization. The former Chief Technical Officer of Webmind, a thinking machine company with 120 employees, he is today the CEO of Novamente, and is the principal architect of the Novamente Cognition Engine. He is also the Director of Research at the Singularity Institute for Artificial Intelligence.

Tech Talk Radio Podcast
July 5, 2008 Tech Talk Radio Show

Tech Talk Radio Podcast

Play Episode Listen Later Jul 5, 2008 60:27


Digital TV, antennaweb.org, saving wet cellphone, Profiles in IT (Ray Kurzweil, pioneer in OCR, text-to-speech synthesis, speech recognition, electronic keyboards, artificial intelligence), AI Singularity (when computers are smarter than men, six epochs of man-machine evolution), Singularity Institute for Artificial Intelligence, Google ogle defense (search terms used to establish community values), Google Trends (use and implications), Google privacy policy, Bill Gates steps down from Microsoft, National Cell Phone Courtesy Month (cell phone etiquette), hackers steal $2M from Citibank ATM users, useful Google search features (calculator, unit conversion, dictionary, spell checker, fill in the blank), and Food Science (how to prevent food from sticking). This show originally aired on Saturday, July 5, 2008, at 9:00 AM EST on 3WT Radio (WWWT).
