Bio

Bala has rich experience in retail technology and process transformation. Most recently, he worked as a Principal Architect for Intelligent Automation, Innovation & Supply Chain in a global Fortune 100 retail corporation. Currently he works for a luxury brand as Principal Architect for Intelligent Automation, providing technology advice on the responsible use of technology (Low Code, RPA, Chatbots, and AI). He is passionate about technology and spends his free time reading, writing technical blogs and co-chairing a special interest group with The OR Society.

Interview Highlights
02:00 Mentors and peers
04:00 Community bus
07:10 Defining AI
08:20 Contextual awareness
11:45 GenAI
14:30 The human loop
17:30 Natural Language Processing
20:45 Sentiment analysis
24:00 Implementing AI solutions
26:30 Ethics and AI
27:30 Biased algorithms
32:00 EU AI Act
33:00 Responsible use of technology

Connect
Bala Madhusoodhanan on LinkedIn

Books and references
· https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html - NLP
· https://www.theregister.com/2021/05/27/clearview_europe/ - facial recognition technology issues
· https://www.designnews.com/electronics-test/apple-card-most-high-profile-case-ai-bias-yet - the Apple Card story
· https://www.ft.com/content/2d6fc319-2165-42fb-8de1-0edf1d765be3 - data centre growth
· https://www.technologyreview.com/2024/02/06/1087793/what-babies-can-teach-ai/
· Independent Audit of AI Systems
· The Alan Turing Institute
· Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, Marco Iansiti & Karim R. Lakhani
· AI Superpowers: China, Silicon Valley, and the New World Order, Kai-Fu Lee
· The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, Mike Walsh
· Human + Machine: Reimagining Work in the Age of AI, Paul R. Daugherty & H. James Wilson
· Superintelligence: Paths, Dangers, Strategies, Nick Bostrom
· The Alignment Problem: How Can Artificial Intelligence Learn Human Values, Brian Christian
· Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman
· Wanted: Human-AI Translators: Artificial Intelligence Demystified, Geertrui Mieke De Ketelaere
· The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth, Michio Kaku, Feodor Chin et al

Episode Transcript

Intro: Hello and welcome to the Agile Innovation Leaders podcast. I'm Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more – with actionable takeaways for you the listener.

Ula Ojiaku
So I have with me here Bala Madhusoodhanan, who is a principal architect with a global luxury brand, and he looks after their RPA and AI transformation. It's a pleasure to have you on the Agile Innovation Leaders podcast, Bala. Thank you for making the time.

Bala Madhusoodhanan
It's a pleasure to have a conversation with the podcast and the podcast audience, Ula. I follow the podcast and there have been fantastic speakers in the past, so I feel privileged to join you on this conversation.

Ula Ojiaku
Well, the privilege is mine. Could you start off with telling us about yourself, Bala? What have been the key points or highlights of your life that have led to you being the Bala we know now?

Bala Madhusoodhanan
It's about putting myself into uncharted territory.
So my background is mechanical engineering, and when I got my first job, it was either go into the mechanical engineering manufacturing side or the software side, which was slightly booming at that point in time and obviously paying more, so I decided to take the software route; but eventually the paths overlapped. From a mainframe background I started working on supply chain, and then came back to optimisation, tied back to the manufacturing industry. Somewhere there is an overlap, but yeah, that was the first decision that probably got me here. The second decision was to work in the UK rather than the US, which again seemed very strange to a lot of my peers - they generally go to Silicon Valley or the East Coast - but I just made the choice to stay here for personal reasons. And the third was the mindset. Over the last 15 to 20 years I have had really good mentors and really good peers, so I always had their help to soundboard my crazy ideas, and I always try to keep those relationships ongoing.

Ula Ojiaku
What I'm hearing, based on what you said, is that lots of relationships have been key to getting you to where you are today, both mentors and peers. Could you expand on that? In what way?

Bala Madhusoodhanan
The technology has changed quite a lot, at least in the last 10 years. If you look at pre-2010, there was no machine learning, or rather it was statistics - people were just saying everything is statistics - and accessibility to information was limited. But post 2010, 2011, people started getting accessibility, and then there was a data buzz; big data came in. So there were a lot of opportunities where I could have taken a different career path, but every time I was in a dilemma about which route to take, I had someone with whom I had worked, or who was my team lead or manager, to guide me: take the emotion out of the decision-making and think with a calm mind, because you might jump into something, you might like it, you might not like it, but you should not regret it. Over the course of so many such decisions, my cognitive mind has also started thinking that way, so those conversations really help. And again, collective experience: if you look into the decision-making, it's not just my decision. I'm drawing on conversations with people who have applied their own experience, so it's not just me, and not just one situation, and understanding the why behind a decision actually helps. In short, it's a collection of conversations that I had with peers. A few of them are visionary leaders and good readers, so they always had good insight on where I should focus and where I shouldn't. And of late, there has been a community bus: a lot of things are moving to open source, there is a lot of community exchange of conversation, and blogging has picked up a lot. Connecting to those parts also gives you a different dimension to think about.

Ula Ojiaku
You said "community bus"; some of the listeners or people watching the video might not understand what you mean by that. Are you talking about meetups or communities that come together to discuss shared interests?

Bala Madhusoodhanan
If you are specifically interested in AI, or in the Power Platform or a low-code platform, there are a lot of content creators on those topics. You can go to YouTube or LinkedIn and get a lot of information about what's happening.
They do a lot of hackathons; again, you need to invest time in all these things. If you don't, then you are basically missing the boat. There are various channels like hackathons or meetup groups, or it could be a virtual conversation like you and me - we both have some passionate topics, that's why we resonate and we are talking about them. So it's all about taking the initiative and finding time for it, and then you have tons and tons of information available through community, conferences or meetup groups.

Ula Ojiaku
Thanks for clarifying. You said as well that you had a collection of conversations that helped you whenever you were at a crossroads - some new technology emerges or there's a decision you have to make, and you check in with your mentors, your peers, your personal board of directors almost, and they give you guidance. Now, looking back, would you say there were some turns you took that, knowing what you know now, you would have done differently?

Bala Madhusoodhanan
I would have liked to study more. That is the only thing, because sometimes an educational degree, even without practical knowledge, has a bigger advantage in certain conversations; otherwise your experience and your content should speak for you, and it takes a little bit of effort and time to earn that trust among leaders or peers - for them to say, okay, this person knows what he's talking about, I should probably trust him - compared with someone who has done a PhD. It's just finding the right balance of when I should have invested time in continuing my education. If there were a magic wand, I would go back and do everything that I have done, just offset by two years earlier; it would have given me different pathways. But again, it's all constraints. I did the best at that point in time with whatever constraints I had, so I don't have any regret per se.

Ula Ojiaku
So you are a LinkedIn Top Voice for AI. How would you define AI, artificial intelligence?

Bala Madhusoodhanan
I am a bit reluctant to use the term Artificial Intelligence. In my mind it is Artificial Narrow Intelligence, which is slightly different. So let me start with a building block, which is machine learning. Machine learning is like a data labeller. You go to a Tesco store, you read the label, you know it is a can of soup because you have read the label; but your brain is not only processing that image, it understands the surroundings, it does a lot of things when you pick up that can of soup. You can't expect that by just feeding one model to a robot. That's why I say AI is a bit over-glorified in my mind. It is artificial narrow intelligence. Automating certain specific tasks using a data set which is legal, ethical, and drives business value is what I would call machine learning; but yeah, AI is an overhyped and heavily utilised term.

Ula Ojiaku
You said there's a hype around artificial intelligence. What do you mean by that, and where do you see it going?

Bala Madhusoodhanan
Going back to the machine learning definition that I gave, it's basically predicting an output based on some input. That's machine learning in its simplest form. The word algorithm basically means something like a pattern finder.
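To make that definition concrete, here is a minimal sketch: labelled data goes in, two candidate pattern finders are compared, and an output is predicted from a new input. scikit-learn and the toy "can of soup" data are illustrative assumptions, not anything named in the episode.

```python
# Toy illustration of "machine learning as a pattern finder":
# labelled examples in, an algorithm fits a pattern, a prediction out.
# scikit-learn and the tiny dataset are illustrative assumptions.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Labelled data: [weight_g, is_cylindrical] -> soup can or not
X = [[330, 1], [400, 1], [310, 1], [150, 0], [90, 0], [500, 0]]
y = [1, 1, 1, 0, 0, 0]  # 1 = can of soup, 0 = something else

# Part of the engineering judgement is choosing which pattern finder
# suits the use case; here we compare two candidates on the same data.
for model in (LogisticRegression(), DecisionTreeClassifier(max_depth=2)):
    scores = cross_val_score(model, X, y, cv=3)
    print(type(model).__name__, scores.mean())

# Predict an output based on a new input
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[320, 1]]))  # -> [1]: the pattern says "can of soup"
```

The comparison loop is the point: the data stays the same, and the engineering cleverness is in selecting which algorithm fits the use case.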
What you're doing is giving it a lot of data, which is properly labelled and has proper diversity of information, and there are multiple algorithms that can find patterns. The cleverness, or the engineering mind that you bring in, is to select which algorithm you would like to use for your use case. Now you're channelling the whole of machine learning into one use case; that's why I'm going with the term narrow intelligence. Computers can do brilliant jobs. Ask a computer to do something like solving a Rubik's cube and it will do it very quickly, because the task is simple and it is just doing a lot of calculation. Give a Rubik's cube to a kid and they have to work at it; the brain is not trained enough, so it has to learn cognitively, and maybe then it will be faster. So anything which is pure calculation, pure computing - if the data is labelled properly and you want to predict an outcome, yes, you can use computers. One of the interesting videos that I showed in one of my previous talks, from 2018 or '19, was a robot trying to walk across the street. There were vehicles coming across, and the robot just took a hit and fell over. Then a four-year-old kid was asked to cross, and the kid knew to go to the signal and stop at a red light. The kid knew they could only walk when it was green, then looked around and walked. So you can see the difference: a four-year-old kid has contextual awareness of what is happening, whereas the robot, which is supposed to be called artificial intelligence, couldn't see that. Our human brains have evolved over millions of years. There are like 10 billion neurons or something, and it is highly optimised. When I sleep, a different set of neurons is running; when I speak to you, my eyes and ears are engaged and my motion-sensor neurons are running, but these are all highly optimised - the mother control knows how much energy should be sent to which neuron. Whereas for all these large language models there is only one task: you ask it, it's just going to do that. It doesn't have that intelligence to optimise. When I sleep, maybe 90 percent of my neurons are sleeping, getting recharged; only the dream neurons are working. Whereas once you put a model live, it doesn't matter - all the hundred thousand neurons will run. So yeah, it's in a very infant state. Maybe with quantum computing, with more power and better chips, things might change, but I don't see that happening in the next five to ten years.

Ula Ojiaku
Now, what do you say about GenAI? Would you also classify generative AI as purely artificial narrow intelligence?

Bala Madhusoodhanan
The thing with generative AI is that you're trying to generalise a lot of use cases. Say ChatGPT: you can throw in a PDF and ask something, or you can say, hey, can you create content for my blog, things like that. Again, all it is trying to do is come up with a response from some historical content. The thing I would say is that humans are really good at creativity. If a problem is thrown at a person, they will find creative ways to solve it. The tool with which we solve it might be a GenAI tool - I don't know, because I don't know the problem - but because GenAI is in a hype cycle, not every problem needs GenAI. That's my view.
There was an interesting piece of research done by someone at a university in Montreal. It looked at ten basic tasks, like converting text to text or text to speech, first with generative AI models - multiple models, because a lot of vendors provide different GenAI models - and then with task-specific models. What they found was that the task-specific models were cheap to run, very scalable, robust and highly accurate. Whereas with GenAI, when it goes into production or becomes enterprise-ready and is used by customers or third parties outside your ecosystem, you are putting yourself in a risk category: there could be copyright issues, IP issues, the risk of not getting the right consent from someone. I could say, can you create an image of a podcaster named Ula? You never know - you might not remember that one of your photos on Google or Twitter or somewhere is not set to private, and no one has come and asked you, saying, I'm using this image. So it's about finding the right balance. Even before adopting the technology, I think people should think about what problem they are trying to solve. In my mind, AI - artificial intelligence, or narrow intelligence - can serve two buckets. The first bucket is optimising an existing process: there are a lot of things I'm doing - is there a better way to do it? Is there a more efficient way? Can I save time? Can I save money? That is the optimisation or efficiency lever. The other could be workforce augmentation: I know what to do, I have a lot of data, but I don't have the infrastructure or people to do it. Say I have ten graduate-level data-entry people whose only job is to review receipts or invoices - I work in FCA, I have to manually look at each one, approve it, and file it. It is a very tedious job. So you augment the whole process with an OCR engine. OCR is Optical Character Recognition - really a beautiful term for what our eyes do. When we travel somewhere and get an invoice, we know exactly where to look: what is the total amount? What currency have I paid in? Have they taken the correct credit card? Is my address right? All of those things your brain does unconsciously. These models from different software vendors have been trained to capture those specific entities, which are universal; you just pass the image to the model and it picks out and maps that information. But as part of your process design, the machine does the heavy lifting of identifying the data points, and then you give it to someone to validate, because you want a human to approve it at the end. So you put a human in the loop - a human-centric design for a problem-solving situation. That's your efficiency lever. Then you have the innovation lever: I need to do something radical, I have not offered this product or service before. That's a space where you can use AI, again, for small proofs of concept. One example could be: I'm opening a new store in a new country and I don't know what the store layout should look like. These are my products, this is the store's square footage - can you recommend the best layout so that I can maximise sell-through?
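As a sketch of the invoice workflow described above - the OCR engine does the heavy lifting and a person validates at the end - the following might look roughly right. pytesseract is an assumed stand-in for whichever vendor OCR model is used; the regex, field names, and routing rule are invented for illustration.

```python
# Sketch of workforce augmentation with a human in the loop:
# OCR extracts candidate fields from an invoice; a person approves them.
# pytesseract is an assumed stand-in for a vendor OCR model; the regex
# and routing rule are illustrative, not from the episode.
import re
import pytesseract
from PIL import Image

def extract_invoice_fields(image_path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(image_path))
    total = re.search(r"Total[:\s]+([A-Z]{3}|[£$€])\s?([\d,]+\.\d{2})", text)
    return {
        "currency": total.group(1) if total else None,
        "amount": total.group(2) if total else None,
        "raw_text": text,
    }

def route_for_review(fields: dict) -> str:
    # The machine identifies the data points; a human approves at the end.
    if fields["amount"] is None:
        return "REJECT: send full document to the data-entry team"
    return f"QUEUE for human approval: {fields['currency']} {fields['amount']}"

print(route_for_review(extract_invoice_fields("invoice_001.png")))
```

The design choice is that nothing is auto-approved: the model only narrows the work, and the person stays accountable for the final decision.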
Now, a visual merchandising team will have some ideas on where things should go; they might provide that as a prompt, and the text can be converted into an image. Once you get the base image, then it's humans - it's us. It's a starting point rather than something that implements everything. But can you trust it? I don't know.

Ula Ojiaku
And that's why you said it's important to have a human in the loop.

Bala Madhusoodhanan
Yeah. The human in the loop matters because we humans bring contextual awareness to the situation, which the machine doesn't have. I'll tie this back to NLP. Natural Language Processing has two components: natural language understanding and natural language generation. When you create a machine learning model, all it is doing is learning the structure of language - it's called form. Say I give it 10,000 PDFs, or it reads a Harry Potter book: there is a difference between you reading a Harry Potter book and the machine interpreting it. You have imagination. You have context - oh, in the last chapter we were in a hilly region, or in a valley; words like mist, cold, wood - you have already started forming images and visualising things. The machine doesn't do that. The machine works on: this is the word, this is a pronoun, this is the noun, this is the structure of the language, so the next word should be this. Coming back to natural language understanding, that is where context and form come into play. Think of some letters put in front of you: you recognise A, you recognise B, you recognise the word, but you don't understand the context. One example: "I'm swimming against the current." Here, current is the motion of water. "My current code base is version 01" uses the same word, current, but the context is different. So interpreting the structure of language is one thing; in natural language understanding, we try to understand the context. NLG, Natural Language Generation, is about how to respond in a way that answers your query. Combined, that is NLP. It's a big field. There was a piece of research by Professor Emily Bender, one of the leading professors in the NLP space. The experiment was quite funny: it was about a parrot on an island talking to someone, and there was a shark - or some sea creature - in between, which basically broke the connection and was listening to what the person was saying and mimicking it. That is the problem with NLP: the system doesn't understand the context. It doesn't bring empathy. It doesn't understand voice modulation. When I'm talking to you, you can judge my emotional cues, you can bring empathy, you can tailor the conversation; if I'm feeling sad, you can put a different spin on it. Whereas if I'm chatting with a robot, it's just going to give a standard response. So you have to be very careful about the situations in which you use it - whether it is for a small team, whether it is going to be public, things like that.

Ula Ojiaku
That's interesting, because sometimes I join the Masters of Scale strategy sessions, and at the last one there was someone whose startup was featured, and apparently what their startup is doing is building AI solutions that are able to do sentiment analysis.
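Bala's "current" example - form versus context - can be made concrete with a crude sketch: the same surface form resolves to different senses depending on neighbouring words. The cue lists here are an invented heuristic, far weaker than real natural language understanding.

```python
# Crude sketch of why context matters in NLU: the same surface form
# ("current") resolves to different senses from neighbouring words.
# The keyword lists are an invented heuristic, far weaker than real NLU.
WATER_CUES = {"swimming", "river", "tide", "water"}
SOFTWARE_CUES = {"code", "codebase", "version", "release"}

def sense_of_current(sentence: str) -> str:
    words = set(sentence.lower().replace(".", "").split())
    if words & WATER_CUES:
        return "current = flow of water"
    if words & SOFTWARE_CUES:
        return "current = present/latest"
    return "ambiguous without more context"

print(sense_of_current("I'm swimming against the current."))
print(sense_of_current("My current code base is version 01."))
```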
And I think some of these are in their early stages, but some are already available, trying to understand tone of voice and the words people say, matching them with facial expressions; they can transcribe virtual meetings and say, okay, this person said this, they looked perplexed, or they looked slightly happy. So what do you think about that? I understand you're saying machines can't do that, but it seems there are already organisations trying to push the envelope in that direction.

Bala Madhusoodhanan
In the example you gave - the sentiment of a conversation - again, it is going by the structure and the words being used. "I am feeling good": good, here, reads as positive sentiment. For me the capability is slightly overhyped. The reason is that it might do 20 or 30 percent of what a human might do, but a human is any day better at that particular task. Sentiment analysis typically works on a sentiment data set, which says: these are certain proverbs, these are certain types of words, these generally indicate positive sentiment, a feel-good factor. But the model is only as good as the data, and no one is going in and constantly updating that dictionary. No one is thinking about the fact that Gen Z have a different lingo and millennials had a different lingo. So again, you have to treat it use case by use case, Ula.

Ula Ojiaku
At the end of the day, the way things currently are, machines aren't at the place where they are as good as humans. Humans are still good at doing what humans do, and that's the key thing.

Bala Madhusoodhanan
An interesting use case I read about recently, probably after COVID, was immersive reading, for people with dyslexia. So AI is used for good as well - I'm not saying it is completely bad. AI is used for good, like teaching kids who are dyslexic. Text-to-speech can read a paragraph aloud, the kid can hear it, and on the screen - I think OneNote has an Immersive Reader - it actually highlights which word is being spoken. A research study showed that kids in the study group using this immersive-reading audio textbook had a better grasp of the context, performed well, and were able to manage dyslexia better. Now, again, we are using the technology, but kudos to the research team: they identified a real problem, they formulated how the problem could be solved, and they were successful. Technology is being used for good elsewhere too - cancer research invests heavily in image clustering for brain tumours. There are a lot of use cases where it's used for good, but when you're using it, you just need to think about biases. You need to understand the risk; everything is risk and reward. If your reward outweighs the risk you're taking, then it's acceptable.

Ula Ojiaku
What would you advise leaders of organisations who are considering implementing AI solutions? What are the things we need to consider?

Bala Madhusoodhanan
Okay. Going back to business strategy and growth - that is something the enterprises or big organisations will have in mind - always have your AI goals aligned to what the business wants. As I said, there are two buckets: one is your efficiency driver, the operational efficiency bucket; the other is your innovation bucket. Just have a sense check of where the business wants to invest.
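A minimal sketch of the dictionary-based sentiment approach Bala describes, and of how it goes stale when nobody updates the lexicon: slang the dictionary has never seen scores as neutral. The word lists and scores are invented for illustration.

```python
# Minimal lexicon-based sentiment, illustrating "the model is only as
# good as the data": words missing from the dictionary score as neutral.
# The lexicon entries and example phrases are invented for illustration.
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2}

def sentiment(text: str) -> int:
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

print(sentiment("the service was good"))    #  1 -> positive
print(sentiment("the service was awful"))   # -2 -> negative
# Newer lingo the dictionary was never updated with scores as neutral:
print(sentiment("this fit is fire no cap")) #  0 -> wrongly neutral
```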
Just because AI is there doesn't mean you have to use it, right? Look for opportunities where you can drive more value. That would be my first line of thought. The second is educating leaders about AI literacy: what each model is, what it does, what the pitfalls are, ethical awareness about the use of AI - data privacy is big. That education can be high level, with some examples from the same business domain of where it has been successful, where it has not been so successful, and what challenges were faced. That's something I would urge everyone to invest time in. I think I also mentioned security. Over the years, the practice has been that security is always left until last. I was fortunate enough to work in organisations where a security-first mindset was in place, because once you have a proof of value and show it to people, people get excited, and it's about messaging it and making sure it is properly secured, protecting the end users. So the third one is having security-first design policies or principles. Machine learning or AI is no good if your data quality is not there, so having a data strategy is something I would definitely recommend. And start small. Just like agile: you take a value, you start small, you test whether your hypothesis was correct, you monitor how you performed, and then you think about scale. Doing a hello world doesn't mean you have mastered it. So have that mindset: start small, monitor, get constant feedback, and then think about scaling.

Ula Ojiaku
What are the key things about ethics and AI that you think leaders should be aware of at this point in time?

Bala Madhusoodhanan
Ethics is very subjective. It's about having different stakeholders give their honest opinion of whether your solution is the right thing to do against the values of the enterprise. And it's not your view or my view; it's a consensus view. For certain things where people are involved, you might need HR, legal, or the brand reputation team to come and assist you, because you may not understand the why behind certain policies that were put in place. So the first sense check is: is the solution, is the AI, ethical against the core values of the enterprise? If you pass that sense check, then come a lot of other threats, I would say. Did the model I'm using have fair representation across its data set? There's a classic case study of a big cloud-computing giant using an AI algorithm to filter résumés; they had to stop it immediately because the data set was all Ivy League, male, white dominated - it didn't have the right representation. If over ten years I have just been hiring a certain type of person, my data is inherently biased; no matter how good my algorithm is, I don't have the right data set. The other example is Clearview AI, which got into trouble for using very biased data to drive outcomes in decision-making related to immigration, which has bigger ramifications. Then you talk about fairness: whether the AI system is fair in the output it gives. There was a story about a man and a woman in California living together, and the woman wasn't given a credit card even though everything - the postcode - was the same and both of them worked at the same company; I think it had to do with Apple.
Apple wanted to bring in a silver credit card - the Apple Card or whatever it is - but it was so unfair that the woman, who was equally qualified, was not given the right credit limit, and the bank simply said the algorithm said so. Then you have privacy concerns. Take all these generic models that are available - even ChatGPT, for that matter. You can chat with ChatGPT and ask about someone like Trevor Noah: hey, can you create a joke? Now, it has been trained on jokes he has performed, which may be publicly available. But has the creator of the model got consent - hey Trevor, I'm going to use your content so that I can give better answers? And how many such consents would you need? Even Wikipedia: about 80 percent of the information there is public, but it is not diversified. What I mean is, you can find a lot of information if the person is from America, from the UK, from Europe, maybe from India to some extent; but what is the quality of the data for countries in Africa, or for South America? It does not represent the total diversity of data, and we have these large language models which have just been trained on that data. So there is a bias, and because of that bias your outcome might not be fair. Those two are the main things, and of course the privacy concern: if someone comes and says, hey, you used my data and you didn't even ask me, then you're into a lawsuit. Without getting proper consent - again, it's a fast-moving world, and people, including me, don't read every term and condition; we just scroll down, tick, confirm. But those are the areas where I think education should come into play. Think about it, because people don't understand what could go wrong - not to them, but to someone like them. Then there is the big fear of job displacement: if I put this AI system in place, what will I do with my workforce? Say I had ten people; you need to reimagine your workplace. These are the ten jobs my ten people are doing: if I augment six of those jobs, how can I use my ten resources effectively to do something different? That piece of the puzzle always goes back to the core values of the company and what they think about their people; it just needs a lot of input from multiple stakeholders.

Ula Ojiaku
It ties back to the enterprise strategy and the values, but with technology as it has evolved over the years, some things are made obsolete while new opportunities are created. Moving from when people travelled with horses and buggies to when the automobile came up: yes, there wasn't as much demand for horseshoes, horses and buggies, but there was a new industry - the people who became mechanics, the garages and things like that. So I think it's really about that. Going back to what you're saying, how can you redeploy people? And that might involve training, reskilling, and investing in the education of the workforce so that they're able to harness AI and do those creative things you've emphasised over this conversation about human beings: that creative aspect, that ability to understand context and nuance and apply it to the situation.

Bala Madhusoodhanan
I was fortunate to work with ForHumanity, an NGO which is trying to certify people to audit AI systems.
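A toy version of the kind of representation check an AI auditor might run over a training set, in the spirit of the biased résumé-filter story above: if ten years of hiring history is dominated by one group, any model fitted to it inherits that skew. pandas is an assumed tool; the records are invented for illustration.

```python
# Toy representation audit of a hiring dataset: if the labelled history
# is dominated by one group, a model trained on it inherits that skew.
# pandas is an assumed tool; the records are invented for illustration.
import pandas as pd

hires = pd.DataFrame({
    "school": ["Ivy", "Ivy", "Ivy", "Ivy", "State", "Ivy"],
    "gender": ["M", "M", "M", "M", "F", "M"],
    "hired":  [1, 1, 1, 1, 0, 1],
})

# Share of each group among the positive (hired) examples
positives = hires[hires["hired"] == 1]
for col in ("school", "gender"):
    print(positives[col].value_counts(normalize=True), "\n")

# If one group makes up ~100% of the positives, the "pattern" a model
# learns is the hiring bias itself, no matter how good the algorithm is.
```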
The EU AI Act is now in place and will be enforced soon, so you need people to put controls on all these AI systems. It's done to protect people, and it's done to protect the enterprise. I was fortunate enough to be part of that community, and I'm still working closely with the Operational Research Society. Again, you should be passionate enough and find the time to do these things, and if you do, the universe will find a way to give you something interesting to work on. The OR Society, The Alan Turing Institute, the ForHumanity community - I attended a few ICO workshops, which were quite interesting, because when you hear perspectives from people from different facets of life, like lawyers and solicitors, you think, ah, I wouldn't have interpreted this statement in that way. It was a good learning experience, and if I have time I will continue to invest it in ethical AI. And it's not only AI; it's the ethical use of technology, so sustainability is also part of the ethics bucket if you look at it. There was an interesting paper about how many data centres were opened between 2018 and 2024 - six years - and the power consumption has gone from X to two or three times X. We have already caused damage to the environment with all this technology, and just because the technology is there doesn't mean you have to use it. Again, it's that educational bit: what is the right thing to do? And there's ESG awareness - people are not aware. If you look at today's TikTok trendsetters, they know to look for a certified B Corp when buying something. The reason is that they know, and they're more passionate about saving the world. Maybe we are not - I don't know - but once you start educating and telling those stories, humans are really good; you will see a change of heart.

Ula Ojiaku
What I'm hearing you say is that education is key to helping us make informed choices. There is a time and place where you would need to use AI, but not everything requires it, and if we're more thoughtful in how we approach these - because these are tools at the end of the day - then we can at least try to be more balanced in weighing the opportunities against the risks, and the impact these decisions and the tools we choose to use have on the environment. Now, what books have you found yourself recommending most to people, and why?

Bala Madhusoodhanan
Because we have been talking about AI: AI Superpowers is one book, written by Kai-Fu Lee. There is the book by Brian Christian, The Alignment Problem, about the alignment of human values and machine learning: what are the human values? Where do you want to use machine learning? How do you come to a decision? That's a really interesting read. Then there is a book called Ethical Machines by Reid Blackman. It covers all the ethical facets of AI - biases, fairness, data privacy, transparency, explainability - and he gives quite detailed examples and walkthroughs of what those mean. Another interesting book was Wanted: Human-AI Translators: Artificial Intelligence Demystified by a Dutch professor - again, a really lovely narration of what algorithms are, what AI is, and everything you should think about, the controls and so on. So that is an interesting book.
Harvard professor Karim Lakhani wrote Competing in the Age of AI; that's a good book. The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You by Mike Walsh is another good one, which I finished a couple of months back.

Ula Ojiaku
And if the audience wants to find you, how can they reach out to you?

Bala Madhusoodhanan
They can always reach me on LinkedIn; I would be happy to touch base through LinkedIn.

Ula Ojiaku
Awesome. And do you have any final words and/or an ask of the audience?

Bala Madhusoodhanan
The final word is, again, responsible use of technology. Think about not just the use case; think about the environmental impact, think about future generations, because I think the damage is already done. Maybe not in this lifetime, but three or four lifetimes down the line, it might not be the beautiful Earth that we have.

Ula Ojiaku
It's been a pleasure, as always, speaking with you, Bala. Thank you so much for sharing your insights and wisdom, and thank you for being a guest on the Agile Innovation Leaders podcast.

Bala Madhusoodhanan
Thank you, lovely conversation, and yeah, looking forward to connecting with more like-minded LinkedIn colleagues.

Ula Ojiaku
That's all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or your favourite podcast provider. Also share with friends and do leave a review on iTunes; this would help others find this show. I'd also love to hear from you, so please drop me an email at ula@agileinnovationleaders.com. Take care and God bless!
In this episode of Create Like the Greats, Ross shares his thoughts on the future of AI, inspired by Sam Altman's insights. He dives into how AI is poised to revolutionize our work, culture, and ability to solve complex problems. Ross explores how far human capabilities have come and how AI will continue to enhance our potential. He also reflects on the current state of AI adoption and how those who take advantage of it now will have a significant edge in the future.

Key Takeaways and Insights:

1. People Have Become Dramatically More Capable Over Time
Over the years, human productivity has soared to levels previously thought impossible. Tasks like sending hundreds of emails, creating design mockups, optimizing blog posts, and repurposing content can now be done in minutes, thanks to technological advancements. Yet many still overlook the full potential of AI. In retrospect, we'll see this moment as a missed opportunity if we don't take action now. No matter your field - SEO, video editing, social media, sales - AI tools should be a core part of your toolkit. Those who embrace AI today will be ahead of the curve tomorrow.

2. AI Will Equip Us to Solve Complex Problems
AI is not just an incremental tool; it's a revolutionary one. As we integrate AI into more aspects of work and life, it will allow us to solve challenges that were previously out of reach. Recent reports predict that AI could contribute trillions to global productivity, especially in areas like marketing optimization, lead generation, and personalized outreach. This is just the beginning. Future generations will likely harness AI to achieve things we can't even imagine today.

3. AI Will Improve Lives as It Scales
AI's growth is just starting, and while the initial hype has quieted, the true power of AI is just beginning to unfold. As AI scales, it will bring profound improvements to everyday life, especially for those who integrate it into their workflows now. The ROI for early adopters who use AI to streamline communication, personalize campaigns, and analyze data is already extraordinary. The buzz may have faded, but those who stay the course will see transformative benefits.

Resources:
Distribution.Ai - https://distribution.ai/
The Intelligence Age: What It Means For Business & Revenue - https://rosssimmonds.com/intelligence-age/
The Intelligence Age - https://ia.samaltman.com/
Superintelligence: Paths, Dangers, Strategies - https://www.amazon.ca/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
HeyGen - https://heygen.com/?sid=rewardful&via=ross-simmonds
Jasper - https://www.jasper.ai/chat?fpr=foundationinc
Frase - https://www.frase.io/?via=ross73

—
Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/385-ai-utopia

Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don't perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes's predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics.

Nick Bostrom is a professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has framed much of the current thinking around humanity's future (such as the concept of existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, and the unilateralist's curse). He has been on Foreign Policy's Top 100 Global Thinkers list twice, and was the youngest person to rank among the top 15 in Prospect's World Thinkers list. He has an academic background in theoretical physics, AI, computational neuroscience, and philosophy. His most recent book is Deep Utopia: Life and Meaning in a Solved World.

Website: https://nickbostrom.com/
What if everything you know is just a simulation? In 2022, I was joined by the one and only Nick Bostrom to discuss the simulation hypothesis and the prospects of superintelligence. Nick is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, there is no one better to answer this question than him! Tune in.

—

Key Takeaways:
00:00:00 Intro
00:00:44 Judging a book by its cover
00:05:22 How could an AI have emotions and be creative?
00:08:22 How could a computing device / AI feel pain?
00:13:09 The Turing test
00:20:02 The simulation hypothesis
00:22:27 Is there a "Drake Equation" for the simulation hypothesis?
00:27:16 Penrose's orchestrated objective reduction
00:34:11 SETI and the prospect of extraterrestrial life
00:49:20 Are computers really getting "smarter"?
00:53:59 Audience questions
01:01:09 Outro

—

Additional resources:
Why do there seem to be more dystopias than utopias in our collective imagination? Why is it easier to find agreement on what we don't want than on what we do want? Do we simply not know what we want? What are "solved worlds", "plastic worlds", and "vulnerable worlds"? Given today's technologies, why aren't we working less than we potentially could? Can humanity reach a utopia without superintelligent AI? What will humans do with their time, and/or how will they find purpose in life, if AIs take over all labor? What are "quiet" values? With respect to AI, how important is it to us that our conversation partners be conscious? Which factors will likely make the biggest differences in terms of moving the world towards utopia or dystopia? What are some of the most promising strategies for improving global coordination? How likely are we to end life on earth? How likely is it that we're living in a simulation?

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He's been a Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute from 2005 until its closure in April 2024. He is currently the founder and Director of Research of the Macrostrategy Research Initiative. Bostrom is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014). His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His most recent book, Deep Utopia: Life and Meaning in a Solved World, was published in March of 2024. Learn more about him at his website, nickbostrom.com.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
The media is full of dystopian depictions of artificial intelligence, such as The Terminator and The Matrix, yet few have dared to dream up the image of an AI utopia. Nick Bostrom's most recent book, Deep Utopia: Life and Meaning in a Solved World, attempts to do exactly that. Bostrom explores what it would mean to live in a post-work world, where human labor is vastly outperformed by AI, or even made obsolete. When all of our problems have been solved in an AI utopia . . . well, what's next for us humans?

Bostrom is a philosopher and was founding director of the Future of Humanity Institute at Oxford University. He is currently the founder and director of research at the Macrostrategy Research Initiative. He also wrote the much-discussed 2014 book, Superintelligence: Paths, Dangers, Strategies.

In This Episode
* Our dystopian predisposition (1:29)
* A utopian thought experiment (5:16)
* The plausibility of a solved world (12:53)
* Weighing the risks (20:17)

Below is a lightly edited transcript of our conversation.

Our dystopian predisposition (1:29)

Pethokoukis: The Dutch futurist Frederik Polak famously put it that any culture without a positive vision of the future has no future. That's a light paraphrase. And I kind of think that's where we are right now, that despite the title of your book, I feel like right now people can only imagine dystopia. Is that what you think? Do I have that wrong?

Bostrom: It's easier to imagine dystopia. I think we are all familiar with a bunch of dystopian works of fiction. The average person could rattle off Brave New World, 1984, The Handmaid's Tale. Most people probably couldn't name a single utopian work, and even in the attempts that have been made, if you look closely at them, you probably wouldn't actually want to live there. It is an interesting fact that it seems easier for us to imagine ways in which things could be worse than ways in which things could be better. Maybe a culture that doesn't have a positive vision has no future but, then again, cultures that have had positive visions have also often ended in tears. A lot of the time, utopian blueprints have been used as excuses for coercively imposing some highly destructive vision on society. So you could argue either way whether it is actually beneficial for societies to have a super clear, long-term vision that they are steering towards.

I think if we were to ask people to give a dystopian vision, we would get some very picturesque, highly detailed visions, from having marinated in science fiction for decades. But then if you asked people about utopia, I wonder if all their visions would be almost alike: kind of this clean, green world, with maybe some tall skyscrapers or something, and people generally getting along. I think it'd be a fairly bland, unimaginative vision.

That would be the idea of "all happy families are alike, but each unhappy family is unhappy in its own unique way." I think it's easy enough to name ways in which the world could be slightly better than it is. So imagine a world exactly like the one we have, except minus childhood leukemia. Everybody would agree that definitely seems better.
The problem is, if you start to add these improvements and you stack on enough of them, then eventually you face a much more philosophically challenging proposition, which is: if you remove all the difficulties and all the shadows of human life, all forms of suffering and inconvenience, and all injustice and everything, then you risk ending up in this rather bland future where there is no challenge, no purpose, no meaning for us humans, and it then almost becomes dystopian again, but in a different way. Maybe all our basic needs are catered to, but there seems to be some other part missing that is important for humans to have flourishing lives.

A utopian thought experiment (5:16)

Is your book a forecast or is it a thought experiment?

It's much more a thought experiment. As it happens, I think there is a non-trivial chance we will actually end up in this condition - I call it a "solved world" - particularly with the impending transition to the machine intelligence era, which I think will be accompanied by significant risks, including existential risk. My previous book, Superintelligence, which came out in 2014, focused on what could go wrong when we are developing machine superintelligence. But if things go right - and this could unfold within the lifetime of a lot of us who are alive on this planet today - they could go very right. In particular, all kinds of problems that could be solved with better technology could be solved in this future where you have superintelligent AIs doing the technological development. And we might then actually confront the situation where these questions we can now explore as a thought experiment would become pressing practical questions, where we would actually have to make decisions about what kinds of lives we want to live, what kind of future we want to create for ourselves, if all these instrumental limitations were removed that currently constrain the choice set we face.

I imagine the book would have seemed almost purely a thought experiment before November 2022, when ChatGPT was rolled out by OpenAI; now, to some people, it seems like these are questions certainly worth pondering. You talked about the impending machine superintelligence - how impending do you think it is, and what is your confidence level? Certainly we have technologists all over the map speaking about the likelihood of reaching it, maybe through large language models; other people think those can't quite get us there. So how much work is "impending" doing in that sentence?

I don't think we are in a position any longer to rule out even extremely short timelines. We can't be super confident that we might not have an intelligence explosion next year. It could take longer, it could take several years, it could take a decade or longer. We have to think in terms of smeared-out probability distributions here, but we don't really know what capabilities will be unlocked as you scale up even the current architectures one more order of magnitude, to the GPT-5 or GPT-6 level. It might be that, just as the previous steps from GPT-2 to GPT-3 and from 3 to 4 unlocked almost qualitatively new capabilities, the same might hold as we keep going up this ladder of scaling up the current architectures, and so we are now in a condition where it could happen at any time, basically.
It doesn't mean it will happen very soon, but we can't be confident that it won't. I do think it is slightly easier for people now, even just looking at the current AI systems, to take these questions seriously, and I think it will become a lot easier as the penny starts to drop that we're about to see this big transition to the machine intelligence era. When the previous book, Superintelligence, was published back in 2014 - and it was in the works for six years prior - what was completely outside the Overton window was even the idea that one day we would have machine superintelligence, and, in particular, the idea that there would then be an alignment problem, a technical difficulty of steering these superintelligent intellects so that they would actually do what we want. It was completely neglected by academia. People thought that's just science fiction or idle futurism. There were maybe a handful of people on the internet who were starting to think about it. In the intervening 10 years, that has changed, and so now all the frontier AI labs have research teams specifically trying to work on scalable methods for AI alignment. It is much more widely recognized over the last couple of years that this will be a transformative thing; you have statements coming out from leading policymakers, from the White House; the UK had this global summit on AI. So this alignment problem and the risks related to AI have entered the Overton window, and I think some of these other issues, as to what the world will look like if we succeed, similarly will have to come inside the Overton window, and probably will do so over the next few years.

So we have an Overton window, we have this technological advance with machine intelligence. Are you as confident about one of the other pillars of your thought experiment, which is an equally science-futuristic advance in our ability to edit ourselves, to modify ourselves and our brains and our emotions? That seems to go hand-in-hand with the thought experiment.

I think once we develop machine superintelligence, then we will soon thereafter have tremendous advances in other technological areas as well, because we would then not be restricted to humans trying to develop new technologies with our biological brains. This research and development would be done by superintelligences on digital timescales rather than biological timescales. So the transition to superintelligence would, I think, mean a kind of telescoping of the future.

There are all these technologies we can see are, in principle, possible. They don't violate the laws of physics. In the fullness of time, human civilization would probably reach them if we had 10,000 years to work on it: all these science-fiction-like things - space colonies, cures for aging, perfect virtual reality, uploading into computers. We could see how we might eventually develop those if we had thousands of years to work on them; they're unrealistic given the current state of technology, but there are no in-principle barriers. And all those technologies might become available quite soon after you have superintelligence doing the research and development.
So I think we will then start to approximate the condition of technological maturity: a condition where we have already developed most of those general-purpose technologies that are physically possible, and for which there exists some in-principle feasible pathway from where we are now to developing them.

The plausibility of a solved world (12:53)

I know one criticism of the book is that, with this notion of a "solved world" or technological maturity, the combinatorial nature of ideas would allow for an almost unlimited number of new possibilities, so in no way could we reach maturity or a technologically solved state of things. Is that a valid criticism?

Well, it is a hypothesis you could entertain that there is an infinite number of ever-higher levels of technological capability, such that you'd never be able to reach or even approximate any maximum. I think it's more likely that there will eventually be diminishing returns. You will eventually have figured out the best way to do most of the general things that need doing: communicating information, processing information, processing raw materials, creating various physical structures, et cetera. That happens to be my best guess, but in any case, you could bracket that: we could at least establish lower bounds on the kinds of technological capabilities that an advanced civilization with superintelligence would be able to develop, and we can list a number of those technologies. Maybe it would be able to do more than that, but at least it would be able to do various things that we can already see and outline how to do; it's just that we can't quite put all the pieces together and carry it out yet.

And the book lists a bunch of these affordances that a technologically mature civilization would at least have, even if maybe there would be further things we haven't even dreamt of yet. Already that set of technological capabilities would be enough to radically transform the human condition, and indeed to present us with some of these basic philosophical challenges of how to live well in a world where we wouldn't only have a huge amount of control over external reality, and wouldn't only be able to automate human labor across almost all domains, but would also, as you alluded to earlier, have unprecedented levels of control over ourselves - our biological organism and our minds - using various forms of biotechnology or newer technologies.

In this kind of scenario, is the purpose of our machines to solve our problems, or, not to give us problems, but to give us challenges, give us things to do?

It then comes down to questions about value. If we had all of these capabilities to achieve various types of worlds, which one would we actually want? And I think there are layers to this onion, different levels of depth at which one can approach and think about this problem. At the outermost layer you have the idea that, well, we will have increased automation as a result of advances in AI and robotics, and so some humans will become unemployed as a result. At the most superficial layer of analysis, you would then think, "Well, some jobs become unnecessary, so you need to maybe retrain workers to move to other areas where there is continued demand for human labor.
Maybe they need some support whilst they're retraining and stuff like that.”

Then you take it a step further, you peel off another layer of the onion, and you realize that, well, if AI truly succeeds, if you have artificial general intelligence, then it's really not just some areas of human economic contribution that get affected, but all areas, with a few exceptions that we can return to. AIs could do everything that we can do, and do it better, and cheaper, and more efficiently. And you could say that the goal of AI is full unemployment. The goal is not just to automate a few particular tasks, but to develop a technology that allows us to automate all tasks. That's kind of what AI has always been about; it hasn't succeeded yet, but that's the goal, and we are seemingly moving closer to it. And so, with the asterisk here that there are a few exceptions we can zoom in on, you would then get a kind of post-work condition in which there would be no need for human labor at all.

My baseline—I think this is a reasonable baseline—is that the history of technology is a history of both automating things and creating new things for us to do. So I think if you ask just about any economist, they will say that that should be our guide for the future: that this exact same technology will think of new things for people to do, that we, at least up to this point, have shown infinite creativity in creating new things to do, and whether or not you want to call those “work,” there are certainly things for us to do, so boredom should not be an issue.

So there's a further question of whether there is anything for us to do, but if we just look at the work part first: are there ways for humans to engage in economically productive labor? So far, what has been the case is that various specific tasks have been automated, and so instead of having people digging ditches using their muscles, we can have bulldozers digging ditches, and you could have one guy driving the bulldozer and doing the work of 50 people with shovels. And so human labor keeps moving out of the areas where you can automate it and into other areas where we haven't yet been able to automate it. But if AIs are able to do all the things that we can do, then there would be no further place, at least at first sight, for human workers to move into. The exceptions to this, I think, are cases where the consumer cares not just about the product, but about how the product . . .

They want that human element.

You could have consumers with just a raw preference that a particular task was performed by humans, or a particular product made by them, just as now consumers sometimes pay a little premium if a gadget was produced by a politically favored group, or maybe handcrafted by indigenous people; we may pay more for it than if the same object was made in a sweatshop in Indonesia or something. Even if the actual physical object itself is equally good in both cases, we might care about the causal process that brought it into existence. So to the extent that consumers have those kinds of preferences, there could remain ineliminable demand for human labor, even at technological maturity. You could think of possible examples: Maybe we just prefer to watch human athletes compete, even if robots could run faster or box harder. Maybe you want a human priest to officiate at your wedding, even if a robot could say the same words with the same intonations and the same gestures, et cetera.
So there could be niches of that sort, where there would remain demand for human labor no matter how advanced our technology.

Weighing the risks (20:17)

Let me read one friendly critique from Robin Hanson of the book:

Bostrom asks how creatures very much like him might want to live for eons if they had total peace, vast wealth, and full eternal control of extremely competent AI that could do everything better than they. He . . . tries to list as many sensible possibilities as possible . . . But I found it . . . hard to be motivated by his key question. In the future of creatures vastly more capable than us I'm far more interested in what those better creatures would do than what a creature like me now might do there. And I find the idea of creatures like me being rich, at peace, and in full control of such a world quite unlikely.

Is the question he would prefer you to answer simply unanswerable, so that the only question you can answer is what people like us would be like?

No, I think there are several different questions, each of which, I think, is interesting. In some of my other work, I do, in fact, investigate what other creatures, non-human creatures, digital minds we might be building, for example, AIs of different types, what they might want, and how one might think about what would be required for the future to go well for these new types of being that we might be introducing. I think that's an extremely important question as well, particularly from a moral point of view. It might be that, in the future, most inhabitants of the future will be digital minds or AIs of different kinds. Some might be at scales far larger than us human beings.

In this book, though, the question I'm primarily interested in is: if we are interested in it from our own perspective, what is the best possible future we could hope for for ourselves, given the values that we actually have? And I think that could be practically relevant in various ways. There could, for example, arise situations where we have to make trade-offs about delaying the transition to AI, with the risk maybe going up or down depending on how long we take, and, in the meantime, people like us dying of aging and disease and all the other things that currently kill people.

So what are the different risk trade-offs we are willing to take? That might depend, in part, on how much better we think our lives could be if this goes well. If the best we could hope for was just continuing our current lives for a bit longer, that might be a different choice situation than if there was actually on the table something super desirable from our current point of view; then we might be willing to take bigger risks to our current lives if there was at least some chance of achieving this much better life. And I think those questions, from a prudential point of view, we can only try to answer if we have some conception of how good the potential outcome would be for us.
But I agree with him that both of these questions are important.

It also seems to me that, initially, there was a lot of conversation after the rollout of ChatGPT about existential risk; we were talking about an AI pause. And I feel like the pendulum has swung completely to the other side: whether it's due to people not wanting to miss out on all the good stuff that AI could create, or worry about Chinese AI beating American AI, the default mode that we're in right now is full speed ahead, and if there are problems we'll just have to fix them on the fly, but we're just not going to have any substantial way to regulate this technology, other than, perhaps, the most superficial of guardrails. I feel like that's where we're at now; at least, that's what it feels like in Washington right now.

Yeah, I think that has been the default mode of AI development since its inception, and still is today, predominantly. The difficulties are actually in getting the machines to do more, rather than in limiting what they're allowed to do. That is still the main thrust. I do think, though, that the first derivative of this is towards increased support for various kinds of regulations and restrictions, and even a growing number of people calling for an “AI pause” or wanting to stop AI development altogether. This used to be basically a completely fringe position . . . there were no real serious efforts to push in this direction for almost all the decades of AI up until maybe two years ago or so. Since then there has been an increasingly vocal, still minority, set of people who are trying hard to push for increased regulation, for slowing down, and for raising the alarm about AI developments. And I think it remains an open question how this will unfold over the coming years.

I have a complex view on what would actually be desirable here. On the one hand, I do think there are these significant risks, including existential risks, that will accompany the transition. When we develop superintelligent machines, it's not just one more cool gadget, right? It's the most important thing ever happening in human history, and they will be to us as we are to chimpanzees or something—potentially a very powerful force, and things could go wrong there. So I do agree with the c . . .

So I've been told over the past two years!

And to the point where some people think of me as a kind of doomsayer or anti-AI, but that's not the full picture. I think, ultimately, it would be a catastrophe if superintelligence was never developed, and that we should develop this, ideally carefully. And it might be desirable if, at a critical point, just when we figure out how to make machines superintelligent, whoever is doing this, whether it's some private lab or some government Manhattan Project, has the ability to go a little bit slow at that stage, maybe to pause for six months, or, rather than immediately cranking all the knobs up to 11, to do it incrementally, see what happens, make sure the safety mechanisms work. I think that might be more ideal than a situation where you have, say, 15 different labs all racing to get there first, where whoever takes any extra precautions just immediately falls behind and becomes irrelevant. I think that would seem . . .

I feel like where we're at right now—I may have answered this differently 18 months ago—is that second scenario.
At least here in the United States, and maybe I'm too Washington-centric, but I feel we're realistically at the “crank it up to 11” phase.

Well, we have seen the first-ever real AI regulations coming on board. It's something rather than nothing, and you could easily imagine, if pressure continues to build, there will be more demand for this, and then, if you have some actual adverse event, some bad thing happening, then who knows? There are other technologies that have been stymied because of . . . like human cloning, for example, or nuclear energy in many countries. So it's not unprecedented that society could convince itself that a technology is bad. So far, historically, all these technology bans and relinquishments have probably been temporary, because there have been other societies making other choices, and each generation is, to some extent, a new roll of the die, and eventually you get . . .

But it might be that we already have, in particular with AI, technologies that, if fully deployed, could allow a society within a few years to lock itself into some sort of permanent orthodoxy. Imagine deploying even current AI systems fully to censor dissenting information: if you had some huge stigmatization of AI, where it became simply taboo to say anything positive about AI, and then very efficient ways of enforcing that orthodoxy, by shadow banning people who dissent from it, or canceling them, or surveilling anybody who might do research on AI. The technology to freeze in a temporary social consensus might be emerging. And so if, 10 years from now, there were a strong global consensus on some of these issues, then we can't rule out that it would become literally permanent. My optimal level of government oversight and regulation would probably be more than we currently have, but I do worry a little bit that it might not increase to the optimal point and then stop there; once the avalanche starts rolling, it could overshoot the target and result in a problem. To be clear, I still think that's unlikely, but I think it's more likely than it was two years ago.

In 2050, do you feel like we'll be on the road to deep utopia or deep dystopia?

I hope the former. I think both are still in the cards, for all we know. There are big forces at play here. We've never had a machine intelligence transition before. We don't have the kind of social or economic predictive science that really allows us to say what will happen to political dynamics as we change these fundamental parameters of the human condition. We don't yet have a fully reliable solution to the problem of scalable alignment. I think we are entering uncharted territories here, and both extremely good and extremely bad outcomes are possible, and we are a bit in the dark as to how all of this will unfold.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
For decades, philosopher Nick Bostrom (director of the Future of Humanity Institute at Oxford) has led the conversation around technology and human experience (and grabbed the attention of the tech titans who are developing AI - Bill Gates, Elon Musk, and Sam Altman). Now, a decade after his New York Times bestseller Superintelligence warned us of what could go wrong with AI development, he flips the script in his new book Deep Utopia: Life and Meaning in a Solved World (March 27), asking us to instead consider “What could go well?” Ronan recently spoke to Professor Nick Bostrom. Professor Bostrom talks about his background, his new book Deep Utopia: Life and Meaning in a Solved World, why he thinks advanced AI systems could automate most human jobs, and more.

More about Nick Bostrom: Swedish-born philosopher Nick Bostrom was founder and director of the Future of Humanity Institute at Oxford University. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, his work has pioneered some of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit.
Join my mailing list https://briankeating.com/list to win a real 4-billion-year-old meteorite! Open to all .edu emails in the USA.
Nick Bostrom is a Professor at Oxford University and the founding director of the Future of Humanity Institute. Nick is also the world's most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked a global conversation about the future of AI. His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list. He has just published a new book called “Deep Utopia: Life and Meaning in a Solved World.”

What you will learn
Find out why Nick is spending time in seclusion in Portugal
Nick shares the big ideas from his new book “Deep Utopia”, which dreams up a world perfectly fixed by AI
Discover why Nick got hooked on AI long before the internet was a big deal, and how those big future questions sparked his path
What would happen to our jobs and hobbies if AI races ahead in the creative industries? Nick shares his thoughts
Gain insights into whether AI is going to make our conversations better or just make it easier for people to push ads and political agendas
Plus loads more!
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He is known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. Nick is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is also the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), which became a New York Times bestseller and sparked a global conversation about the future of AI, and Deep Utopia: Life and Meaning in a Solved World (Ideapress, 2024). Nick's work has pioneered some of the ideas that frame current thinking about humanity's future: the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, to name a few. To know more about Nick Bostrom, visit https://businessabc.net/wiki/nick-bostrom

Deep Utopia: Life and Meaning in a Solved World
In his latest book, "Deep Utopia: Life and Meaning in a Solved World," Nick Bostrom shifts the focus from the potential dangers of artificial intelligence explored in his previous work, "Superintelligence: Paths, Dangers, Strategies," to envisioning a future where AI development unfolds positively. As the conversation around AI continues to evolve, Bostrom probes the profound philosophical and spiritual implications of a world where superintelligence is safely developed, effectively governed, and utilised for the benefit of humanity. In this hypothetical scenario of a "solved world," where human labour becomes obsolete due to advanced AI systems, Bostrom raises existential questions about the essence of human existence and the pursuit of meaning. With the advent of technologies capable of fulfilling practical needs and desires beyond human capabilities, society would enter a state of "post-instrumentality," where the traditional purposes of human endeavour are rendered obsolete.

About citiesabc.com: https://www.citiesabc.com/
About businessabc.net: https://www.businessabc.net/
About fashionabc.org: https://www.fashionabc.org/
About Dinis Guarda: https://www.dinisguarda.com/ | https://businessabc.net/wiki/dinis-guarda
Nick Bostrom's previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Bostrom and Shermer discuss: An AI Utopia and Protopia • Trekonomics, post-scarcity economics • the hedonic treadmill and positional wealth values • colonizing the galaxy • The Fermi paradox: Where is everyone? • mind uploading and immortality • Google's Gemini AI debacle • LLMs, ChatGPT, and beyond • How would we know if an AI system was sentient? Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world's most cited philosopher aged 50 or under.
What would life look like in a fully automated world? How would we derive meaning in a world of superintelligence? Today's Win-Win episode is all about utopias, dystopias and thought experiments, because I'm talking to Professor Nick Bostrom. Nick is one of the world's leading philosophers: a leading thinker on the nature of consciousness, AI, catastrophic risks, cosmology... he's also the guy behind the Simulation Hypothesis, the Paperclip Maximizer thought experiment, and the seminal AI book Superintelligence. Off into the hypotheti-sphere we go!

Chapters
0:00 - Intro
01:42 - Why a book on Utopia?
03:31 - Different types of Utopias
11:40 - How to find purpose in a solved world?
18:31 - Potential Limits to Technology
22:34 - How would Utopians approach Competition?
30:24 - Superintelligence
34:39 - Vulnerable World Hypothesis
39:48 - Thinking in Superpositions
41:24 - Solutions to the Vulnerable World?
46:34 - Aligning Markets to Defensive Tech
48:43 - Digital Minds & Uploading
52:25 - AI Consciousness
55:08 - Outro

Links:
Nick's Website - https://nickbostrom.com/
Anthropic Bias Paper - https://anthropic-principle.com/
Deep Utopia Book - https://nickbostrom.com/booklink/deep...
Superintelligence book - Superintelligence: Paths, Dangers, Strategies
Vulnerable World Hypothesis - https://nickbostrom.com/papers/vulner...
Orthogonality Thesis - https://nickbostrom.com/superintellig...
Simulation Argument - https://simulation-argument.com/
Digital Minds - https://nickbostrom.com/papers/intere...
Future of Humanity Institute - https://www.fhi.ox.ac.uk/

The Win-Win Podcast: Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins.

Credits
♾️ Hosted by Liv Boeree & Igor Kurganov
♾️ Produced & Edited by Raymond Wei
♾️ Audio Mix by Keir Schmidt
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom's new book, "Deep Utopia", is out today, published by peterhartree on March 27, 2024 on The Effective Altruism Forum. Description: A greyhound catching the mechanical lure - what would he actually do with it? Has he given this any thought? Bostrom's previous book, Superintelligence: Paths, Dangers, Strategies changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of "post-instrumentality", in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable. Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day? Deep Utopia shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future. Links to purchase: Amazon Barnes & Noble There's a table of contents on the book's web page. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nick Bostrom's latest book, Deep Utopia: Life and Meaning in a Solved World, will be published on 27th March, 2024. It's available to pre-order now: https://nickbostrom.com/deep-utopia/ The publisher describes the book as follows: A greyhound catching the mechanical lure—what would he actually do with it? Has he given this any thought? Bostrom's previous book, Superintelligence: Paths, Dangers, Strategies changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of “post-instrumentality”, in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable. Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day? Deep Utopia shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future.
We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains. Nick Bostrom, a professor at the University of Oxford and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that, in this century, we will create the first general intelligence that will be smarter than humans. He sees this as the most important thing humanity will ever do, but it also comes with an enormous responsibility. Bostrom notes that there are existential risks associated with the transition to the machine intelligence era, such as the possibility of a superintelligence that overrides human civilization with its own value structures. In addition, there is the question of how to ensure that conscious digital minds are treated well. However, if we succeed in ensuring the well-being of artificial intelligence, we could have vastly better tools for dealing with everything from diseases to poverty. Ultimately, Bostrom believes that the development of machine superintelligence is crucial for a truly great future.

Chapters:
0:00 Smarter than humans
0:57 Brains: From organic to artificial
1:39 The birth of superintelligence
2:58 Existential risks
4:22 The future of humanity

Go Deeper with Big Think:
►Become a Big Think Member - Get exclusive access to full interviews, early access to new releases, Big Think merch and more
►Get Big Think+ for Business - Guide, inspire and accelerate leaders at all levels of your company with the biggest minds in business

About Nick Bostrom:
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002). Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.
About Big Think | Smarter Faster™
Big Think is the leading source of expert-driven, educational content. With thousands of videos featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century.
Sam Harris speaks with Nick Bostrom about the problem of existential risk. They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we're living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.
Did we decide to record an episode about simulation theory or were we programmed to do so by the creators of our virtually-coded reality? Are we living on some super-futuristic civilization's hard drive? Join us as we question the very concept of our own reality and debate the probability that we exist in the one true universe instead of the potentially infinite number of simulated others.

SPOILERS for the following media:
Rick and Morty - "M. Night Shaym-Aliens!" - S01E04
The Jokester - Isaac Asimov (1956)
Total Recall (1990)

Also mentioned (no spoilers):
The Matrix (1999)
Black Mirror

Further reading/links:
Unrecord - Official Early Gameplay Trailer: https://www.youtube.com/watch?v=IK76q13Aqt0
ChatGPT AI Game Prototypes - Lee Vermeulen: https://www.youtube.com/watch?v=VC_pSgAMbUU
The Simulation Argument - Nick Bostrom: https://simulation-argument.com/
Superintelligence: Paths, Dangers, Strategies - Nick Bostrom: https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
Is Reality Real? The Simulation Argument - Kurzgesagt: https://www.youtube.com/watch?v=tlTKTTt47WE
Final Deployment 4: Queen Battle Walkthrough - Adult Swim: https://www.youtube.com/watch?v=AKjmDPGJI-4
In this episode, the guests discuss AI's historical timeline and how it has evolved to become more versatile and more knowledgeable about business processes. They also discuss how to use it as a tool in your own work, and the challenges that come with using it. "An exciting development is that AI is now much more versatile and has greater knowledge of business processes."

What you will learn:
AI development through the years;
Use of AI in different fields;
AI limitations and challenges;
How AI can be used as a tool to help businesses discover new opportunities and refine their processes;
Dependence on the human factor and replacement of the creative process.

Recommended reading: To learn how modern AI works, I would recommend simply searching online and reading the articles that sound interesting. What counts as new and modern in AI changes so fast that it is hard to pick a book that isn't outdated. I would also strongly recommend spending some time playing with ChatGPT to see what level modern AI is actually at. As a more general book on the topic, I can recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom, which says more about AI and where we may be heading.
Learn on your own terms. Get the PDF, infographic, full ad-free audiobook and animated version of this summary and a lot more on the top-rated StoryShots app: https://www.getstoryshots.com Help us grow and create more amazing content for you! ⭐️⭐️⭐️⭐️⭐️ Don't forget to subscribe, rate and review the StoryShots podcast now. What should our next book be? Suggest and vote it up on the StoryShots app.

StoryShots Book Summary and Review of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Life gets busy. Has Superintelligence been on your reading list? Learn the key insights now. We're scratching the surface here. If you don't already have Nick Bostrom's popular book on artificial intelligence and technology, order it here or get the audiobook for free to learn the juicy details.

Introduction
What happens when artificial intelligence surpasses human intelligence? Machines can think, learn, and solve complex problems faster and more accurately than we can. This is the world that Nick Bostrom explores in his book, Superintelligence. Advances in artificial intelligence are bringing us closer to creating superintelligent beings. Big tech companies like Microsoft, Google, and Facebook are all racing to create a super powerful AI. They're pouring a lot of resources into research and development to make it happen. But here's the catch: without the right safety measures and rules in place, things might go haywire. That's why it's important to step in and make sure AI stays under control. Imagine a world where machines are not only cheaper but also way better at doing jobs than humans. In that world, machines might take over human labor, leaving people wondering, "What now?" So it's important to come up with creative solutions to make sure everyone's taken care of. The book shows what happens after superintelligence emerges. It examines the growth of intelligence, the forms and powers of superintelligence, and its strategic choices. We have to prepare now to avoid disasters later. Bostrom offers strategies to navigate the dangers and challenges it presents. Superintelligence examines the history of artificial intelligence and the development of technological growth. The book describes how AI is growing faster than its technological predecessors. It also looks at surveys of expert opinions regarding its future progress. Sam Altman, the co-founder of OpenAI, calls Superintelligence a must-read for anyone who cares about the future of humanity. He even included it on his list of the nine books he thinks everyone should read. This summary will delve into the fascinating and sometimes frightening world of superintelligence. It provides you with an engaging overview of Bostrom's key ideas.

About Nick Bostrom
Nick Bostrom is a Swedish philosopher and futurist. He is known for his groundbreaking work in artificial intelligence and its impact on humanity. Bostrom is a professor at the University of Oxford, where he founded the Future of Humanity Institute. In particular, he conducts research into how advanced technologies and AI can benefit and harm society. In addition to Superintelligence, Bostrom has authored other influential works, including Anthropic Bias: Observation Selection Effects in Science and Philosophy and Global Catastrophic Risks. His work has contributed to the ongoing discussion of humanity's future.
StoryShot #1: We Are Not Ready for Superintelligence
StoryShot #2: There Are Three Forms of Superintelligence
StoryShot #3: There Are Two Sources of Advantage for Digital Intelligence
StoryShot #4: Uncontrolled Superintelligence Poses Significant Risks to Society
What happens when machines take over the world? Have you ever thought about that? With the emergence of incredible tools like ChatGPT, you can get a glimpse of what AI is capable of, the kind of glimpse that makes you think, "Will artificial intelligence benefit or destroy us?" In today's mind-blowing episode, I am joined by a fantastic individual who is not at all threatened by artificial intelligence. In fact, he is perfectly confident in his ability to leverage AI to his advantage; he even trains virtual assistants to maximize the output of AI. So join me in welcoming Savage Marketer Jeff J. Hunter! Tune in as we discuss real-world applications of AI, especially how it can help you create a super powerful business.

About the Guest:
Jeff J. Hunter is the founder and CEO of VA Staffer, a virtual assistant staffing agency that helps entrepreneurs and startups build and scale remote teams to dominate their brand category. He also owns Savage Marketing Academy. In addition, Jeff is the host of the "Savage Marketer Podcast" and the author of the "Zero to Hero Crypto Guide" available on Amazon.

"Isn't it interesting, guys, that in a world being taken over by AI, it's never been more important and valuable to be human?" - Jeff J. Hunter

In This Episode:
- Discover some practical and exciting applications of AI (it can even write a book for your kids!) and top tips when using OpenAI for the first time
- How to set yourself apart from the competition who leverage AI to come up with a massive amount of content
- How do you train AI to write like you?
- How to use AI in ways other people aren't using it (out-of-the-box ideas for using AI to your advantage)
- What SMART businesses should do to get the optimal benefits of AI
And much more...

This Episode is Brought to You by:
- Working Vacation - https://workingvacation.com/

Resources:
- Introducing ChatGPT - https://openai.com/
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom - https://www.amazon.com/Superintelligence-Nick-Bostrom-audiobook/dp/B00LPMFE9Y/ref=tmm_aud_swatch_0?_encoding=UTF8&qid=&sr=
- Book by Jeff J. Hunter: https://www.amazon.com/Zero-Hero-Crypto-Guide-Hunter/dp/B0B92HPKM9

Connect with Jeff J. Hunter:
- Website - https://jeffjhunter.com/

Connect with Adam Lyons:
- Website - https://thesmartblueprint.com/
- LinkedIn - https://uk.linkedin.com/in/adam-lyons-210670166
- Facebook -
Nick Bostrom https://nickbostrom.com/ Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002).
Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list. Bostrom is the author of over 200 publications, and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence". Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects. In his book Superintelligence, Professor Bostrom asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? https://www.fhi.ox.ac.uk/ https://nickbostrom.com/ Related Episodes: David Chalmers elaborates on the simulation hypothesis, virtual reality, and his philosophy of consciousness. https://youtu.be/ywjbbQXAFic Sabine Hossenfelder on Existential Physics: https://youtu.be/g00ilS6tBvs Connect with me:
"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have, for example, in developing more efficient clean energy technologies. There are efforts on the way now to try to get fusion reactors to work using AI tools, to sort of guide the containment of the plasma. Recent work with AlphaFold by DeepMind, which is a subsidiary of Alphabet, they're working on developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential from AI to the environment are manyfold and will increase over time."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I think maybe the critical issue here is the governance aspect which I think is one of the core sources of many of the greatest threats to human civilization on the planet. The difficulties we have in effectively tackling these global governance challenges. So global warming, I think, at its core is really a problem of the global commons. So we all share the same atmosphere and the same global climate, ultimately. And we have a certain reservoir, the environment can absorb a certain amount of carbon dioxide without damage, but if we put out too much, then we together face a negative consequence."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."
"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
The Creative Process in 10 minutes or less · Arts, Culture & Society
"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots."I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots."If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Mike and Ting talk about uploading two million videos to YouTube, Autopilot and Avatar 2. NOTE: this is the pre-episode warm-up chat for Now Playing and Not-E3 2022. Contact us: @lostlevelsclub or mike.and.ting@lostlevels.club Show Notes: Pre-Chat The Man That Uploaded 2 Million Videos To YouTube (YouTube) Roel Van de Paar's YouTube Stats (Summary Profile) - Social Blade Stats 1,500 Slot Machines Walk into a Bar: Adventures in Quantity Over Quality (YouTube) YouTube gives dislikes the thumbs-down, hides public counts DALL-E-2 Imagen: Text-to-Image Diffusion Models DALL-E Mini Elon Musk's regulatory woes mount as U.S. moves closer to recalling Tesla's self-driving software Superintelligence: Paths, Dangers, Strategies - Wikipedia The Avatar 2 Trailer Saw More First-Day Views Than Trailers for the Recent Star Wars Movies Avatar: The Way of Water - Wikipedia
Today we investigate haunted schools in Arizona, and then we try to raise an owl but instead create a monster! Patreon https://www.patreon.com/user?u=18482113 MERCH STORE!!! https://tinyurl.com/y8zam4o2 Amazon Wish List https://www.amazon.com/hz/wishlist/ls/28CIOGSFRUXAD?ref_=wl_share Help Promote Dead Rabbit! Dual Flyer https://i.imgur.com/OhuoI2v.jpg "As Above" Flyer https://i.imgur.com/yobMtUp.jpg “Alien Flyer” By TVP VT U https://imgur.com/gallery/aPN1Fnw Links: EP 315 - The Headless Man On A Haunted Ship (Ghost Ship episode, one of my favorites!) https://deadrabbitradio.libsyn.com/ep-315-the-headless-man-on-a-haunted-ship EP 189 - Could You Be Tricked Into Destroying The World? (A.I. In A Box episode) https://deadrabbitradio.libsyn.com/ep-189-could-you-be-tricked-into-destroying-the-world Shadowlands Arizona (Hayden High School Mesa Arizona, Ghost football player, Ghost Cheerleader, Ghost Couple, Ghost Child) http://www.theshadowlands.net/places/arizona.htm Hauntings and ghosts along the Copper Corridor https://www.copperarea.com/pages/hauntings-ghosts-along-copper-corridor/ Highland Junior High School - Mesa AZ Real Haunted Place https://www.azhauntedhouses.com/real-haunt/highland-junior-high-school.html After 39 years, Powell Junior High School says goodbye https://www.eastvalleytribune.com/local/after-39-years-powell-junior-high-school-says-goodbye/article_f5bade6c-69be-11df-86d2-001cc4c03286.html Powell Junior High School https://www.facebook.com/Powell-Junior-High-School-168690903217859/ The Haunted California Toys "R" Us That Hosted a Séance https://www.mentalfloss.com/article/651670/haunted-toys-r-us Haunted Toys ‘R' Us https://www.snopes.com/fact-check/haunted-toys-r-us/ Haunted Toys R Us in Sunnyvale, California pt. 1 https://www.youtube.com/watch?v=jN5vEX3sYGU&ab_channel=StayinAlive55 Haunted Toys R Us in Sunnyvale, California pt. 2 https://www.youtube.com/watch?v=ABB9B_O2SXM&ab_channel=StayinAlive55 Superintelligence: Paths, Dangers, Strategies https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies The unfinished fable of the sparrows https://blog.oup.com/2014/08/unfinished-fable-sparrows-superintelligence/ Nick Bostrom https://en.wikipedia.org/wiki/Nick_Bostrom The CEO of Google DeepMind is worried that tech giants won't work together at the time of the intelligence explosion https://www.businessinsider.com/google-deepmind-demis-hassabis-worries-ai-superintelligence-coordination-2017-2 Stephen Hawking and Elon Musk backed 23 principles to ensure humanity benefits from AI https://www.businessinsider.com/stephen-hawking-elon-musk-backed-asimolar-ai-principles-for-artificial-intelligence-2017-2 Listen to the daily podcast anywhere you listen to podcasts! ------------------------------------------------ Logo Art By Ash Black Opening Song: "Atlantis Attacks" Closing Song: "Bella Royale" Music By Simple Rabbitron 3000 created by Eerbud Thanks to Chris K, Founder Of The Golden Rabbit Brigade Dead Rabbit Archivist Some Weirdo On Twitter AKA Jack YouTube Champ Stewart Meatball The Haunted Mic Arm provided by Chyme Chili Thanks to Fabio N!
Pinterest https://www.pinterest.com/basque5150/jason-carpenter-hood-river/ http://www.DeadRabbit.com Email: DeadRabbitRadio@gmail.com Twitter: @DeadRabbitRadio Facebook: www.Facebook.com/DeadRabbitRadio TikTok: https://www.tiktok.com/@deadrabbitradio Jason Carpenter PO Box 1363 Hood River, OR 97031 Paranormal, Conspiracy, and True Crime news as it happens! Jason Carpenter breaks the stories they'll be talking about tomorrow, assuming the world doesn't end today. All Contents Of This Podcast Copyright Jason Carpenter 2018 – 2022
Why would advanced AI systems pose an existential risk, and what would it look like to develop safer systems? In this episode, I interview Paul Christiano about his views of how AI could be so dangerous, what bad AI scenarios could look like, and what he thinks about various techniques to reduce this risk. Topics we discuss, and timestamps (due to mp3 compression, the timestamps may be tens of seconds off): 00:00:38 - How AI may pose an existential threat 00:13:36 - AI timelines 00:24:49 - Why we might build risky AI 00:33:58 - Takeoff speeds 00:51:33 - Why AI could have bad motivations 00:56:33 - Lessons from our current world 01:08:23 - "Superintelligence" 01:15:21 - Technical causes of AI x-risk 01:19:32 - Intent alignment 01:33:52 - Outer and inner alignment 01:43:45 - Thoughts on agent foundations 01:49:35 - Possible technical solutions to AI x-risk 01:49:35 - Imitation learning, inverse reinforcement learning, and ease of evaluation 02:00:34 - Paul's favorite outer alignment solutions 02:01:20 - Solutions researched by others 02:06:13 - Decoupling planning from knowledge 02:17:18 - Factored cognition 02:25:34 - Possible solutions to inner alignment 02:31:56 - About Paul 02:31:56 - Paul's research style 02:36:36 - Disagreements and uncertainties 02:46:08 - Some favorite organizations 02:48:21 - Following Paul's work The transcript Paul's blog posts on AI alignment Material that we mention: Cold Takes - The Most Important Century Open Philanthropy reports on: Modeling the human trajectory The computational power of the human brain AI timelines (draft) Whether AI could drive explosive economic growth Takeoff speeds Superintelligence: Paths, Dangers, Strategies Wei Dai on metaphilosophical competence: Two neglected problems in human-AI safety The argument from philosophical difficulty Some thoughts on metaphilosophy AI safety via debate Iterated distillation and amplification Scalable agent alignment via reward modeling: a research direction Learning the prior Imitative generalisation (AKA 'learning the prior') When is unaligned AI morally valuable?
As artificial intelligence gets more and more powerful, the need becomes greater to ensure that machines do the right thing. But what does that even mean? Brian Christian joins Vasant Dhar in episode 13 of Brave New World to discuss, as the title of his new book goes, the alignment problem. Useful resources: 1. Brian Christian's homepage. 2. The Alignment Problem: Machine Learning and Human Values -- Brian Christian. 3. Algorithms to Live By: The Computer Science of Human Decisions -- Brian Christian and Tom Griffiths. 4. The Most Human Human -- Brian Christian. 5. How Social Media Threatens Society -- Episode 8 of Brave New World (w Jonathan Haidt). 6. Are We Becoming a New Species? -- Episode 12 of Brave New World (w Molly Crockett). 7. The Nature of Intelligence -- Episode 7 of Brave New World (w Yann LeCun). 8. Some Moral and Technical Consequences of Automation -- Norbert Wiener. 9. Superintelligence: Paths, Dangers, Strategies -- Nick Bostrom. 10. Human Compatible: AI and the Problem of Control -- Stuart Russell. 11. OpenAI. 12. Center for Human-Compatible AI. 13. Concrete Problems in AI Safety -- Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané. 14. Machine Bias -- Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner. 15. Inherent Trade-Offs in the Fair Determination of Risk Scores -- Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan. 16. Algorithmic Decision Making and the Cost of Fairness -- Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, Aziz Huq. 17. Predictions Put Into Practice -- Jessica Saunders, Priscillia Hunt, John S. Hollywood. 18. An Engine, Not a Camera: How Financial Models Shape Markets -- Donald MacKenzie. 19. An Anthropologist on Mars -- Oliver Sacks. 20. Deep Reinforcement Learning from Human Preferences -- Paul F Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, Dario Amodei for OpenAI & DeepMind.
That's it for now on the topic of superintelligence. Today we discuss a few more possible solutions that come to mind and consider whether we want such an intelligence or not and, much worse, whether we even have a choice. We look forward to hearing your opinion on the topic; feel free to write to us at info@northpro.consulting. Want to read the book yourself? This way: Superintelligence: Paths, Dangers, Strategies https://smile.amazon.de/dp/0198739834/ref=cm_sw_r_cp_api_glc_i_65BQ441143J0MCY0H55Q
What is superintelligence? Can a superintelligence be controlled? Why aren't people (especially academics, computer scientists, and companies) more worried about superintelligence alignment problems? Is it possible to determine whether or not an AI is conscious? Do today's neural networks experience some form of consciousness? Are humans general intelligences? How do artificial superintelligence and artificial general intelligence differ? What sort of threats do malevolent actors pose over and above those posed by the usual problems in AI safety? Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Science and Engineering at the University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. Dr. Yampolskiy's main areas of interest are Artificial Intelligence Safety and Cybersecurity. Follow him on Twitter at @romanyam. Further reading: Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
In this video I will talk about the Superintelligence: Paths, Dangers, Strategies book by Nick Bostrom. It's a book about the dangers of superintelligence. Artificial intelligence and superintelligence will play a big part in our future, so it's crucial to understand the risks and opportunities on a deeper level. This book offers an insight into just that. Twitter: https://twitter.com/AttilaonthWorld YouTube channel: https://www.youtube.com/channel/UCADpTO2CJBS7HNudJu9-nvg
This talk is part of the Marketplace Lecture Series, an attempt to bridge the gap between academia and industry, hosted by @decafquest. First Lecture: Mahmoud Dgheim | Towards a More Responsible Tech Development Bio: Mahmoud Dgheim (@dgheim) is a Lebanese blockchain evangelist and Co-founder & CEO at Castalise, a platform introducing the potential behind blockchain and privacy-preserving technologies to the rapidly growing world of data analytics. https://twitter.com/Decafquest/status/1322959710270738438 Relevant links: Ko-Fi: https://ko-fi.com/decafquest Gumroad: https://gumroad.com/l/ohYop Twitter: https://twitter.com/Decafquest For more info: info@decafquest.com Schedule of upcoming talks: https://twitter.com/Decafquest/status/1319985717032964102 Suggested readings: Superintelligence: Paths, Dangers, Strategies Book by Nick Bostrom Life 3.0 Book by Max Tegmark The Singularity Is Near: When Humans Transcend Biology Book by Ray Kurzweil Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World Book by Alex Tapscott and Don Tapscott Permanent Record Book by Edward Snowden Andrew Trask privacy-preserving AI https://youtu.be/4zrU54VIK6k
This edition of ALL CAPS is about what QAnon is and why Twitter is cracking down so hard on everything related to it. We talk about the biggest "hacker attack" in Twitter's history and how lucky we all were that it was merely used for a simple bitcoin scam... or was it? We also talk about Kanye West and why he has been behaving so oddly on social media over the past few weeks. In addition, we briefly discuss the ongoing case of nude images being shared without consent on social platforms such as Reddit. It's a "what-happened-on-the-internet-while-we-were-on-a-slightly-too-short-summer-holiday?" kind of episode. The guests are: Kevin Shakir, host of the debate programme Touché on Radio Loud and an expert on all things QAnon. A nerd, frankly. Mads Viktor, host of the podcast 56k, which covers tech, social media and internet culture. He also hosts Soundvenue's podcast Standard, which is about hip-hop. LINKS: Rabbit Hole (podcast from the New York Times): https://www.nytimes.com/column/rabbit-hole Daily life in North Korea - "My Brothers and Sisters in the North": https://www.youtube.com/watch?v=IBqeC8ihsO8 Videos about GPT-3 (AI): https://www.youtube.com/results?search_query=gpt-3 Superintelligence (book by Nick Bostrom): https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies Our Final Invention (book by James Barrat): https://en.wikipedia.org/wiki/Our_Final_Invention 56k (podcast): https://www.56kpodcast.dk/ Standard (podcast): https://soundvenue.com/podcasts/standard Touché on Radio Loud: https://radioloud.dk/shows/touche-2/ ALL CAPS is a weekly show about what's happening right now on social media: influencers, YouTube drama, shitstorms, memes, troll armies and everything else. In ALL CAPS, Anton Gade-Nielsen takes internet culture and humour seriously as he, together with guests and listeners, explores where tendencies and trends arise and how we navigate them. ALL CAPS is produced for LOUD by ENIGMA – museum for post, tele & kommunikation.
Nigel Inkster, IISS Senior Adviser, joins Dr Kori Schake for this episode of Sounds Strategic. With the rivalry between the US and China set to become the defining feature of international politics in the 21st century, Nigel Inkster gives context to the geopolitical competition that lies at its heart. Nigel emphasises the important role technology will play in allowing great powers to project their power internationally in the future. He argues that this brings new significance to the Huawei affair and wider US–China trade disputes. Fundamentally, China views the post-World War II international order as no longer fit for purpose. It is therefore looking to foster an alternative world order that is more in line with its own political objectives and priorities. Nigel and Kori discuss what the possible features of this Sino-centric world order would be and the profound importance technological innovation around 5G will play in determining who will lead the world in the next century. Recommended Reading: Richard McGregor, The Party: The Secret World of China's Communist Rulers (London: Penguin Books, 2012); Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2017); Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (London: Penguin Books, 2018). Date of recording: 8 October 2019. Sounds Strategic is recorded and produced at the IISS in London. Theme music: ‘Safety in Numbers' by We Were Promised Jetpacks. See acast.com/privacy for privacy and opt-out information.
What is "the human" supposed to be, anyway? In this episode I speak with Julia Grillmayr about perspectives on the posthuman: from transhumanism's attempt to pin the human down through optimisation, to the embracing generosity of critical posthumanism; from fantasies of immortality to delight in one's own body as compost. Interesting & relevant links: by and about Julia Grillmayr: Julia Grillmayr's homepage https://scifi-fafo.com/ Julia's podcast "Superscience me" on Radio Orange https://o94.at/programm/sendereihen/id/1462404 Julia on Twitter https://twitter.com/jugrill?lang=en On critical posthumanism, compost and the Chthulucene: Rosi Braidotti's homepage https://rosibraidotti.com/ Rosi Braidotti on Academia https://uu.academia.edu/RosiBraidotti Lecture by Rosi Braidotti @ Aarhus University https://www.youtube.com/watch?v=UEMLBSRh5Dk&list=PLRizMRYsWFGNnhJhgzUVMtqShsKq3WnbI Donna Haraway at Monoskop https://monoskop.org/Donna_Haraway Donna Haraway, "A Manifesto for Cyborgs" https://monoskop.org/images/4/4c/Haraway_Donna_1985_A_Manifesto_for_Cyborgs_Science_Technology_and_Socialist_Feminism_in_the_1980s.pdf Donna Haraway, "Staying with the trouble" https://www.dukeupress.edu/staying-with-the-trouble German translation of "Staying with the trouble", "Unruhig bleiben" (translated by Karin Harrasser) https://www.campus.de/buecher-campus-verlag/wissenschaft/soziologie/unruhig_bleiben-14845.html Lecture by Donna Haraway & Anna Tsing https://www.youtube.com/watch?v=FkZSh8Wb-t8&list=PLRizMRYsWFGNnhJhgzUVMtqShsKq3WnbI&index=29 Documentary about Donna Haraway by Fabrizio Terranova https://earthlysurvival.org/ Anna L. Tsing at Monoskop https://monoskop.org/Anna_L._Tsing Katherine Hayles's homepage http://nkhayles.com/ On transhumanism: Nick Bostrom's homepage https://nickbostrom.com/ Future of Humanity Institute (directed by Bostrom) https://www.fhi.ox.ac.uk/ The book "Superintelligence" by Nick Bostrom https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies Ray Kurzweil (Wiki) https://en.wikipedia.org/wiki/Ray_Kurzweil Kurzweil online library https://www.kurzweilai.net/about-the-kurzweil-library Podcast "Algocracy and the Transhumanist Project" https://algocracy.wordpress.com/blog/ Hans Moravec (Wiki) https://en.wikipedia.org/wiki/Hans_Moravec Natasha Vita-More's homepage https://natashavita-more.com/ Max More (Wiki) https://en.wikipedia.org/wiki/Max_More Ihab Hassan https://transhumanism.fandom.com/wiki/Ihab_Hassan Also mentioned in the episode: Isabelle Stengers (Wiki) https://de.wikipedia.org/wiki/Isabelle_Stengers Michel Foucault, "Die Ordnung der Dinge" (The Order of Things) https://www.suhrkamp.de/buecher/die_ordnung_der_dinge-michel_foucault_27696.html The Anthropocene (Wiki) https://de.wikipedia.org/wiki/Anthropoz%C3%A4n If you enjoy Future Histories, please consider supporting the show on Patreon: https://www.patreon.com/join/FutureHistories? Write to me at future_histories@protonmail.com and join the discussion on Twitter (#FutureHistories): https://twitter.com/FutureHpodcast or on Reddit https://www.reddit.com/r/FutureHistories/ www.futurehistories.today
Sam Harris speaks with Nick Bostrom about the problem of existential risk. They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we’re living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Website: nickbostrom.com Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.
In this podcast, Cody Gough and Ashley Hamer discuss the following stories from Curiosity.com to help you get smarter and learn something new in just a few minutes: SKILLSHARE: Two months of unlimited access to more than 20 thousand classes for just 99 cents Inversion Is a Thinking Technique Used By Billionaires Are We Living in a Computer Simulation? That's a Heated Debate This Nonprofit is Using Old Cell Phones to Save the Rainforest To read more from the man who popularized the simulation hypothesis, check out "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom, which is all about what might happen when the robots surpass us in intelligence. We handpick reading recommendations we think you may like. If you choose to make a purchase, Curiosity will get a share of the sale. Want to support our show? Register for the 2018 Podcast Awards and nominate Curiosity Daily to win for People's Choice, Education, and Science & Medicine. Just register at the link and select Curiosity Daily from the drop-down menus (no need to pick nominees in every category): https://curiosity.im/podcast-awards-2018 Learn more about these topics and more on Curiosity.com, and download our 5-star app for Android and iOS. Then, join the conversation on Facebook, Twitter, and Instagram. Plus: Amazon smart speaker users, enable our Alexa Flash Briefing to learn something new in just a few minutes every day! See omnystudio.com/listener for privacy information.
Prof Nick Bostrom is widely respected as the premier academic thinker on topics related to strong artificial intelligence, transhumanism, and existential risks. His talks, books, and articles cover all of these topics, and his vocation involves bringing attention and critical thought to these most pressing human issues. He is founder and director of the Future of Humanity Institute at Oxford, and author of the new book "Superintelligence: Paths, Dangers, Strategies." In this episode, Nick and I explore the topic of identifying "existential" human risks (those that could wipe out life forever), and how individuals and groups might mediate these risks on a grand scale to better secure the flourishing of humanity in the coming decades and centuries. For More Information, Visit the HUB of Startups / Business in Emerging Technology. From Robotic Limbs to Getting Angel Investment, from Biotech to Intellectual Property: http://www.TechEmergence.com For more information on Nick Bostrom himself, please visit his personal website at: http://www.NickBostrom.com