Even the Ancient Greeks had to navigate business regulations, and the roots of such ideas are discussed in one of philosophy's greatest works—Plato's Republic. Join host Ben Jackson and IPA CEO Brian Tate as they delve into this classic text and its relevance today. In this episode, the team explores how The Republic can provide foundational ground rules for business conversations, the limits of philosophy in the practical world, and whether Plato's allegory of the cave holds insights into modern innovation and management. This is the first installment in our 2025 Book Club series. Our next book will be The Alignment Problem by Brian Christian, where we dive into the ethical and practical challenges of artificial intelligence. Special Offer for Podcast Listeners: Make sure to attend the Innovative Payments Conference in Washington, D.C., from April 29–May 1, 2025, to hear from top legislators, regulators, and industry experts. Use the promo code Podcast at checkout to get $25 off your registration fee. Recorded on March 6, 2025, this episode remains a timeless conversation about enduring ideas that continue to shape our understanding of innovation, ethics, and enterprise. Links and Contact Info: Read along and join the book club! Email Ben at bjackson@ipa.org. Conference details: Innovative Payments Conference
Bio Bala has rich experience in retail technology and process transformation. Most recently, he worked as a Principal Architect for Intelligent Automation, Innovation & Supply Chain in a global Fortune 100 retail corporation. Currently he works for a luxury brand as Principal Architect for Intelligent Automation providing technology advice for the responsible use of technology (Low Code, RPA, Chatbots, and AI). He is passionate about technology and spends his free time reading, writing technical blogs and co-chairing a special interest group with The OR Society. Interview Highlights 02:00 Mentors and peers 04:00 Community bus 07:10 Defining AI 08:20 Contextual awareness 11:45 GenAI 14:30 The human loop 17:30 Natural Language Processing 20:45 Sentiment analysis 24:00 Implementing AI solutions 26:30 Ethics and AI 27:30 Biased algorithms 32:00 EU AI Act 33:00 Responsible use of technology Connect Bala Madhusoodhanan on LinkedIn Books and references · https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html - NLP · https://www.theregister.com/2021/05/27/clearview_europe/ - Facial Technology Issue · https://www.designnews.com/electronics-test/apple-card-most-high-profile-case-ai-bias-yet - Apple Card story · https://www.ft.com/content/2d6fc319-2165-42fb-8de1-0edf1d765be3 - Data Centre growth · https://www.technologyreview.com/2024/02/06/1087793/what-babies-can-teach-ai/ · Independent Audit of AI Systems - · Home | The Alan Turing Institute · Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, Marco Iansiti & Karim R. Lakhani · AI Superpowers: China, Silicon Valley, and the New World, Kai-Fu Lee · The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, Mike Walsh · Human+Machine: Reimagining Work in the Age of AI, Paul R Daugherty, H. James Wilson · Superintelligence: Paths, Dangers, Strategies, Nick Bostrom · The Alignment Problem: How Can Artificial Intelligence Learn Human Values, Brian Christian · Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman · Wanted: Human-AI Translators: Artificial Intelligence Demystified, Geertrui Mieke De Ketelaere · The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond, Michio Kaku, Feodor Chin et al Episode Transcript Intro: Hello and welcome to the Agile Innovation Leaders podcast. I'm Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more – with actionable takeaways for you the listener. Ula Ojiaku So I have with me here, Bala Madhusoodhanan, who is a principal architect with a global luxury brand, and he looks after their RPA and AI transformation. So it's a pleasure to have you on the Agile Innovation Leaders podcast, Bala, thank you for making the time. Bala Madhusoodhanan It's a pleasure to have a conversation with the podcast and the podcast audience, Ula. I follow the podcast and there have been fantastic speakers in the past. So I feel privileged to join you on this conversation. Ula Ojiaku Well, the privilege is mine. So could you start off with telling us about yourself Bala, what have been the key points or the highlights of your life that have led to you being the Bala we know now? Bala Madhusoodhanan It's putting self into uncharted territory. 
So my background is mechanical engineering, and when I got the job, it was either you go into the mechanical engineering manufacturing side or the software side, which was slightly booming at that point of time, and obviously it was paying more, so I decided to take the software route, but eventually somewhere the path kind of overlapped. So from a mainframe background, started working on supply chain, and then came back to optimisation, tied back to manufacturing industry. Somewhere there is an overlap, but yeah, that was the first decision that probably got me here. The second decision was to work in a UK geography, rather than a US geography, which is again very strange to a lot of my peers. They generally go to Silicon Valley or East Coast, but I just took a choice to stay here for personal reasons. And then the third was like the mindset. I mean, I had over the last 15, 20 years, I had really good mentors, really good peers, so I always had their help to soundboard my crazy ideas, and I always try to keep a relationship ongoing. Ula Ojiaku What I'm hearing is, based on what you said, lots of relationships have been key to getting you to where you are today, both from mentors, peers. Could you expand on that? In what way? Bala Madhusoodhanan The technology is changing quite a lot, at least in the last 10 years. So if you look into pre-2010, there was no machine learning or it was statistics. People were just saying everything is statistics and accessibility to information was not that much, but post 2010, 2011, people started getting accessibility. Then there was a data buzz, big data came in, so there were a lot of opportunities where I could have taken a different career path, but every time I was in a dilemma which route to take, I had someone with whom either I have worked or who was my team lead or manager to guide me to tell me, like, take emotion out of the decision making and think in a calm mind, because you might jump into something and you might like it, you might not like it, you should not regret it. So again, over the course of so many such decisions, my cognitive mind has also started thinking about it. So those conversations really help. And again, collective experience. If you look into the decision making, it's not just my decision, I'm going through conversations that I had with people where they have applied their experience, so it's not just me or just not one situation, and to understand the why behind that, and that actually helps. In short, it's like a collection of conversations that I had with peers. A few of them are visionary leaders, they are good readers. So they always had a good insight on where I should focus, where I shouldn't focus, and of late recently, there has been a community bus. So a lot of things are moving to open source, there is a lot of community exchange of conversation, the blogging has picked up a lot. So, connecting to those parts also gives you a different dimension to think about. Ula Ojiaku So you said community bus, some of the listeners or people who are watching the video might not understand what you mean by the community bus. Are you talking about like meetups or communities that come around to discuss shared interests? Bala Madhusoodhanan If you are very much specifically interested in AI, or you are specifically interested in power platform or a low code platform, there are a lot of content creators on those topics. You can go to YouTube, LinkedIn, and you get a lot of information about what's happening. 
They do a lot of hackathons, again, you need to invest time in all these things. If you don't, then you are basically missing the boat, but there are various channels like hackathon or meetup groups, or, I mean, it could be us like a virtual conversation like you and me, we both have some passionate topics, that's why we resonate and we are talking about it. So it's all about you taking an initiative, you finding time for it, and then you have tons and tons of information available through community or through conferences or through meetup groups. Ula Ojiaku Thanks for clarifying. So, you said as well, you had a collection of conversations that helped you whenever you were at a crossroad, some new technology or something emerges or there's a decision you had to make and checking in with your mentors, your peers and your personal Board of Directors almost, that they give you guidance. Now, looking back, would you say there were some turns you took that knowing what you know now, you would have done differently? Bala Madhusoodhanan I would have liked to study more. That is the only thing, because sometimes the educational degree, even though without a practical knowledge has a bigger advantage in certain conversation, otherwise your experience and your content should speak for you and it takes a little bit of effort and time to get that trust among leaders or peers just to, even them to trust saying like, okay, this person knows what he's talking about. I should probably trust rather than, someone has done a PhD and it's just finding the right balance of when I should have invested time in continuing my education, if I had time, I would have gone back two years and did everything that I had done, like minus two years off-set it by two years earlier. It would have given me different pathways. That is what I would think, but again, it's all constraints. I did the best at that point in time with whatever constraints I had. So I don't have any regret per se, but yeah, if there is a magic wand, I would do that. Ula Ojiaku So you are a LinkedIn top voice from AI. How would you define AI, artificial intelligence? Bala Madhusoodhanan I am a bit reluctant to give a term Artificial Intelligence. It's in my mind, it is Artificial Narrow Intelligence, it's slightly different. So let me start with a building block, which is machine learning. So machine learning is like a data labeller. You go to a Tesco store, you read the label, you know it is a can of soup because you have read the label, your brain is not only processing that image, it understands the surrounding. It does a lot of things when you pick that can of soup. You can't expect that by just feeding one model to a robot. So that's why I'm saying like it's AI is a bit over glorified in my mind. It is artificial narrow intelligence. What you do to automate certain specific tasks using a data set which is legal, ethical, and drives business value is what I would call machine learning, but yeah, it's just overhyped and heavily utilised term AI. Ula Ojiaku You said, there's a hype around artificial intelligence. So what do you mean by that? And where do you see it going? Bala Madhusoodhanan Going back to the machine learning definition that I said, it's basically predicting an output based on some input. That's as simple as what we would say machine learning. The word algorithm is basically something like a pattern finder. 
What you're doing is you are giving a lot of data, which is properly labelled, which has proper diversity of information, and there are multiple algorithms that can find patterns. The cleverness or engineering mind that you bring in is to select which pattern or which algorithm you would like to do for your use case. Now you're channelling the whole machine learning into one use case. That's why I'm going with the term narrow intelligence. Computers can do brilliant jobs. So you ask computers to do like a Rubik's cubes solving. It will do it very quickly because the task is very simple and it is just doing a lot of calculation. You give a Rubik's cube to a kid. It has to apply it. The brain is not trained enough, so it has to cognitively learn. Maybe it will be faster. So anything which is just pure calculation, pure computing, if the data is labelled properly, you want to predict an outcome, yes, you can use computers. One of the interesting videos that I showed in one of my previous talks was a robot trying to walk across the street. This is in 2018 or 19. The first video was basically talking about a robot crossing a street and there were vehicles coming across and the robot just had a headbutt and it just fell off. Now a four year old kid was asked to walk and it knew that I have to press a red signal. So it went to the signal stop. It knew, or the baby knew that I can only walk when it is green. And then it looks around and then walks so you can see the difference – a four year old kid has a contextual awareness of what is happening, whereas the robot, which is supposed to be called as artificial intelligence couldn't see that. So again, if you look, our human brains have been evolved over millions of years. There are like 10 billion neurons or something, and it is highly optimised. So when I sleep, there are different set of neurons which are running. When I speak to you, my eyes and ears are running, my motion sensor neurons are running, but these are all highly optimised. So the mother control knows how much energy should be sent on which neuron, right, whereas all these large language models, there is only one task. You ask it, it's just going to do that. It doesn't have that intelligence to optimise. When I sleep, maybe 90 percent of my neurons are sleeping. It's getting recharged. Only the dream neurons are working. Whereas once you put a model live, it doesn't matter, all the hundred thousand neurons would run. So, yeah, it's in very infancy state, maybe with quantum computing, maybe with more power and better chips things might change, but I don't see that happening in the next five to 10 years. Ula Ojiaku Now, what do you say about Gen AI? Would you also classify generative AI as purely artificial neural intelligence? Bala Madhusoodhanan The thing with generative AI is you're trying to generalise a lot of use cases, say ChatGPT, you can throw in a PDF, you can ask something, or you can say, hey, can you create a content for my blog or things like that, right? Again, all it is trying to do is it has some historical content with which it is trying to come up with a response. So the thing that I would say is humans are really good with creativity. If a problem is thrown at a person, he will find creative ways to solve it. The tool with which we are going to solve might be a GenAI tool, I don't know, because I don't know the problem, but because GenAI is in a hype cycle, every problem doesn't need GenAI, that's my view. 
So there was an interesting research which was done by someone in Montreal University. It talks about 10 of the basic tasks like converting text to text or text to speech and with a generative AI model or multiple models, because you have a lot of vendors providing different GenAI models, and then they went with task specific models and the thing that they found was the task specific models were cheap to run, very, very scalable and robust and highly accurate, right. Whereas GenAI, if, when you try to use it and when it goes into a production ready or enterprise ready and if it is used by customers or third party, which are not part of your ecosystem, you are putting yourself in some kind of risk category. There could be a risk of copyright issues. There could be a risk of IP issues. There could be risk of not getting the right consent from someone. I can say, can you create an image of a podcaster named Ula? You never know because you don't remember that one of your photos on Google or Twitter or somewhere is not set as private. No one has come and asked you saying, I'm using this image. And yeah, it's finding the right balance. So even before taking the technology, I think people should think about what problem are they trying to solve? In my mind, AI or artificial intelligence, or narrow intelligence can have two buckets, right. The first bucket is to do with how can I optimise the existing process? Like there are a lot of things that I'm doing, is there a better way to do it? Is there an efficient way to do it? Can I save time? Can I save money? Stuff like that. So that is an optimisation or driving efficiency lever. Other one could be, I know what to do. I have a lot of data, but I don't have infrastructure or people to do it, like workforce augmentation. Say, I have 10 data entry persons who are graduate level. Their only job is to review the receipts or invoices. I work in FCA. I have to manually look at it, approve it, and file it, right? Now it is a very tedious job. So all you are doing is you are augmenting the whole process with an OCR engine. So OCR is Optical Character Recognition. So there are models, which again, it's a beautiful term for what our eyes do. When we travel somewhere, we get an invoice, we exactly know where to look, right? What is the total amount? What is the currency I have paid? Have they taken the correct credit card? Is my address right? All those things, unconsciously, your brain does it. Whereas our models given by different software vendors, which have trained to capture these specific entities which are universal language, to just pass, on data set, you just pass the image on it. It just picks and maps that information. Someone else will do that job. But as part of your process design, what you would do is I will do the heavy lifting of identifying the points. And I'll give it to someone because I want someone to validate it. It's human at the end. Someone is approving it. So they basically put a human in loop and, human centric design to a problem solving situation. That's your efficiency lever, right? Then you have something called innovation level - I need to do something radical, I have not done this product or service. Yeah, that's a space where you can use AI, again, to do small proof of concepts. One example could be, I'm opening a new store, it's in a new country, I don't know how the store layout should look like. These are my products. This is the store square footage. Can you recommend me the best way so that I can sell through a lot? 
Now, a visual merchandising team will have some ideas on where the things should be, they might give that prompt. Those texts can be converted into image. Once you get the base image, then it's human. It's us. So it will be a starting point rather than someone implementing everything. It could be a starting point. But can you trust it? I don't know. Ula Ojiaku And that's why you said the importance of having a human in the loop. Bala Madhusoodhanan Yeah. So the human loop again, it's because we humans bring contextual awareness to the situation, which machine doesn't know. So I'll tie back this to the NLP. So Natural Language Processing, it has two components, so you have natural language understanding and then you have natural language generation. When you create a machine learning model, all it is doing is, it is understanding the structure of language. It's called form. I'm giving you 10,000 PDFs, or you're reading a Harry Potter book. There is a difference between you reading a Harry Potter book and the machine interpreting that Harry Potter book. You would have imagination. You will have context of, oh, in the last chapter, we were in the hilly region or in a valley, I think it will be like this, the words like mist, cold, wood. You started already forming images and visualising stuff. The machine doesn't do that. Machine works on this is the word, this is a pronoun, this is the noun, this is the structure of language, so the next one should be this, right? So, coming back to the natural language understanding, that is where the context and the form comes into play. Just think of some alphabets put in front of you. You have no idea, but these are the alphabet. You recognise A, you recognise B, you recognise the word, but you don't understand the context. One example is I'm swimming against the current. Now, current here is the motion of water, right? My current code base is version 01. I'm using the same current, right? The context is different. So interpreting the structure of language is one thing. So, in natural language understanding, what we try to do is we try to understand the context. NLG, Natural Language Generation, is basically how can I respond in a way where I'm giving you an answer to your query. And this combined is NLP. It's a big field, there was a research done, the professor is Emily Bender, and she one of the leading professors in the NLP space. So the experiment was very funny. It was about a parrot in an island talking to someone, and there was a shark in between, or some sea creature, which basically broke the connection and was listening to what this person was saying and mimicking. Again, this is the problem with NLP, right? You don't have understanding of the context. You don't put empathy to it. You don't understand the voice modulation. Like when I'm talking to you, you can judge what my emotion cues are, you can put empathy, you can tailor the conversation. If I'm feeling sad, you can put a different spin, whereas if I'm chatting to a robot, it's just going to give a standard response. So again, you have to be very careful in which situation you're going to use it, whether it is for a small team, whether it is going to be in public, stuff like that. Ula Ojiaku So that's interesting because sometimes I join the Masters of Scale strategy sessions and at the last one there was someone whose organisational startup was featured and apparently what their startup is doing is to build AI solutions that are able to do sentiment analysis. 
And I think some of these, again, in their early stages, but some of these things are already available to try to understand the tone of voice, the words they say, and match it with maybe the expression and actually can transcribe virtual meetings and say, okay, this person said this, they looked perplexed or they looked slightly happy. So what do you think about that? I understand you're saying that machines can't do that, but it seems like there are already organisations trying to push the envelope towards that direction. Bala Madhusoodhanan So the example that you gave, sentiment of the conversation, again, it is going by the structure or the words that I'm using. I am feeling good. So good, here is positive sentiment. Again, for me the capability is slightly overhyped, the reason being is it might do 20 percent or 30 percent of what a human might do, but the human is any day better than that particular use case, right? So the sentiment analysis typically works on the sentiment data set, which would say, these are the certain proverbs, these are the certain types of words, this generally referred to positive sentiment or a good sentiment or feel good factor, but the model is only good as good as the data is, right? So no one is going and constantly updating that dictionary. No one is thinking about it, like Gen Z have a different lingo, millennials had a different lingo. So, again, you have to treat it use case by use case, Ula. Ula Ojiaku At the end of the day, the way things currently are is that machines aren't at the place where they are as good as humans. Humans are still good at doing what humans do, and that's the key thing. Bala Madhusoodhanan Interesting use case that I recently read probably after COVID was immersive reading. So people with dyslexia. So again, AI is used for good as well, I'm not saying it is completely bad. So AI is used for good, like, teaching kids who are dyslexic, right? Speech to text can talk, or can translate a paragraph, the kid can hear it, and on the screen, I think one note has an immersive reader, it actually highlights which word it is, uttering into the ears and research study showed that kids who were part of the study group with this immersive reading audio textbook, they had a better grasp of the context and they performed well and they were able to manage dyslexia better. Now, again, we are using the technology, but again, kudos to the research team, they identified a real problem, they formulated how the problem could be solved, they were successful. So, again, technology is being used again. Cancer research, they invest heavily, in image clustering, brain tumours, I mean, there are a lot of use cases where it's used for good, but then again, when you're using it, you just need to think about biases. You need to understand the risk, I mean, everything is risk and reward. If your reward is out-paying the minimum risk that you're taking, then it's acceptable. Ula Ojiaku What would you advise leaders of organisations who are considering implementing AI solutions? What are the things we need to consider? Bala Madhusoodhanan Okay. So going back to the business strategy and growth. So that is something that the enterprises or big organisations would have in mind. Always have your AI goals aligned to what they want. So as I said, there are two buckets. One is your efficiency driver, operational efficiency bucket. The other one is your innovation bucket. Just have a sense check of where the business wants to invest in. 
Just because AI is there doesn't mean you have to use it right. Look into opportunities where you can drive more values. So that would be my first line of thought. The second would be more to do with educating leaders about AI literacy, like what each models are, what do they do? What are the pitfalls, the ethical awareness about use of AI, data privacy is big. So again, that education is just like high level, with some examples on the same business domain where it has been successful, where it has been not so successful, what are the challenges that they face? That's something that I would urge everyone to invest time in. I think I did mention about security again, over the years, the practice has been security is always kept as last. So again, I was fortunate enough to work in organisations where security first mindset was put in place, because once you have a proof of value, once you show that to people, people get excited, and it's about messaging it and making sure it is very secured, protecting the end users. So the third one would be talking about having secure first design policies or principles. Machine learning or AI is of no good if your data quality is not there. So have a data strategy is something that I would definitely recommend. Start small. I mean, just like agile, you take a value, you start small, you realise whether your hypothesis was correct or not, you monitor how you performed and then you think about scale just by hello world doesn't mean that you have mastered that. So have that mindset, start small, monitor, have constant feedback, and then you think about scaling. Ula Ojiaku What are the key things about ethics and AI, do you think leaders should be aware of at this point in time? Bala Madhusoodhanan So again, ethical is very subjective. So it's about having different stakeholders to give their honest opinion of whether your solution is the right thing to do against the value of the enterprise. And it's not your view or my view, it's a consent view and certain things where people are involved, you might need to get HR, you might need to get legal, you might need to get brand reputation team to come and assist you because you don't understand the why behind certain policies were put in place. So one is, is the solution or is the AI ethical to the core value of the enterprise? So that's the first sense check that you need to do. If you pass that sense check, then comes about a lot of other threats, I would say like, is the model that I'm using, did it have a fair representation of all data set? There's a classic case study on one of a big cloud computing giant using an AI algorithm to filter resumes and they had to stop it immediately because the data set was all Ivy League, male, white, dominant, it didn't have the right representation. Over the 10 years, if I'm just hiring certain type of people, my data is inherently biased, no matter how good my algorithm is, if I don't have that data set. The other example is clarify AI. They got into trouble on using very biased data to give an outcome on some decision making to immigration, which has a bigger ramification. Then you talk about fairness, whether the AI system is fair to give you an output. So there was a funny story about a man and a woman in California living together, and I think the woman wasn't provided a credit card, even though everything, the postcode is the same, both of them work in the same company, and it was, I think it has to do with Apple Pay. 
Apple Pay wanted to bring in a silver credit card, Apple card or whatever it is, but then it is so unfair that the women who was equally qualified was not given the right credit limit, and the bank clearly said the algorithm said so. Then you have privacy concern, right? So all these generic models that you have that is available, even ChatGPT for that matter. Now you can chat with ChatGPT multiple times. You can talk about someone like Trevor Noah and you can say hey, can you create a joke? Now it has been trained with the jokes that he has done, it might be available publicly. But has the creator of model got a consent saying, hey Trevor, I'm going to use your content so that I can give better, and how many such consent, even Wikipedia, if you look into Wikipedia, about 80 percent of the information is public, but it is not diversified. What I mean by that is you can search for a lot of information. If the person is from America or from UK or from Europe, maybe from India to some extent, but what is the quality of data, if you think about countries in Africa, what do you think about South America? I mean, it is not representing the total diversity of data, and we have this large language model, which has been just trained on that data, right? So there is a bias and because of that bias, your outcome might not be fair. So these two are the main things, and of course the privacy concern. So if someone goes and says, hey, you have used my data, you didn't even ask me, then you're into lawsuit. Without getting a proper consent, again, it's a bad world, it's very fast moving and people don't even, including me, I don't even read every terms and condition, I just scroll down, tick, confirm, but those things are the things where I think education should come into play. Think about it, because people don't understand what could go wrong, not to them, but someone like them. Then there is a big fear of job displacement, like if I put this AI system, what will I do with my workforce? Say I had ten people, you need to think about, you need to reimagine your workplace. These are the ten jobs my ten people are doing. If I augment six of those jobs, how can I use my ten resources effectively to do something different or that piece of puzzle is always, again, it goes back to the core values of the company, what they think about their people, how everything is back, but it's just that needs a lot of inputs from multiple stakeholders. Ula Ojiaku It ties back to the enterprise strategy, there is the values, but with technology as it has evolved over the years, things will be made obsolete, but there are new opportunities that are created, so moving from when people travelled with horses and buggies and then the automotive came up. Yes, there wasn't as much demand for horseshoes and horses and buggies, but there was a new industry, the people who would mechanics or garages and things like that. So I think it's really about that. Like, going back to what you're saying, how can you redeploy people? And that might involve, again, training, reskilling, and investing in education of the workforce so that they're able to harness AI and to do those creative things that you've emphasised over this conversation about human beings, that creative aspect, that ability to understand context and nuance and apply it to the situation. Bala Madhusoodhanan So I was fortunate to work with ForHumanity, an NGO which basically is trying to certify people to look into auditing AI systems. 
So EU AI Act is now in place, it will be enforced soon. So you need people to have controls on all these AI systems to protect - it's done to protect people, it's done to protect the enterprise. So I was fortunate enough to be part of that community. I'm still working closely with the Operation Research Society. Again, you should be passionate enough, you should find time to do it, and if you do it, then the universe will find a way to give you something interesting to work with. And our society, The Alan Turing Institute, the ForHumanity Society, I had a few ICO workshops, which was quite interesting because when you hear perspectives from people from different facets of life, like lawyers and solicitors, you would think, ah, this statement, I wouldn't interpret in this way. It was a good learning experience and I'm sure if I have time, I would still continue to do that and invest time in ethical AI. As technology, it's not only AI, it's ethical use of technology, so sustainability is also part of ethical bucket if you look into it. So there was an interesting paper it talks about how many data centres have been opened between 2018 to 2024, which is like six years and the power consumption has gone from X to three times X or two times X, so we have opened a lot. We have already caused damage to the environment with all these technology, and just because the technology is there, it doesn't mean you have to use it, but again, it's that educational bit, what is the right thing to do? And even the ESG awareness, people are not aware. Like now, if you go to the current TikTok trenders, they know I need to look into certified B Corp when I am buying something. The reason is because they know, and they're more passionate about saving the world. Maybe we are not, I don't know, but again, once you start educating and, telling those stories, humans are really good, so you will have a change of heart. Ula Ojiaku What I'm hearing you say is that education is key to help us to make informed choices. There is a time and place where you would need to use AI, but not everything requires it, and if we're more thoughtful in how we approach, these, because these are tools at the end of the day, then we can at least try to be more balanced in the risks and taking advantage of opportunities versus the risks around it and the impact these decisions and the tools that we choose to use make on the environment. Now, what books have you found yourself recommending most to people, and why? Bala Madhusoodhanan Because we have been talking on AI, AI Superpower is one book which was written by Kai-Fu Lee. There is this book by Brian Christian, The Alignment Problem: Machine Learning and Human Values alignment of human values and machine it was basically talking about what are the human values? Where do you want to use machine learning? How do you basically come up with a decision making, that's a really interesting read. Then there is a book called Ethical Machines by Reid Blackman. So it talks about all the ethical facets of AI, like biases, fairnesses, like data privacy, transparency, explainability, and he gives quite a detail, example and walkthrough of what that means. Another interesting book was Wanted: Human-AI Translators: Artificial Intelligence Demystified by a Dutch professor, again, really, really lovely narration of what algorithms are, what AI is, where, and all you should think about, what controls and stuff like that. So that is an interesting book. 
Harvard Professor Karim Lakhani, he wrote something called Competing in the Age of AI, that's a good book. The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You by Mike Walsh is another good book, which I finished a couple of months back. Ula Ojiaku And if the audience wants to find you, how can they reach out to you? Bala Madhusoodhanan They can always reach out to me at LinkedIn, I would be happy to touch base through LinkedIn. Ula Ojiaku Awesome. And do you have any final words and/or ask of the audience? Bala Madhusoodhanan The final word is, again, responsible use of technology. Think about not just the use case, think about the environmental impact, think about the future generation, because I think the damage is already done. So, at least not in this lifetime, maybe three or four lifetimes down the line, it might not be the beautiful earth that we have. Ula Ojiaku It's been a pleasure, as always, speaking with you, Bala, and thank you so much for sharing your insights and wisdom, and thank you for being a guest on the Agile Innovation Leaders Podcast. Bala Madhusoodhanan Thank you, lovely conversation, and yeah, looking forward to connecting with more like minded LinkedIn colleagues. Ula Ojiaku That's all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or your favourite podcast provider. Also share with friends and do leave a review on iTunes. This would help others find this show. I'd also love to hear from you, so please drop me an email at ula@agileinnovationleaders.com Take care and God bless!
Lead me, O Yahweh, in your righteousness because of my enemies; make your way straight before me. - Psalm 5:8 This Episode's Links and Timestamps: 00:00 – Scripture Reading 02:17 – Introduction 06:36 – My Commentary on Psalm 5 32:44 - The Best Male Hobbies – Speeed, YouTube 53:37 - ‘The Alignment Problem: Machine Learning and Human Values' by Brian Christian - Goodreads
In ‘The Measure of a Man', episode nine of the second season of Star Trek: The Next Generation, the Enterprise crew debate whether their robot companion, Data, is truly alive. More interesting for us, though, is the way they interact with artificial intelligence (AI) in general. Not just for what it tells us about how AI tools might evolve, but also for how we humans work with them. So in this special episode of The Mind Tools L&D Podcast, Ross Garner, Claire, Ross Dickie and Nahdia discuss: · How closely the Enterprise computer reflects current tools like ChatGPT, · Whether we want robots to work alongside us, · Whether the Turing Test still has relevance. During the discussion, Ross Garner talked about how Moderna is using ChatGPT, how ELIZA passed the Turing Test, and the ongoing discussion around whether ChatGPT's new voice is too similar to Scarlett Johansson's. Ross D discussed Microsoft's Copilot+ PCs, Brian Christian's book The Alignment Problem, and OpenAI CEO Sam Altman's views on neural networks. In ‘What I Learned This Week', Nahdia recommended the movie Atlas, available on Netflix. Ross Garner recommended following visual effects artist Todd Vaziri. For more from us, including access to our back catalogue of podcasts, visit mindtools.com/business. There, you'll also find details of our award-winning performance support toolkit, our off-the-shelf e-learning, and our custom work. Or become a member to support our show! Visit mindtools.com. Connect with our speakers: If you'd like to share your thoughts on this episode, connect with us on LinkedIn: · Ross Garner · Nahdia Khan · Ross Dickie · Claire Gibson
CONTENT
The guest today is Chris Lincoln Jones. Chris is a retired Army Officer and worked for 17 years in the Defence Industry where he was an Intelligence Surveillance and Target Acquisition Specialist with a particular knowledge of Uncrewed Air Systems. He has been a visiting lecturer at the University of South Wales on an MSc course in Global Governance and has contributed to the Birmingham University Advisory Group at the Institute for Conflict Cooperation and Security on their project: ‘Towards a global restraining consensus on the use of armed drones'. In 2017 he collaborated on the Ares Report for the ICRC: A Technical Analysis of the Employment, Accuracy and Effects of Indirect Artillery Weapons. His section dealt with the lawful use of lethal force and targeting procedures. Chris has also advised on two films, Eye in the Sky and Official Secrets, and has just published a book, Doctor Moore's Automaton, that looks at what could be achieved with current technology and AI if given enough money and will. On this pod we discuss his book, the military application of AI in targeting and on autonomous platforms, the morality of AI in warfare and the use of drones in Ukraine. Chris's book choice on Desert Island Dits is "The Alignment Problem - Machine Learning & Human Values" by Brian Christian. My choice is Chris's book "Dr Moore's Automaton".
WHERE TO GET OUR DESERT ISLAND DITS BOOK CHOICES
Most of our book recommendations can be bought via the Unconventional Soldier Bookshop. 10% of each purchase supports the pod and helps independent bookstores' online sales.
"BUY ME A COFFEE"
If you want to support the podcast you can buy me a coffee here.
SOCIAL MEDIA
Instagram @the_unconventional_soldier_pod. Facebook @lateo82. Twitter @TheUCS473. Download these and other platforms via Link Tree. Email us: unconventionalsoldier@gmail.com. This episode is brought to you in association with ISARR, a veteran owned company. Hosted on Acast. See acast.com/privacy for more information.
We dive into the agricultural revolution with Eddy Alvarado, founder of Agro360, unpacking how technology is redefining agriculture in the Dominican Republic. We explore how Agro360 is transforming the agricultural landscape, integrating artificial intelligence and advanced systems to tackle challenges such as climate change and crop diseases. Eddy shares his vision for the future of sustainable agriculture and how technological knowledge becomes a crucial tool for farmers, guaranteeing production that is more efficient and more respectful of the environment. Discover with us how technological innovation not only improves agricultural productivity, but also promises a new era for the Dominican countryside, marking the path toward smart, sustainable agriculture. ---------- *Eddy Alvarado* LinkedIn: https://www.linkedin.com/in/soyeddyalvarado/ *Mangú Tecnológico* Instagram: https://www.instagram.com/mangutecnologico/ Patreon: https://www.patreon.com/mangutecnologico ---------- *References:* ● Plant production and protection, FAO: https://www.fao.org/plant-production-protection/about/es ● Agro360 - Agricultural Technologies: https://www.youtube.com/watch?v=FPULWV3bn_4 ● Why family farmers need greater access to technology, UNCTAD: https://unctad.org/es/news/por-que-los-agricultores-familiares-necesitan-un-mayor-acceso-la-tecnologia ● Book: Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths: https://a.co/d/4xHI5OH ---------- This episode comes to you thanks to: *Tinkko* A modern yet professional space, with a community ready to help you and to simplify many processes that would otherwise mean expenses for you. At Tinkko you only need to worry about growing your venture or business. Learn more at https://tinkko.com. ---------- Technology is a world full of extraordinary adventures, but as we know, it is packed with technical jargon. In this podcast, Alian, Gregori and Oscar set all of that aside and show you the simple side of technology. This audio has rights reserved for Mangú Tecnológico Media Group and its owners. Use of content produced by Mangú Tecnológico without prior authorization may have legal consequences. ---------- For collaborations, write to us at: mangutecnologico@gmail.com --- Send in a voice message: https://podcasters.spotify.com/pod/show/mangu-tecnolgico/message
In this third part of the Toekomstperspectieven (Future Perspectives) series, Willem de Witte discusses the concerns surrounding AI with Sebastiaan Crul and Pim Korsten of Freedom Lab. Sources and links for this episode: Part 1 of the Toekomstperspectieven series: https://www.youtube.com/watch?v=5mYGLUG8lus Part 2: https://www.youtube.com/watch?v=EUGdJvJv90I The Freedom Lab website: https://www.freedomlab.com/ The book 'Human Compatible: Artificial Intelligence and the Problem of Control' by Stuart Russell: https://www.goodreads.com/nl/book/show/44767248 The book 'The Alignment Problem: Machine Learning and Human Values' by Brian Christian: https://www.goodreads.com/nl/book/show/50489349
Before Shift Happens heads into its winter break, we have one more special bonus episode for you to close out this podcast year: a conversation Miriam had with the US AI expert and bestselling author Brian Christian at this year's Ada Lovelace Festival. The two talk about the "alignment problem", the challenge of synchronizing the goals and functions of artificial intelligence with human values and goals, which Brian Christian describes in his book of the same name. They discuss the numerical representation of human concepts, ChatGPT's universal business model, and hyperlocal AI chatbots.
Hi friends, we're on hiatus for the fall. To tide you over, we're putting up some favorite episodes from our archives. Enjoy! ---- [originally aired February 17, 2021] Guess what folks: we are celebrating a birthday this week. That's right, Many Minds has reached the ripe age of one year old. Not sure how old that is in podcast years, exactly, but it's definitely a landmark that we're proud of. Please no gifts, but, as always, you're encouraged to share the show with a friend, write a review, or give us a shout out on social. To help mark this milestone we've got a great episode for you. My guest is the writer, Brian Christian. Brian is a visiting scholar at the University of California Berkeley and the author of three widely acclaimed books: The Most Human Human, published in 2011; Algorithms To Live By, co-authored with Tom Griffiths and published in 2016; and most recently, The Alignment Problem. It was published this past fall and it's the focus of our conversation in this episode. The alignment problem, put simply, is the problem of building artificial intelligences—machine learning systems, for instance—that do what we want them to do, that both reflect and further our values. This is harder to do than you might think, and it's more important than ever. As Brian and I discuss, machine learning is becoming increasingly pervasive in everyday life—though it's sometimes invisible. It's working in the background every time we snap a photo or hop on Facebook. Companies are using it to sift resumes; courts are using it to make parole decisions. We are already trusting these systems with a bunch of important tasks, in other words. And as we rely on them in more and more domains, the alignment problem will only become that much more pressing. In the course of laying out this problem, Brian's book also offers a captivating history of machine learning and AI. Since their very beginnings, these fields have been formed through interaction with philosophy, psychology, mathematics, and neuroscience. Brian traces these interactions in fascinating detail—and brings them right up to the present moment. As he describes, machine learning today is not only informed by the latest advances in the cognitive sciences, it's also propelling those advances. This is a wide-ranging and illuminating conversation folks. And, if I may say so, it's also an important one. Brian makes a compelling case, I think, that the alignment problem is one of the defining issues of our age. And he writes about it—and talks about it here—with such clarity and insight. I hope you enjoy this one. And, if you do, be sure to check out Brian's book. Happy birthday to us—and on to my conversation with Brian Christian. Enjoy! A transcript of this show is available here. Notes and links 7:26 - Norbert Wiener's article from 1960, ‘Some moral and technical consequences of automation'. 8:35 - ‘The Sorcerer's Apprentice' is an episode from the animated film, Fantasia (1940). Before that, it was a poem by Goethe. 13:00 - A well-known incident in which Google's nascent auto-tagging function went terribly awry. 13:30 - The ‘Labeled Faces in the Wild' database can be viewed here. 18:35 - A groundbreaking article in ProPublica on the biases inherent in the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool. 25:00 – The website of the Future of Humanity Institute, mentioned in several places, is here. 25:55 - For an account of the collaboration between Walter Pitts and Warren McCulloch, see here. 
29:35 - An article about the racial biases built into photographic film technology in the 20th century. 31:45 - The much-investigated Tempe crash involving a driverless car and a pedestrian. 37:17 - The psychologist Edward Thorndike developed the "law of effect." Here is one of his papers on the law. 44:40 - A highly influential 2015 paper in Nature in which a deep-Q network was able to surpass human performance on a number of classic Atari games, and yet not score a single point on ‘Montezuma's Revenge.' 47:38 - A chapter on the classic "preferential looking" paradigm in developmental psychology. 53:40 - A blog post discussing the relationship between dopamine in the brain and temporal difference learning. Here is the paper in Science in which this relationship was first articulated. 1:00:00 - A paper on the concept of "coherent extrapolated volition." 1:01:40 - An article on the notion of "iterated distillation and amplification." 1:10:15 - The fourth edition of a seminal textbook by Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, is available here: http://aima.cs.berkeley.edu/ 1:13:00 - An article on Warren McCulloch's poetry. 1:17:45 - The concept of "reductions" is central in computer science and mathematics. Brian Christian's end-of-show reading recommendations: The Alignment Newsletter, written by Rohin Shah; Invisible Women, by Caroline Criado Perez; and The Gardener and the Carpenter, by Alison Gopnik. You can keep up with Brian at his personal website or on Twitter. Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website or follow us on Twitter: @ManyMindsPod.
In this episode, John and Sven discuss risk and technology ethics. They focus, in particular, on the perennially popular and widely discussed problems of value alignment (how to get technology to align with our values) and control (making sure technology doesn't do something terrible). They start the conversation with the famous case study of Stanislav Petrov and the prevention of nuclear war. You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services. Recommendations for further reading: Atoosa Kasirzadeh and Iason Gabriel, 'In Conversation with AI: Aligning Language Models with Human Values'; Nick Bostrom, relevant chapters from Superintelligence; Stuart Russell, Human Compatible; Langdon Winner, 'Do Artifacts Have Politics?'; Iason Gabriel, 'Artificial Intelligence, Values and Alignment'; Brian Christian, The Alignment Problem. Discount: You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.
The science fiction fantasy of machine consciousness is swiftly moving towards becoming a reality. In 2022 a Google engineer was fired after publicly claiming the LaMDA chatbot he'd been testing was sentient, and the chief scientist of the company behind ChatGPT has tweeted that some of the most cutting-edge AI networks might be "slightly conscious". So what would it mean for humans if AI technology became conscious? And how would we even know it was? Computer scientist Brian Christian is the author of The Alignment Problem, Algorithms to Live By (with Tom Griffiths), and The Most Human Human. He is part of the AI Policy and Governance Working Group at the Institute for Advanced Study.
In Coaching to Flourish #087, CTEDU founder John Andrew Williams brings us a special episode about the book ‘Algorithms to Live By,' by Brian Christian and Tom Griffiths, an exploration of how algorithms apply to the human mind and everyday decision-making. John discusses his top 5 insights from the book, and how each of them apply in a coaching setting. Join us for this illuminating episode! Prefer to watch or read this episode? Find everything you need at our blog: https://www.coachtrainingedu.com/blog/ Interested in becoming a Life Coach? Learn more about our programs and speak to an advisor today: https://www.coachtrainingedu.com/training/ Do you have a coaching question you'd like to have us answer on the podcast? Submit your questions to contact@coachtrainingedu.com!
In our final episode, host Gary Marcus shares his hopes for and fears about an AI-driven future. On the one hand, AI could accelerate solutions to some of society's most difficult problems; on the other, it could deepen existing problems and create new existential risks to humanity. Getting it right, Marcus emphasizes, depends on establishing both national and international standards for the industry as soon as possible. He is joined by Dr. Alondra Nelson, who led the White House Office of Science and Technology Policy in 2021, and Brian Christian, an AI researcher and the author of The Alignment Problem: Machine Learning and Human Values. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Today, we're pulling one of our best episodes from the vaults, featuring the brilliant Brian Christian. Recommend this show by sharing the link: pod.link/2Pages One thing I don't mention often is that the thesis I wrote for my law degree was an attempt to combine my interest in literature with a perspective on law. So I wrote about the phenomenon of plain English: that's trying to write law without the legalese. And I tried to write about it through the lens of literary theories of language. I honestly did not understand what I was trying to do. And also nobody in law school understood what I was trying to do. What I can see now, with the benefit of hindsight and some self-esteem and some marketing speak, is that I was a boundary rider. I've come to learn that the interesting things often take place on the edges, those intermediate areas where X meets Y and some sort of new life is born. Brian Christian is a boundary rider too. He's just way more successful and interesting than law school Michael. He thinks deeply and writes about deep patterns of life through technology and AI and algorithms. He's the author of The Most Human Human, The Alignment Problem, and Algorithms to Live By. After the introduction I just gave you, you're probably going to guess that Brian isn't just a science guy. Get book links and resources at https://www.mbs.works/2-pages-podcast/ Brian reads from Gödel, Escher, Bach by Douglas Hofstadter. [Reading begins at 15:10] Hear us Discuss: Metaphor can be one of the main mechanisms by which science happens. [6:20] | Rules that are delightful to break. [24:35] | "I have this deep conviction […] we are on to some philosophical paydirt here. There is a very real way in which we are building [AI] systems in our own image, and as a result they come to be a mirror for ourselves." [28:40] | What is the heart of the human experience? [38:10] | "Humans are not so special." [42:50]
On Season 4 of the Theory and Practice podcast, hosts Anthony Philippakis and Alex Wiltschko explore the many aspects of what it means to be human in the new era of artificial intelligence: from communication to robotic surgery and decision-making. In episode 1, Dr. Greg Corrado, Distinguished Scientist and Head of Health AI at Google Health, explains how to responsibly introduce AI into healthcare. AI has proven itself in detecting diabetic eye disease, managing the risk of cardiovascular disease, and even encoding medical knowledge to answer patient queries, among many new and exciting applications. Greg discusses safety concepts in AI: bias, robustness, transparency, explainability, and groundedness. He also discusses developing and maintaining datasets reflecting real-world patient realities and values. Following this conversation, Anthony and Alex discuss Brian Christian's book “The Most Human Human.”
What does ‘AI alignment' mean? Can philosophy help make AI less biased? How does reinforcement learning influence AI's unpredictability? How does AI's ‘frame problem' affect its ability to understand objects? What role does human feedback play in machine learning and AI fine-tuning? An acclaimed author and researcher who explores the human implications of computer science, Brian Christian is best known for his bestselling series of books: "The Most Human Human" (2011), "Algorithms to Live By" (2016), and "The Alignment Problem" (2020). The latter explores the ethical issues in AI, highlighting the biases and unintended outcomes in these systems and the crucial efforts to resolve them, defining our evolving bond with technology. With his deep insights and experiences, Brian brings a unique perspective to the conversation about ethics and safety challenges confronting the field of AI. Listen to the episode and read the transcript at superset.com Guest: Brian Christian Twitter: @supersetstudio @ClosedSeshPod @tommychavez @vsvaidya
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/brian_christian_how_to_manage_your_time_more_effectively_according_to_machines ■Post on this topic (You can get FREE learning materials!) https://englist.me/69-academic-words-reference-from-brian-christian-how-to-manage-your-time-more-effectively-according-to-machines-ted-talk/ ■Youtube Video https://youtu.be/5e7lG15tpbg (All Words) https://youtu.be/Sby-x_UnVPU (Advanced Words) https://youtu.be/qmpKP_gLLw8 (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)
The Alignment Problem by Brian Christian was an excellent read. If you are interested in AI—including its history, its future implications, and the philosophical ramifications—then this is a book for you. Don't forget to subscribe at productthinking.cc Weekly newsletter: The Alignment Problem: Machine Learning and Human Values - Book Review Or just want to leave a tip: buy me a coffee? Twitter: @kylelarryevans, @producthinking and @prod_by_design TikTok: @producthinking and @kylelarryevans ★ Support this podcast ★
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/brian_christian_how_to_get_better_at_video_games_according_to_babies ■Post on this topic (You can get FREE learning materials!) https://englist.me/57-academic-words-reference-from-brian-christian-how-to-get-better-at-video-games-according-to-babies--ted-talk/ ■Youtube Video https://youtu.be/ElNl-jfnFkQ (All Words) https://youtu.be/k-NDw5U4okw (Advanced Words) https://youtu.be/doH4D6mn3jc (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)
This week Elon Musk and Apple co-founder Steve Wozniak were among signatories to an open letter calling for a six-month pause in the training of systems more powerful than GPT-4. They're part of a growing chorus worried that the unchecked speed of AI development could result in unintended harm. Computer scientist and author Brian Christian writes about one of the fundamental problems of AI development in his book The Alignment Problem: how do we ensure machine learning systems represent the best human values rather than magnify the worst? Christian is also the bestselling author of Algorithms to Live By (with Tom Griffiths) and The Most Human Human. He holds degrees in philosophy, computer science, and poetry and is a visiting scholar at the University of California, Berkeley.
Artificial intelligence is now so much a part of our lives that it seems almost mundane. So is that something to be excited about? Or is the world a scarier place because of it? NPR's Bobby Allyn reports on how some new AI advances showcase both the power and the peril of the technology. And NPR's Ari Shapiro talks to Brian Christian, author of the book "The Alignment Problem: Machine Learning and Human Values", about what we might see in the field of artificial intelligence in the year to come. In participating regions, you'll also hear a local news segment to help you make sense of what's going on in your community. Email us at considerthis@npr.org.
This past year, we've witnessed considerable progress in the development of artificial intelligence, from the release of image generators like DALL-E 2 to chatbots like ChatGPT and Cicero to a flurry of self-driving cars. So this week, we're revisiting some of our favorite conversations about the rise of A.I. and what it means for the world. Brian Christian's “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I've read. At its center is the term from which the book gets its name. “Alignment problem” originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that's a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn't understand how it really worked or what we had actually asked it to do. So this conversation, originally recorded in June 2021, is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we're dealing with algorithms we don't really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I've ever heard of, why the problem of automation isn't so much job loss as dignity loss, and much more. Mentioned: “Human-level control through deep reinforcement learning”; “Some Moral and Technical Consequences of Automation” by Norbert Wiener. Recommendations: "What to Expect When You're Expecting Robots" by Julie Shah and Laura Major; "Finite and Infinite Games" by James P. Carse; "How to Do Nothing" by Jenny Odell. Thoughts? Email us at ezrakleinshow@nytimes.com. Guest suggestions? Fill out this form. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. “The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.
Welcome back, Team! This week we're bringing you Part Two of our discussion with Megan Young, Performance Coach for the Seattle Sounders! In this episode, we dive into Megan's approach to training. She discusses programming for teams, training around competition schedules, and how she takes training data into account. Books that Megan recommends on the show: Synergetics by Buckminster Fuller; Understanding the Human Foot by James Earls; Fascia by Robert Schleip; Algorithms to Live By by Brian Christian and Tom Griffiths; Soccermatics by David Sumpter; The Brain That Changes Itself by Norman Doidge; Supertraining by Yuri Verkhoshansky; Theory U by C. Otto Scharmer; The Rise of Superman by Steven Kotler; Why We Sleep by Matthew Walker. 50% OFF Just Move — https://justmove.com/50off Ask us a question — https://kaisafit.com/ask Check out the Just Move library! — https://justmove.com/ Check out Start Moving, a workout series for true beginners! — https://startmoving.com/
There is no shortage of places to find information. You have social channels, news outlets, and the internet. Further, our brains are limited in the amount of information they can hold, and all of this combined can take an emotional toll. Ross Dawson joins Kevin to discuss information overload and how it can be perceived as a problem or an opportunity. Key Points: Ross Dawson talks about the difference between overwhelm and abundance. He shares 5 powers to manage information overload. These include: Purpose - understanding why we want information at all, so we can understand how to use it well. Framing - connecting all the information we encounter to make sense of it, build knowledge, and develop expertise. Filtering - being able to see what is useful to us and discarding everything else. Attention - intentionally selecting where we direct our awareness, not just through focus, but in exploring and regenerating ourselves. Synthesis - the ultimate human capability of connecting the dots to understand the world, see opportunities, and make better decisions. He discusses practical tools we can use for filtering. Meet Ross: Name: Ross Dawson. His Story: Ross Dawson is the author of Thriving on Overload: The 5 Powers for Success in a World of Exponential Information. He is the Founding Chairman of the Advanced Human Technologies group of companies, with clients including industry leaders such as Citibank, Coca-Cola, Google, Microsoft, News Limited, Procter & Gamble, PwC, and Walmart. Worth Mentioning: https://thrivingonoverload.com/ https://twitter.com/rossdawson This episode is brought to you by... Remarkable Masterclasses. Each masterclass is designed to help you become the remarkable leader and human you were born to be. Details on how to get on board for a specific skill or get discounts each month can be found on our website. Book Recommendations: Thriving on Overload: The 5 Powers for Success in a World of Exponential Information by Ross Dawson; The Beginning of Infinity: Explanations That Transform the World by David Deutsch; The Alignment Problem: Machine Learning and Human Values by Brian Christian. Related Episodes: Beyond Collaboration Overload with Rob Cross; A Minute to Think with Juliet Funt
What's up, Team! This week, we've got a special guest: Megan Young, Performance Coach for the Seattle Sounders! In Part One of this episode, Kaisa and Megan discuss how Megan got her start in the industry and trace the upward trajectory of her career path, from Megan's early beginnings and the role of sports in her life growing up, to her journey towards personal training and later, athletic coaching. They touch on Megan's motivation and drive, her successful battle with leukemia, her stellar educational pathway, and how it all led her to her role today working for the Sounders. Be sure to tune in next week for Part Two! Books that Megan recommends on the show: Synergetics by Buckminster Fuller; Understanding the Human Foot by James Earls; Fascia by Robert Schleip; Algorithms to Live By by Brian Christian and Tom Griffiths; Soccermatics by David Sumpter; The Brain That Changes Itself by Norman Doidge; Supertraining by Yuri Verkhoshansky; Theory U by C. Otto Scharmer; The Rise of Superman by Steven Kotler; Why We Sleep by Matthew Walker. 50% OFF Just Move — https://justmove.com/50off Ask us a question — https://kaisafit.com/ask Check out the Just Move library! — https://justmove.com/ Check out Start Moving, a workout series for true beginners! — https://startmoving.com/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A philosopher's critique of RLHF, published by ThomasW on November 7, 2022 on LessWrong. In the spring, I went to a talk with Brian Christian at Yale. He talked about his book, The Alignment Problem, and then there was an audience Q&A. There was a really remarkable question in that Q&A, which I have transcribed here. It came from the Yale philosophy professor L.A. Paul. I have since spoken to Professor Paul, and she has done some work on AI (and coauthored the paper “Effective Altruism and Transformative Experience”) but my general impression was that she hasn't yet spent a huge amount of time thinking about AI safety. Partly because of this question, I invited her to speak at the CAIS Philosophy Fellowship, which she will be doing in the spring. The transcript below really doesn't do her question justice, so I'd recommend watching the recording, starting at 55 minutes. During the talk, Brian Christian described reinforcement learning from human feedback (RLHF), specifically the original paper, where a model was trained with a reward signal generated by having humans rate which of two videos of a simulated robot was closer to a backflip. Paul's question is about this (punctuation added, obviously): L.A. Paul: So, I found it very interesting, but I'm just not fully understanding the optimistic note you ended on...so, in that example, what was key was that the humans that did the "better" thing, knew what a backflip was. It was something they recognized. It was something they recognized so they could make a judgment. But the real issue for us is recognizing, or for machines is recognizing, entirely new kinds of events, like a pandemic, or a president that doesn't follow the rule of law, or something interesting called the internet, you know there's radically new technological advances. And when something like that happens, those rough judgments of "this is better than that"... In other words, those new things: first, we're terrible at describing them before they come and predicting them. (Although, humans are very good at a kind of one shot learning, so they can make judgments quite quickly. Machines are not like that). L.A. Paul: Moreover, these better-than judgments that the machine might be relying on could I think quite straightforwardly be invalidated, because everything changes, or deep things change, in all kinds of unexpected ways. That just seems to be...that's the real problem. It's not... using machines for things that we already have control over. No, it's about trust with entirely new categories of events. So, I was just sort of deeply unclear on... I mean that seems like a nice thing...but that's not, for me, the real alignment problem. Brian Christian: [Agrees, and then talks about calibrated uncertainty in models.] L.A. Paul: Sorry, there's a difference between uncertainty, where you're not sure if it's A, B, or C, and unknown, OK, which is a different kind of uncertainty in probabilistic literature. And then you haven't got, oh is it A, is it B, is it C? It's some other kind of thing that you can't classify and that's the problem I'm trying to target. I'm not claiming that these are original ideas or that they represent all possible critiques of RLHF. Rather: I think that the phrasing is especially crisp (for speaking, I'm sure she could write it even more crisply). 
I think it's interesting that somebody who is very intelligent and accomplished in philosophy but is not (I think) steeped in the alignment literature could seemingly easily carve this problem at its joints. Also, the rest of the talk is pretty good too! Especially the Q&A, there were some other pretty good questions (including one from myself). But this one stood out for me. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Single and looking for love, Dr Robert Epstein found himself chatting with a slim, attractive brunette online. She seemed perfect... perhaps even too good to be true. Dr Epstein is an expert on artificial conversation - so surely he'd be the last person to fall for a computer? Chatbots fool us more often than we think... especially when they replicate our very worst conversational habits. To read more on this topic try Brian Christian's “The Most Human Human”. For a full list of sources go to timharford.com. See omnystudio.com/listener for privacy information.
In this episode we talk to Percussionist, Photographer, and Music Promoter, Brian Christian Adam aka The Endeavorist. We discuss the Chicagoland music scene and what drives him to be one of the busiest and most sought-after musicians and photographers in the Chicagoland area.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Peter Eckersley (1979-2022), published by Gavin on September 3, 2022 on The Effective Altruism Forum. Security engineer, digital rights activist, AI safety and policy researcher. Beloved in these communities. Eckersley is most famous as an advocate/developer at the intersection of tech and legal activism. His work on the Let's Encrypt free certificate authority, HTTPS Everywhere, and the SSL Observatory made the internet significantly less terrible. Mizza: Let's Encrypt is something we all came to take for granted very quickly, but lots of us remember when getting an SSL certificate was an expensive and tedious process. Deprecating a billion dollar industry overnight and providing better security for internet users everywhere is a hell of a legacy to leave behind, and I hope one that will be an inspiration for generations to come. He identified "device fingerprinting" as a major privacy hole and spent a long time trying to mitigate it. His PhD thesis was a characteristic mix of software engineering, economic theory, and law. We'd call it Web3 these days. He was the 8th person ever to take the GWWC pledge. A few places say he was on the board of CEA US at some point. Around 2017 he went into AI risk. His work bridged AI ethics and AI safety, covering recidivism prediction, self-driving cars, security against ML hacking, and military AI but also big-brain topics like impossibility theorems in social choice and ML, the cybersecurity implications of emulated minds, and the queer theory of human alignment. At the end, he was running the AI Objectives Institute (notable for mobilising people already concerned about corporate maximisation) and mentoring for PIBBSS. He was on many of the big "technical AI policy" papers of the last few years. The Alignment Problem is dedicated to him, after he convinced his friend Brian Christian of it. During covid he worked intensely on private contact tracing algorithms. He received a cancer diagnosis a week before his death. A minor consolation of dying in the present age is people telling beautiful stories about you for strangers and for posterity. So you can know that he was fluent in French, that he organised guerilla meetups, that he was an avid cyclist, that he was a memorable remedial teacher of programming. This is an Eckersley quotes account. Similarly, I can tell you he was writing code until one month before his death. He lived sans LinkedIn. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
In episode 38 of The Gradient Podcast, Daniel Bashir speaks to Been Kim. Been is a staff research scientist at Google Brain focused on interpretability–helping humans communicate with complex machine learning models by not only building tools but also studying how humans interact with these systems. She has served with a number of conferences including ICLR, NeurIPS, ICML, and AISTATS. She gave the keynotes at ICLR 2022, ECML 2020, and the G20 meeting in Argentina in 2018. Her work TCAV received the UNESCO Netexplo award, was featured at Google I/O 2019 and in Brian Christian's book The Alignment Problem. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter. Outline: (00:00) Intro; (02:20) Path to AI/interpretability; (06:10) The Progression of Been's thinking / PhD thesis; (11:30) Towards a Rigorous Science of Interpretable Machine Learning; (24:52) Interpretability and Software Testing; (27:00) Been's ICLR Keynote and Human-Machine “Language”; (37:30) TCAV; (43:30) Mood Board Search and CAV Camera; (48:00) TCAV's Limitations and Follow-up Work; (56:00) Acquisition of Chess Knowledge in AlphaZero; (1:07:00) Daniel spends a very long time asking “what does it mean to you to be a researcher?”; (1:09:00) The everyday drudgery, more lessons from Been; (1:11:32) Outro. Links: Been's website; CAVcamera app. Get full access to The Gradient at thegradientpub.substack.com/subscribe
Tech Refactored is on a short summer vacation. We can't wait to bring you Season Three of our show, beginning in August 2022, but as we near 100 total episodes our team needs a beat to rest and recharge. While we're away, please enjoy some summer staff favorites. The following episode was originally posted in February of 2022. Best selling author Brian Christian joins the podcast to discuss machine learning and his latest book, The Alignment Problem. Brian and Gus cover the alignment problem - when what we intend to teach machines to do, isn't what they do - and the many challenges and concerns surrounding artificial intelligence and machine learning. This episode was the first featuring a speaker from this semester's speaker series, a part of the Nebraska Governance and Technology Center's Fellows Program. Coming later this season is Christopher Ali on rural broadband, and Anita Allen on race and privacy.
Apostle Brian spent years in the deliverance ministry. Then after having a vision of continually shooting himself in the foot, he changed his theology. If you are tired of being exhausted from constantly being in spiritual warfare, this is a must-listen episode. Father of Glory Global Ministry Website Purchase Apostle Brian's Book
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resources I send to AI researchers about AI safety, published by Vael Gates on June 14, 2022 on LessWrong. This is my masterlist of resources I send AI researchers who are mildly interested in learning more about AI safety. I pick and choose which resources to send based on the researcher's interests. The resources at the top of the email draft are the ones I usually send, and I add in later sections as seems useful. I'll also sometimes send The Alignment Problem, Human-Compatible, or The Precipice. I've also included a list of resources that I had students read through for the Stanford first-year course "Preventing Human Extinction", though I'd most recommend sufficiently motivated students read AGISF Technical Agenda. These reading choices are drawn from the various other reading lists; this is not original in any way, just something to draw from if you're trying to send someone some of the more accessible resources. There's a decent chance that I'll continue updating this post as time goes on, since my current use case is copy-pasting sections of this email to interested parties. Note that "I" and "Vael" are mentioned a few times, so you'll need to edit a bit if you're copy-pasting. Happy to make any edits and take suggestions. [Crossposted to the EA Forum] List for AI researchers: Hello X, Very nice to speak to you! As promised, some resources on AI alignment. I tried to include a bunch of stuff so you could look at whatever you found interesting. Happy to chat more about anything, and thanks again. Introduction to the ideas: "The case for taking AI seriously as a threat to humanity" by Kelsey Piper (Vox); The Most Important Century and specifically "Forecasting Transformative AI" by Holden Karnofsky, blog series and podcast (most recommended for timelines); A short interview from Prof. Stuart Russell (UC Berkeley) about his book, Human-Compatible (the other main book in the space is The Alignment Problem, by Brian Christian, which is written in a style I particularly enjoyed). Technical work on AI alignment: Empirical work by DeepMind's Safety team on alignment; Empirical work by Anthropic on alignment; Talk (and transcript) by Paul Christiano describing the AI alignment landscape in 2020; Podcast (and transcript) by Rohin Shah, describing the state of AI value alignment in 2021; Alignment Newsletter and ML Safety Newsletter; Unsolved Problems in ML Safety by Hendrycks et al. (2022); Alignment Research Center; Interpretability work aimed at alignment: Elhage et al. (2021) and Olah et al. (2020); AI Safety Resources by Victoria Krakovna (DeepMind) and Technical Alignment Curriculum. Introduction to large-scale risks from humanity, including "existential risks" that could lead to the extinction of humanity: The first third of this book summary (copied below) of the book "The Precipice: Existential Risk and the Future of Humanity" by Toby Ord. Chapter 3 is on natural risks, including risks of asteroid and comet impacts, supervolcanic eruptions, and stellar explosions. Ord argues that we can appeal to the fact that we have already survived for 2,000 centuries as evidence that the total existential risk posed by these threats from nature is relatively low (less than one in 2,000 per century). Chapter 4 is on anthropogenic risks, including risks from nuclear war, climate change, and environmental damage.
Ord estimates these risks as significantly higher, each posing about a one in 1,000 chance of existential catastrophe within the next 100 years. However, the odds are much higher that climate change will result in non-existential catastrophes, which could in turn make us more vulnerable to other existential risks. Chapter 5 is on future risks, including engineered pandemics and artificial intelligence. Worryingly, Ord puts the risk of engineered pandemics causing an existential ...
Best selling author Brian Christian joins the podcast to discuss machine learning and his latest book, The Alignment Problem. Brian and Gus cover the alignment problem - when what we intend to teach machines to do, isn't what they do - and the many challenges and concerns surrounding artificial intelligence and machine learning. This episode is the first featuring a speaker from this semester's speaker series, a part of the Nebraska Governance and Technology Center's Fellows Program. Coming later this season is Christopher Ali on rural broadband, and Anita Allen on race and privacy.
“There's freedom in radical authenticity” The Truth Alchemist himself, Brian Christian Jenkins, joins us on Fam Life to share the potent relationship tools of curiosity and deep listening. As a leadership and self-mastery coach, Brian helps couples and individuals into their highest and deepest levels of authenticity. In this empowering episode, Brian and Phil share why narrowly focusing on outcomes prevents joy, how people pleasing destroys relationships, and where societal programming limits men. Holding back on your truth has consequences! Lean into a men's circle or support system, and express what's on your heart for the sake of ultimate freedom. Highlights: [03:30] How to use curiosity to better your life. [08:15] Why dropping the need to be right is VITAL in relationships. [11:35] How being on a men's team helps Phil in emotional regulation. [16:20] Do men have a difficult time expressing joy and bliss? [20:11] Why you won't always feel 100% in purpose. [25:39] Does our societal programming limit our potential? [33:03] How to move on from shameful acts from the past. [39:50] Why the honorable action is often only known by the doer. [42:30] Why there's freedom in authenticity. [47:01] How people pleasing sacrifices our inner creativity. Connect with Brian Jenkins: Instagram || @brianchristianjenkins Website || brianchristianjenkins.com Rising Man Links: Instagram || @risingmanmovement DOJO || A 4 day, carefully crafted sequence of rigorous exercises that will challenge the limits of your mental, physical, and emotional fortitude. Men's Circle || Are you ready to join other men JUST LIKE YOU who have found a way to RISE ABOVE life's challenges and create a life of purpose and fulfillment? Join our online FIRE CIRCLE! Online Course || IGNITE is a 12 week online program designed to ignite your purpose and propel you into freedom. Features weekly calls, online modules, growth assignments, and community support! Initiation || Compass is a 4 day 4 night Vision Fast in the wilderness, with preparation and incorporation calls in the months before and after. YouTube || The Rising Man Movement Website || RisingMan.org Dancing Bee Memorial Scholarship Fund || Supports our mission to provide young men with access to the medicine of nature, brotherhood, and transformational experiences.
This is Cognitive Revolution, my show about the personal side of the intellectual journey. Each week, I interview an eminent scientist, writer, or academic about the experiences that shaped their ideas. The show is available wherever you listen to podcasts. Tom Griffiths is Professor of Psychology and Computer Science at Princeton University, where he directs the Computational Cognitive Science Lab. Tom uses algorithms from AI to inform his work as a psychologist—testing the ways in which humans align with or deviate from the standards set by the AI models. He's a central figure in this field, and in this episode we go deep on how it first occurred to Tom to use computers to study the mind—as well as where this work has taken him over the years. Tom recently released a podcast series through Audible, co-hosted with Brian Christian, called Algorithms at Work. I finished it recently and can confidently say it's one of the best podcast series I'll listen to all year! Like this episode? Here's another one to check out: I'd love to know what you thought of this episode! Just reply to this email or send a note directly to my inbox. Feel free to tweet me @CodyKommers. You can also leave a rating for the show on iTunes (or another platform). This is super helpful, as high ratings are one of the biggest factors platforms look at in their recommender system algorithms. The better the ratings, the more they present the show to new potential listeners. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit codykommers.substack.com/subscribe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to become an AI safety researcher, published by peterbarnett on April 12, 2022 on The Effective Altruism Forum. What skills do you need to work on AI safety? And what can we learn from the paths people have taken into the field? We were inspired by the 80,000 Hours podcast with Catherine Olsson and Daniel Ziegler, which had great personal stories and advice about getting into AI safety, so we wanted to do it for a larger sample size. To better understand the lives and careers of AI safety researchers, I talked to eleven AI safety researchers in a variety of organizations, roles, and subfields. If you're interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail, including things like: How to develop research taste Which specific technical skills to build What non-technical skills you'll need The first section is about the general patterns we noticed, and the second section describes each person's individual path. Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about how they got into the field and what skills they have found valuable. This post is part of a project I've been working on at Nonlinear. Paths into AI safety What degrees did people get? Perhaps unsurprisingly, the researchers we talked to universally studied at least one STEM field in college, most commonly computer science or mathematics. Most had done research as undergraduates, although this often wasn't in AI safety specifically; people often said that getting early research experience was valuable. It is sometimes joked that the qualification needed for doing AI safety work is dropping out of a PhD program, which three people here have done (not that we would exactly recommend doing this!). Aside from those three, almost everyone else is doing or has completed a PhD. These PhD programs were often but not universally, in machine learning, or else they were in related fields like computer science or cognitive science. All of the researchers we talked with had at least familiarity with Effective Altruism and/or Rationality, with most people being actively involved in at least one of these communities. For influential reading, Superintelligence and writing by 80,000 Hours were each mentioned by three people as being particularly impactful on their decision to work on AI safety. It is worth noting that Superintelligence was one of the main books about risks from AI when the people we talked with were becoming interested, but may not be the best book to recommend to people now. More recent books would include Human Compatible by Stuart Russell, or The Alignment Problem by Brian Christian. Finally, many of the safety researchers participated in a program designed for early-career researchers, such as those run by MIRI, CHAI, and FHI. Skills The researchers interviewed described the utility of both technical skills (e.g. machine learning, linear algebra) and more general research skills (e.g. developing research taste, writing well). What technical skills should you learn? Technical AI safety research requires a strong understanding of the technical side of machine learning. 
By ‘technical' here I basically mean skills related to programming and math. Indeed, a strong command of concepts in the field is important even for those engaged in less technical roles such as field building and strategy. These skills still seem important for understanding the field, especially if you're talking to technical researchers. Depending on the area you work on, some specific areas will be more useful than others. If you want to do “hands-on” machine learning where you trai...
I speak to Brian Christian, author of 'The Alignment Problem – How AI Learns Human Values' about how we still face some significant hurdles when it comes to building machines that are truly capable of thinking and acting like humans.
For our third Wintermas podcast episode, in Episode 21 we'll discuss examples of how we are building AI systems and what can go wrong, and is going wrong, with the AI systems being developed, sold, and used today. And, of course, true to the many examples we've covered in previous TechnoSlipstream podcast episodes, we'll see that creating and designing technologies is not just a technical problem--it's a human problem as well, and unfortunately that human element is often left out of technical and engineering designs and discussions. To motivate our look at AI and human values, we'll do a deep dive into the book *The Alignment Problem: Machine Learning and Human Values* written by Brian Christian. Follow on Twitter: https://twitter.com/kendallgiles Join to support the show and for exclusive content, including episode notes, scripts, and other writings: https://patreon.com/kendallgiles
This is Cognitive Revolution, my show about the personal side of the intellectual journey. Each week, I interview an eminent scientist, writer, or academic about the experiences that shaped their ideas. The show is available wherever you listen to podcasts. I am excited to introduce my first return guest on the show, Brian Christian. I knew from the very first time we talked that I wanted to do a part two with him. This wasn't through any great feat of perspicacity. It was primarily because we didn't even get the opportunity to talk about his latest book, The Alignment Problem. In the first conversation, we talked about Brian's background in poetry and computer science. We talked at length about how he became a writer and the process behind his first book, The Most Human Human. Now in this conversation we go deep on The Alignment Problem. The book's been out for more than a year now, and it's gotten some pretty good coverage. One of my favorite interviews Brian did was with Ezra Klein, which is definitely worth checking out as well. I tried to get at aspects of Brian's work that haven't been covered as much in previous discussions. Overall, the book is about the development of artificial intelligence, and throughout each chapter we see AI become increasingly capable of accomplishing more nuanced tasks—and, importantly, tasks which become increasingly embedded into the fabric of our society. Whereas a lot of my interviews on Cognitive Revolution go deep on an author's backstory, this one is very much focused on content. And make sure to check out Brian's new audiobook—Algorithms to Work By, available via Audible—when it comes out in February! Brian's Books: The Alignment Problem; Algorithms to Live By; The Most Human Human. Books we talked about: Michael Tomasello's Cultural Origins of Human Cognition; Nick Bostrom's Superintelligence (well, by implication… when we mentioned AI safety books that “hit you over the head” with their thesis). Like this episode? Here's another one to check out: As well as my original conversation with Brian: I'd love to know what you thought of this episode! Just reply to this email or send a note directly to my inbox. Feel free to tweet the show @CogRevPod or me @CodyKommers. You can also leave a rating for the show on iTunes (or another platform). This is super helpful, as high ratings are one of the biggest factors platforms look at in their recommender system algorithms. The better the ratings, the more they present the show to new potential listeners. Also: If you'd like to unsubscribe from these weekly podcast emails, you can do so while still remaining on the email list that features my weekly writing. Thanks for following my work! This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit codykommers.substack.com/subscribe
AI & machine learning have been at the center of moral debate for decades now. Scientists and lay people alike know, or assume, that there will come a day when machines will not just figure out how to accomplish goals, but will also figure out what those goals are. That is the central question of our guest's latest book, The Alignment Problem. Brian Christian is a visiting scholar at UC Berkeley working with human compatible AI and the Citrus Foundation. He is also known for co-authoring Algorithms To Live By: The Computer Science Of Human Decisions. How do we make sure that our training dataset is representative? How do we make sure learning systems are learning the right things? And how can we make statistical classifiers more accurate when it comes to using algorithms for issues like probation and parole? These questions and more are all tackled in this episode. Episode Quotes: How can we aim to avoid unconscious bias in datasets? I think there's a lot of work to be done both in terms of thinking about what are the data sets that we're really using, how do they generalize to the populations of people that are really going to be affected by a technology? And also how do we articulate what we are truly trying to get this system to do? If certain mistakes are many orders of magnitude more costly than others, we have to find some way of expressing that or else by default, we're going to get a system that doesn't do anything close to what we want. Issues with statistical classifiers used to think about probation and parole in the US: But what you did see in terms of the difference between black and white defendants was a difference in the actual kinds of errors that it was making. So, if you look specifically at the defendants that were mispredicted, misclassified, you find that black defendants relative to white defendants are about two to one more likely to be misclassified as high-risk. White defendants, the other way around, two to one more likely to have been misclassified as lower risk than they really were. Encouraging research on the future of machine learning: My personal favorite is a paper between OpenAI and DeepMind, where they were able to get a system to learn how to perform a backflip in this virtual world, merely by showing users pairs of video clips of it moving randomly and saying which of these is slightly more like a backflip. And by showing people 900 such video clips and just saying which of these is slightly more like a backflip, the system was able to infer a representation of what a backflip was. That was good enough for it to then learn how to execute these beautiful, gymnastically perfect backflips, sticking the landing, et cetera. And I think that to me frankly, it's about as hopeful as I have felt about this problem space in many years, because I think there is this promise that we can develop a methodology for extracting, somehow, the norms, the preferences that we have in our head. Show Links: Guest Profile: Brian Christian Website; Brian Christian on Twitter; Academic Profile at Simons Institute for the Theory of Computing at UC Berkeley. His work: The Alignment Problem: Machine Learning and Human Values; Algorithms to Live By: The Computer Science of Human Decisions; GameMaker Programming By Example; The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive
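The backflip experiment Brian describes in that last quote is preference-based reward learning, reported in the OpenAI/DeepMind paper "Deep Reinforcement Learning from Human Preferences" (also listed in the Brave New World resources below). As a rough illustration of the core idea only — fitting a reward model to pairwise human comparisons — here is a minimal sketch in Python; the feature vectors, the simulated rater, and every name in it are assumptions made up for this example, not anything from the paper or the episode.

```python
# A toy, illustrative sketch of preference-based reward learning -- the idea
# behind the backflip anecdote. This is NOT the original implementation:
# the features, the simulated rater, and all names here are invented.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

DIM = 4                                    # hypothetical clip-summary features
true_w = np.array([2.0, -1.0, 0.5, 0.0])   # stands in for the rater's hidden notion of "backflip-ness"

def simulate_human_choice(feat_a, feat_b):
    """Return 1 if the simulated rater prefers clip A over clip B."""
    p_a = sigmoid(true_w @ feat_a - true_w @ feat_b)   # Bradley-Terry choice model
    return int(rng.random() < p_a)

# Collect ~900 pairwise comparisons, echoing the number in the quote.
pairs = []
for _ in range(900):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    pairs.append((a, b, simulate_human_choice(a, b)))

# Fit a linear reward model by gradient ascent on the preference log-likelihood.
w = np.zeros(DIM)
learning_rate = 0.5
for _ in range(200):
    grad = np.zeros(DIM)
    for a, b, label in pairs:
        p_a = sigmoid(w @ a - w @ b)       # model's probability that A is preferred
        grad += (label - p_a) * (a - b)    # gradient of the log-likelihood term
    w += learning_rate * grad / len(pairs)

print("learned reward weights:", np.round(w, 2))
# In the full method, this learned reward would be handed to an RL agent,
# which optimizes it to produce the behavior (e.g. the backflip) -- a step
# this sketch leaves out.
```

Even in this toy setting, the point of the quote comes through: from nothing but "which of these two is better" judgments, the learner recovers a usable stand-in for the rater's hidden preferences.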
Google and Facebook have an outrageous amount of market power, and that's dangerous for society. Dina Srinivasan joins Vasant Dhar in episode 22 of Brave New World to discuss the dangers, and how we can tackle them. Useful resources: 1. Dina Srinivasan's homepage and Twitter. 2. The Antitrust Case Against Facebook -- Dina Srinivasan. 3. Why Google Dominates Advertising Markets -- Dina Srinivasan. 4. Aldous Huxley interviewed by Mike Wallace. 5. James Robinson on What Makes a Successful State -- Episode 19 of Brave New World. 6. Flash Boys: A Wall Street Revolt -- Michael Lewis. 7. Bias, Lies, and Democracy -- Episode 14 of Brave New World (w Ali Velshi). 8. Uplift the Unremarkables -- Episode 2 of Brave New World (w Scott Galloway). 9. Can a Machine Have Human Values? -- Episode 13 of Brave New World (w Brian Christian).
How do we come upon our morality? How much of our moral judgement is intuitive and how much is learned? Moral philosopher Peter Railton joins Vasant Dhar in episode 17 of Brave New World to talk about his thinking in this area -- and also how the fields of ethics and AI are colliding. Useful resources: 1. Facts, Values, and Norms: Essays toward a Morality of Consequence -- Peter Railton. 2. Moral Learning: Conceptual foundations and normative relevance -- Peter Railton. 3. Ethical Learning, Natural and Artificial -- Peter Railton. 4. Ethics of Artificial Intelligence -- S Matthew Liao (Editor) 5. Can a Machine Have Human Values? -- Episode 13 of Brave New World (w Brian Christian). 6. Are We Becoming a New Species? -- Episode 12 of Brave New World (w Molly Crockett). 7. Samuel Moyn on Humane War -- Episode 16 of Brave New World. 8. Human-level performance in 3D multiplayer games with population-based reinforcement learning -- Max Jaderberg et al. 9. A multi-agent reinforcement learning model of common-pool resource appropriation -- Julien Perolat et al.
Is our democracy in peril? What role has the media played in this? What should we do about it? Ali Velshi joins Vasant Dhar in episode 14 of Brave New World to discuss the subtleties of bias, fake news, information consumption and what to do about social media. Useful resources: 1. Ali Velshi on Twitter and Instagram. 2. Understanding QE in the New World -- Episode 10 of Brave New World (w Paul Sheard). 3. How Social Media Threatens Society -- Episode 8 of Brave New World (w Jonathan Haidt). 4. Can a Machine Have Human Values? -- Episode 13 of Brave New World (w Brian Christian). 5. The Social Media Industrial Complex -- Episode 3 of Brave New World (w Sinan Aral).
As artificial intelligence gets more and more powerful, the need becomes greater to ensure that machines do the right thing. But what does that even mean? Brian Christian joins Vasant Dhar in episode 13 of Brave New World to discuss, as the title of his new book goes, the alignment problem. Useful resources: 1. Brian Christian's homepage. 2. The Alignment Problem: Machine Learning and Human Values -- Brian Christian. 3. Algorithms to Live By: The Computer Science of Human Decisions -- Brian Christian and Tom Griffiths. 4. The Most Human Human -- Brian Christian. 5. How Social Media Threatens Society -- Episode 8 of Brave New World (w Jonathan Haidt). 6. Are We Becoming a New Species? -- Episode 12 of Brave New World (w Molly Crockett). 7. The Nature of Intelligence -- Episode 7 of Brave New World (w Yann LeCun). 8. Some Moral and Technical Consequences of Automation -- Norbert Wiener. 9. Superintelligence: Paths, Dangers, Strategies -- Nick Bostrom. 10. Human Compatible: AI and the Problem of Control -- Stuart Russell. 11. OpenAI. 12. Center for Human-Compatible AI. 13. Concrete Problems in AI Safety -- Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané. 14. Machine Bias -- Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner. 15. Inherent Trade-Offs in the Fair Determination of Risk Scores -- Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan. 16. Algorithmic Decision Making and the Cost of Fairness -- Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, Aziz Huq. 17. Predictions Put Into Practice -- Jessica Saunders, Priscillia Hunt, John S. Hollywood. 18. An Engine, Not a Camera: How Financial Models Shape Markets -- Donald MacKenzie. 19. An Anthropologist on Mars -- Oliver Sacks. 20. Deep Reinforcement Learning from Human Preferences -- Paul F Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, Dario Amodei for OpenAI & DeepMind.
Social media is changing human behavior. How and why are humans being transformed by algorithms? Molly Crockett joins Vasant Dhar in episode 12 of Brave New World to describe her work at the meeting place of technology and morality. Useful resources: 1. Molly Crockett at Yale, Oxford Neuroscience, Google Scholar and Twitter. 2. Crockett Lab. 3. Moral outrage in the digital age -- MJ Crockett. 4. The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online -- William J Brady, MJ Crockett and Jay J Van Bavel. 5. Inference of trustworthiness from intuitive moral judgments -- Jim AC Everett, David A Pizarro and MJ Crockett. 6. The Social Media Industrial Complex -- Episode 3 of Brave New World (w Sinan Aral). 7. How Social Media Threatens Society -- Episode 8 of Brave New World (w Jonathan Haidt). 8. A computational reward learning account of social media engagement -- Björn Lindström and others. 9. The Alignment Problem -- Brian Christian. 10. You and the Algorithm: It Takes Two to Tango -- Nick Clegg. 11. Moral Learning: Conceptual foundations and normative relevance -- Peter Railton. 12. The social dilemma of autonomous vehicles -- Jean-François Bonnefon, Azim Shariff and Iyad Rahwan. 13. Emotion shapes the diffusion of moralized content in social networks -- William J Brady and others.
TOPICS: The nanny state, the future of artificial intelligence, the essays of Flannery O'Connor, & the philosophy of luck. Host Scot Bertram talks with Paul Rahe, Professor of History & Chair in the Western Heritage at Hillsdale, about the origins of the "nanny state" and attempts to roll it back. Brian Christian, author of THE MOST HUMAN HUMAN, discusses the future of artificial intelligence. On the anniversary of her birth, Flannery O'Connor is featured in a conversation with Hillsdale Professor of English Michael Jordan. And Ian Church, Assistant Professor of Philosophy at Hillsdale, talks about the philosophy of luck and a recent book he edited on the topic. See omnystudio.com/listener for privacy information.