Criminals need only 20 images of one child to produce a deep-fake version of their identity, including video. That's the warning from cyber security experts. Joining Shane to discuss this further was Sam Stockwell, a Research Associate at the Alan Turing Institute in the UK.
NorthStandard's Helen Barden discusses the transformative role of AI in the shipping industry with Professor Adam Sobey from the Alan Turing Institute. Professor Sobey explains the basics of artificial intelligence, machine learning, and data science, emphasising their applications in ship design, manufacturing and operations. The conversation explores AI's potential to optimise vessel efficiency, enhance safety protocols, support the energy transition, and improve regulatory compliance. Additionally, they address industry hesitancy towards AI adoption, workforce implications, and the importance of integrating human learning with AI. Professor Sobey also highlights ongoing initiatives to democratise AI through no-code approaches and the role of educational efforts in preparing future engineers for a data-driven industry.

00:43 Understanding Artificial Intelligence
02:35 AI in Ship Design and Manufacturing
05:00 AI's Role in Emission Detection and Enforcement
07:48 The Future of Autonomous Shipping
10:09 Overcoming Hesitancy and Embracing AI
14:37 Scaling Up AI in the Industry
18:25 Human-Machine Learning in AI
In this episode of the Future Thinking mini-series from NorthStandard, Helen Barden is joined by Kimberly Tam of the Alan Turing Institute and the University of Plymouth to discuss the prevalence of cyber attacks in the maritime industry and the role of AI in defending against these threats. The conversation explores the types of cyber attacks common in the sector, the challenges in detecting and reporting them, and the vulnerabilities at different stages of AI development. They look at how AI can assist human analysts in managing cyber security and the importance of industry-wide communication and training to bolster cyber resilience. The episode also addresses the costs of cyber incidents and the need for improved information sharing across the maritime industry.

00:38 Prevalence and Types of Cyber Attacks in Shipping
02:28 AI's Role in Cyber Defence
04:32 Vulnerabilities of AI to Cyber Attacks
06:02 Cyber Risks in the Maritime Industry
08:17 Cost Impacts of Cyber Attacks
10:14 Improving Cyber Resilience through Information Sharing
15:25 Ensuring Cyber Security in Autonomous Vessels
18:51 Top Tips for Cyber Awareness and Communication
Artificial intelligence is increasingly being touted as a game-changer across various sectors, including public services. But while AI presents significant opportunities for improving efficiency and effectiveness, concerns about fairness, equity, and past failures in public sector IT transformations loom large. And, of course, the idea of tech moguls like Elon Musk wielding immense influence over our daily lives is unsettling for many. So, what are the real opportunities AI offers for public services? What risks need to be managed? And how well are governments—particularly in the UK—rising to the challenge?

In this episode, we dive into these questions with three expert guests who have recently published an article in The Political Quarterly on the subject:

Helen Margetts – Professor of Society and the Internet at the Oxford Internet Institute, University of Oxford, and Director of the Public Policy Programme at The Alan Turing Institute. Previously, she was Director of the School of Public Policy at UCL.
Cosmina Dorobantu – Co-director of the Public Policy Programme at The Alan Turing Institute.
Jonathan Bright – Head of Public Services and AI Safety at The Alan Turing Institute.

Mentioned in this episode:
Margetts, H., Dorobantu, C. and Bright, J. (2024), How to Build Progressive Public Services with Data Science and Artificial Intelligence. The Political Quarterly.

UCL's Department of Political Science and School of Public Policy offers a uniquely stimulating environment for the study of all fields of politics, including international relations, political theory, human rights, public policy-making and administration. The Department is recognised for its world-class research and policy impact, ranking among the top departments in the UK in both the 2021 Research Excellence Framework and the latest Guardian rankings.
Bio

Bala has rich experience in retail technology and process transformation. Most recently, he worked as a Principal Architect for Intelligent Automation, Innovation & Supply Chain in a global Fortune 100 retail corporation. Currently he works for a luxury brand as Principal Architect for Intelligent Automation, providing technology advice on the responsible use of technology (Low Code, RPA, Chatbots, and AI). He is passionate about technology and spends his free time reading, writing technical blogs and co-chairing a special interest group with The OR Society.

Interview Highlights

02:00 Mentors and peers
04:00 Community bus
07:10 Defining AI
08:20 Contextual awareness
11:45 GenAI
14:30 The human loop
17:30 Natural Language Processing
20:45 Sentiment analysis
24:00 Implementing AI solutions
26:30 Ethics and AI
27:30 Biased algorithms
32:00 EU AI Act
33:00 Responsible use of technology

Connect

Bala Madhusoodhanan on LinkedIn

Books and references

· https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html - NLP
· https://www.theregister.com/2021/05/27/clearview_europe/ - Facial Technology Issue
· https://www.designnews.com/electronics-test/apple-card-most-high-profile-case-ai-bias-yet - Apple Card story
· https://www.ft.com/content/2d6fc319-2165-42fb-8de1-0edf1d765be3 - Data Centre growth
· https://www.technologyreview.com/2024/02/06/1087793/what-babies-can-teach-ai/
· Independent Audit of AI Systems
· Home | The Alan Turing Institute
· Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, Marco Iansiti & Karim R. Lakhani
· AI Superpowers: China, Silicon Valley, and the New World Order, Kai-Fu Lee
· The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, Mike Walsh
· Human + Machine: Reimagining Work in the Age of AI, Paul R. Daugherty & H. James Wilson
· Superintelligence: Paths, Dangers, Strategies, Nick Bostrom
· The Alignment Problem: How Can Artificial Intelligence Learn Human Values, Brian Christian
· Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman
· Wanted: Human-AI Translators: Artificial Intelligence Demystified, Geertrui Mieke De Ketelaere
· The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth, Michio Kaku

Episode Transcript

Intro: Hello and welcome to the Agile Innovation Leaders podcast. I'm Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more – with actionable takeaways for you the listener.

Ula Ojiaku
So I have with me here Bala Madhusoodhanan, who is a principal architect with a global luxury brand, and he looks after their RPA and AI transformation. So it's a pleasure to have you on the Agile Innovation Leaders podcast, Bala, thank you for making the time.

Bala Madhusoodhanan
It's a pleasure to have a conversation with the podcast and the podcast audience, Ula. I follow the podcast and there have been fantastic speakers in the past. So I feel privileged to join you on this conversation.

Ula Ojiaku
Well, the privilege is mine. So could you start off with telling us about yourself, Bala? What have been the key points or the highlights of your life that have led to you being the Bala we know now?

Bala Madhusoodhanan
It's putting self into uncharted territory.
So my background is mechanical engineering, and when I got the job, it was either you go into the mechanical engineering manufacturing side or the software side, which was slightly booming at that point of time, and obviously it was paying more, so I decided to take the software route, but eventually somewhere the path kind of overlapped. So from a mainframe background, I started working on supply chain, and then came back to optimisation, tied back to the manufacturing industry. Somewhere there is an overlap, but yeah, that was the first decision that probably got me here. The second decision was to work in a UK geography, rather than a US geography, which again seemed very strange to a lot of my peers. They generally go to Silicon Valley or the East Coast, but I just took a choice to stay here for personal reasons. And then the third was the mindset. I mean, over the last 15, 20 years, I had really good mentors, really good peers, so I always had their help to soundboard my crazy ideas, and I always try to keep a relationship ongoing.

Ula Ojiaku
What I'm hearing is, based on what you said, lots of relationships have been key to getting you to where you are today, both mentors and peers. Could you expand on that? In what way?

Bala Madhusoodhanan
The technology is changing quite a lot, at least in the last 10 years. So if you look into pre-2010, there was no machine learning, or it was statistics. People were just saying everything is statistics, and accessibility to information was not that much, but post 2010, 2011, people started getting accessibility. Then there was a data buzz, big data came in, so there were a lot of opportunities where I could have taken a different career path, but every time I was in a dilemma about which route to take, I had someone with whom I have worked, or who was my team lead or manager, to guide me, to tell me, like, take emotion out of the decision making and think in a calm mind, because you might jump into something and you might like it, you might not like it, you should not regret it. So again, over the course of so many such decisions, my cognitive mind has also started thinking about it. So those conversations really help. And again, collective experience. If you look into the decision making, it's not just my decision, I'm going through conversations that I had with people where they have applied their experience, so it's not just me or just not one situation, and to understand the why behind that, and that actually helps. In short, it's like a collection of conversations that I had with peers. A few of them are visionary leaders, they are good readers. So they always had a good insight on where I should focus, where I shouldn't focus, and of late recently, there has been a community bus. So a lot of things are moving to open source, there is a lot of community exchange of conversation, the blogging has picked up a lot. So, connecting to those parts also gives you a different dimension to think about.

Ula Ojiaku
So you said community bus. Some of the listeners or people who are watching the video might not understand what you mean by the community bus. Are you talking about meetups or communities that come around to discuss shared interests?

Bala Madhusoodhanan
If you are very much specifically interested in AI, or you are specifically interested in power platform or a low code platform, there are a lot of content creators on those topics. You can go to YouTube, LinkedIn, and you get a lot of information about what's happening.
They do a lot of hackathons; again, you need to invest time in all these things. If you don't, then you are basically missing the boat, but there are various channels like hackathons or meetup groups, or, I mean, it could be a virtual conversation like you and me: we both have some passionate topics, that's why we resonate and we are talking about it. So it's all about you taking an initiative, you finding time for it, and then you have tons and tons of information available through community or through conferences or through meetup groups.

Ula Ojiaku
Thanks for clarifying. So, you said as well, you had a collection of conversations that helped you whenever you were at a crossroads: some new technology or something emerges, or there's a decision you had to make, and checking in with your mentors, your peers, your personal Board of Directors almost, they give you guidance. Now, looking back, would you say there were some turns you took that, knowing what you know now, you would have done differently?

Bala Madhusoodhanan
I would have liked to study more. That is the only thing, because sometimes an educational degree, even without practical knowledge, has a bigger advantage in certain conversations; otherwise your experience and your content should speak for you, and it takes a little bit of effort and time to get that trust among leaders or peers, even for them to trust saying, okay, this person knows what he's talking about, I should probably trust him, rather than someone who has done a PhD. It's just finding the right balance of when I should have invested time in continuing my education. If I had time, I would have gone back and done everything that I had done, just offset by two years earlier. It would have given me different pathways. That is what I would think, but again, it's all constraints. I did the best at that point in time with whatever constraints I had. So I don't have any regret per se, but yeah, if there is a magic wand, I would do that.

Ula Ojiaku
So you are a LinkedIn Top Voice for AI. How would you define AI, artificial intelligence?

Bala Madhusoodhanan
I am a bit reluctant to use the term Artificial Intelligence. In my mind, it is Artificial Narrow Intelligence, which is slightly different. So let me start with a building block, which is machine learning. So machine learning is like a data labeller. You go to a Tesco store, you read the label, you know it is a can of soup because you have read the label, but your brain is not only processing that image, it understands the surroundings. It does a lot of things when you pick that can of soup. You can't expect that by just feeding one model to a robot. So that's why I'm saying AI is a bit over-glorified in my mind. It is artificial narrow intelligence. What you do to automate certain specific tasks using a data set which is legal, ethical, and drives business value is what I would call machine learning, but yeah, AI is just an overhyped and heavily utilised term.

Ula Ojiaku
You said there's a hype around artificial intelligence. So what do you mean by that? And where do you see it going?

Bala Madhusoodhanan
Going back to the machine learning definition that I said, it's basically predicting an output based on some input. That's as simple as what we would say machine learning is. The word algorithm is basically something like a pattern finder.
What you're doing is you are giving it a lot of data, which is properly labelled, which has proper diversity of information, and there are multiple algorithms that can find patterns. The cleverness or engineering mind that you bring in is to select which pattern or which algorithm you would like to use for your use case. Now you're channelling the whole machine learning into one use case. That's why I'm going with the term narrow intelligence. Computers can do brilliant jobs. So you ask a computer to do something like Rubik's cube solving, it will do it very quickly, because the task is very simple and it is just doing a lot of calculation. You give a Rubik's cube to a kid, it has to apply itself. The brain is not trained enough, so it has to cognitively learn; maybe it will be faster. So anything which is just pure calculation, pure computing, if the data is labelled properly and you want to predict an outcome, yes, you can use computers. One of the interesting videos that I showed in one of my previous talks was a robot trying to walk across the street. This is in 2018 or 19. The first video was basically showing a robot crossing a street, with vehicles coming across, and the robot just had a headbutt and it just fell off. Then a four-year-old kid was asked to walk across, and the kid knew, I have to press the button at the red signal. So it went to the signal and stopped. It knew, or the kid knew, that I can only walk when it is green. And then it looks around and then walks, so you can see the difference: a four-year-old kid has a contextual awareness of what is happening, whereas the robot, which is supposed to be called artificial intelligence, couldn't see that. So again, if you look, our human brains have evolved over millions of years. There are like 10 billion neurons or something, and it is highly optimised. So when I sleep, there is a different set of neurons which are running. When I speak to you, my eyes and ears are running, my motion sensor neurons are running, but these are all highly optimised. So the mother control knows how much energy should be spent on which neuron, right, whereas with all these large language models, there is only one task. You ask it, it's just going to do that. It doesn't have that intelligence to optimise. When I sleep, maybe 90 percent of my neurons are sleeping, getting recharged. Only the dream neurons are working. Whereas once you put a model live, it doesn't matter, all the hundred thousand neurons would run. So, yeah, it's in a very infancy state. Maybe with quantum computing, maybe with more power and better chips things might change, but I don't see that happening in the next five to 10 years.

Ula Ojiaku
Now, what do you say about Gen AI? Would you also classify generative AI as purely artificial narrow intelligence?

Bala Madhusoodhanan
The thing with generative AI is you're trying to generalise a lot of use cases, say ChatGPT. You can throw in a PDF, you can ask something, or you can say, hey, can you create content for my blog, or things like that, right? Again, all it is trying to do is, it has some historical content with which it is trying to come up with a response. So the thing that I would say is, humans are really good with creativity. If a problem is thrown at a person, he will find creative ways to solve it. The tool with which we are going to solve it might be a GenAI tool, I don't know, because I don't know the problem, but because GenAI is in a hype cycle, every problem doesn't need GenAI. That's my view.
So there was an interesting research study which was done by someone at Montreal University. It talks about 10 of the basic tasks, like converting text to text or text to speech, first with a generative AI model or multiple models, because you have a lot of vendors providing different GenAI models, and then with task-specific models. And the thing that they found was the task-specific models were cheap to run, very, very scalable and robust, and highly accurate, right? Whereas with GenAI, when you try to use it and it goes into production-ready or enterprise-ready use, and if it is used by customers or third parties which are not part of your ecosystem, you are putting yourself in some kind of risk category. There could be a risk of copyright issues. There could be a risk of IP issues. There could be a risk of not getting the right consent from someone. I can say, can you create an image of a podcaster named Ula? You never know, because you don't remember that one of your photos on Google or Twitter or somewhere is not set as private. No one has come and asked you saying, I'm using this image. And yeah, it's finding the right balance. So even before taking on the technology, I think people should think about what problem they are trying to solve. In my mind, AI, or artificial intelligence, or narrow intelligence, can have two buckets, right? The first bucket is to do with how can I optimise the existing process. There are a lot of things that I'm doing; is there a better way to do it? Is there an efficient way to do it? Can I save time? Can I save money? Stuff like that. So that is an optimisation or driving efficiency lever. The other one could be: I know what to do, I have a lot of data, but I don't have the infrastructure or people to do it, like workforce augmentation. Say I have 10 data entry persons who are graduate level. Their only job is to review the receipts or invoices. I work in FCA. I have to manually look at it, approve it, and file it, right? Now it is a very tedious job. So all you are doing is you are augmenting the whole process with an OCR engine. OCR is Optical Character Recognition. So there are models, and again, it's a beautiful term for what our eyes do. When we travel somewhere, we get an invoice, we exactly know where to look, right? What is the total amount? What is the currency I have paid? Have they taken the correct credit card? Is my address right? All those things, unconsciously, your brain does. Whereas these models, given by different software vendors, have been trained to capture these specific entities which are universal; you just pass the image to them, and they pick and map that information. Someone else will do that job. But as part of your process design, what you would do is: the model will do the heavy lifting of identifying the data points, and I'll give it to someone, because I want someone to validate it. It's human at the end; someone is approving it. So they basically put a human in the loop, a human-centric design, into a problem-solving situation. That's your efficiency lever, right? Then you have something called the innovation lever: I need to do something radical, I have not done this product or service. Yeah, that's a space where you can use AI, again, to do small proofs of concept. One example could be: I'm opening a new store, it's in a new country, I don't know how the store layout should look. These are my products, this is the store square footage; can you recommend the best way so that I can sell through a lot?
Now, a visual merchandising team will have some ideas on where things should be; they might give that prompt. That text can be converted into an image. Once you get the base image, then it's human, it's us. So it will be a starting point rather than someone implementing everything. It could be a starting point. But can you trust it? I don't know.

Ula Ojiaku
And that's why you said the importance of having a human in the loop.

Bala Madhusoodhanan
Yeah. So the human in the loop, again, it's because we humans bring contextual awareness to the situation, which the machine doesn't know. So I'll tie this back to NLP. So Natural Language Processing has two components: you have natural language understanding, and then you have natural language generation. When you create a machine learning model, all it is doing is understanding the structure of language. It's called form. I'm giving you 10,000 PDFs, or you're reading a Harry Potter book. There is a difference between you reading a Harry Potter book and the machine interpreting that Harry Potter book. You would have imagination. You will have context: oh, in the last chapter we were in a hilly region or in a valley, I think it will be like this; with words like mist, cold, wood, you have already started forming images and visualising stuff. The machine doesn't do that. The machine works on: this is the word, this is a pronoun, this is the noun, this is the structure of language, so the next one should be this, right? So, coming back to natural language understanding, that is where the context and the form come into play. Just think of some alphabets put in front of you. You have no idea, but these are the alphabets. You recognise A, you recognise B, you recognise the word, but you don't understand the context. One example is: I'm swimming against the current. Now, current here is the motion of water, right? My current code base is version 01. I'm using the same word current, right? The context is different. So interpreting the structure of language is one thing; in natural language understanding, what we try to do is understand the context. NLG, Natural Language Generation, is basically how I can respond in a way where I'm giving you an answer to your query. And this combined is NLP. It's a big field. There was research done by Professor Emily Bender, one of the leading professors in the NLP space. The experiment was very funny. It was about a parrot on an island talking to someone, and there was a shark in between, or some sea creature, which basically broke the connection and was listening to what this person was saying and mimicking. Again, this is the problem with NLP, right? You don't have understanding of the context. You don't put empathy into it. You don't understand the voice modulation. Like when I'm talking to you, you can judge what my emotional cues are, you can put empathy in, you can tailor the conversation. If I'm feeling sad, you can put a different spin on it, whereas if I'm chatting to a robot, it's just going to give a standard response. So again, you have to be very careful about which situation you're going to use it in, whether it is for a small team, whether it is going to be public, stuff like that.

Ula Ojiaku
So that's interesting, because sometimes I join the Masters of Scale strategy sessions, and at the last one there was someone whose startup was featured, and apparently what their startup is doing is building AI solutions that are able to do sentiment analysis.
And I think some of these are, again, in their early stages, but some of these things are already available to try to understand the tone of voice, the words they say, and match it with maybe the expression, and can actually transcribe virtual meetings and say, okay, this person said this, they looked perplexed or they looked slightly happy. So what do you think about that? I understand you're saying that machines can't do that, but it seems like there are already organisations trying to push the envelope in that direction.

Bala Madhusoodhanan
So the example that you gave, sentiment of the conversation, again, it is going by the structure or the words that I'm using. I am feeling good. So good, here, is positive sentiment. Again, for me the capability is slightly overhyped. The reason being, it might do 20 percent or 30 percent of what a human might do, but the human is any day better for that particular use case, right? So sentiment analysis typically works on a sentiment data set, which would say, these are certain proverbs, these are certain types of words, these generally refer to a positive sentiment or a good sentiment or feel-good factor, but the model is only as good as the data is, right? And no one is going in and constantly updating that dictionary. No one is thinking about it; like, Gen Z have a different lingo, millennials had a different lingo. So, again, you have to treat it use case by use case, Ula.

Ula Ojiaku
At the end of the day, the way things currently are is that machines aren't at the place where they are as good as humans. Humans are still good at doing what humans do, and that's the key thing.

Bala Madhusoodhanan
An interesting use case that I read about, probably after COVID, was immersive reading, for people with dyslexia. So again, AI is used for good as well; I'm not saying it is completely bad. So AI is used for good, like teaching kids who are dyslexic, right? Speech to text can talk, or can translate a paragraph, the kid can hear it, and on the screen (I think OneNote has an immersive reader) it actually highlights which word it is uttering into the ears, and a research study showed that kids who were part of the study group with this immersive reading audio textbook had a better grasp of the context, they performed well, and they were able to manage dyslexia better. Now, again, we are using the technology, and kudos to the research team: they identified a real problem, they formulated how the problem could be solved, and they were successful. So, again, technology is being used for good. Cancer research: they invest heavily in image clustering, brain tumours. I mean, there are a lot of use cases where it's used for good, but then again, when you're using it, you just need to think about biases. You need to understand the risk. I mean, everything is risk and reward. If your reward is outweighing the minimal risk that you're taking, then it's acceptable.

Ula Ojiaku
What would you advise leaders of organisations who are considering implementing AI solutions? What are the things they need to consider?

Bala Madhusoodhanan
Okay. So going back to business strategy and growth, that is something that the enterprises or big organisations would have in mind. Always have your AI goals aligned to what they want. So as I said, there are two buckets. One is your efficiency driver, the operational efficiency bucket. The other one is your innovation bucket. Just have a sense check of where the business wants to invest.
Just because AI is there doesn't mean you have to use it, right? Look into opportunities where you can drive more value. So that would be my first line of thought. The second would be more to do with educating leaders about AI literacy: what each model is, what they do, what the pitfalls are, the ethical awareness about use of AI; data privacy is big. So again, that education is just high level, with some examples from the same business domain where it has been successful, where it has been not so successful, and what challenges were faced. That's something that I would urge everyone to invest time in. I think I did mention security. Again, over the years, the practice has been that security is always kept for last. So again, I was fortunate enough to work in organisations where a security-first mindset was put in place, because once you have a proof of value, once you show that to people, people get excited, and it's about messaging it and making sure it is very secure, protecting the end users. So the third one would be having security-first design policies or principles. Machine learning or AI is of no good if your data quality is not there. So having a data strategy is something that I would definitely recommend. Start small. I mean, just like agile, you take a value, you start small, you realise whether your hypothesis was correct or not, you monitor how you performed, and then you think about scale. Just a hello world doesn't mean that you have mastered it. So have that mindset: start small, monitor, have constant feedback, and then think about scaling.

Ula Ojiaku
What are the key things about ethics and AI that you think leaders should be aware of at this point in time?

Bala Madhusoodhanan
So again, ethics is very subjective. So it's about having different stakeholders give their honest opinion of whether your solution is the right thing to do against the values of the enterprise. And it's not your view or my view, it's a consensus view, and for certain things where people are involved, you might need to get HR, you might need to get legal, you might need to get the brand reputation team to come and assist you, because you don't understand the why behind certain policies being put in place. So one is: is the solution, is the AI, ethical against the core values of the enterprise? That's the first sense check that you need to do. If you pass that sense check, then come a lot of other threats, I would say, like: did the model that I'm using have a fair representation of all data sets? There's a classic case study of a big cloud computing giant using an AI algorithm to filter resumes, and they had to stop it immediately because the data set was all Ivy League, male, white dominated; it didn't have the right representation. Over the 10 years, if I'm just hiring a certain type of people, my data is inherently biased; no matter how good my algorithm is, I don't have the right data set. The other example is Clearview AI. They got into trouble for using very biased data to give an outcome on some decision making in immigration, which has bigger ramifications. Then you talk about fairness, whether the AI system is fair in giving you an output. So there was a story about a man and a woman in California living together, and I think the woman wasn't provided a credit card, even though everything, the postcode, is the same, both of them work in the same company, and I think it has to do with Apple Pay.
Apple Pay wanted to bring in a silver credit card, the Apple Card or whatever it is, but then it is so unfair that the woman, who was equally qualified, was not given the right credit limit, and the bank clearly said the algorithm said so. Then you have privacy concerns, right? So all these generic models that you have available, even ChatGPT for that matter. Now you can chat with ChatGPT multiple times. You can talk about someone like Trevor Noah and you can say, hey, can you create a joke? Now it has been trained with the jokes that he has done, which might be available publicly. But has the creator of the model got consent, saying, hey Trevor, I'm going to use your content so that I can give better responses? And how many such consents? Even Wikipedia: if you look into Wikipedia, about 80 percent of the information is public, but it is not diversified. What I mean by that is, you can search for a lot of information if the person is from America or from the UK or from Europe, maybe from India to some extent, but what is the quality of the data if you think about countries in Africa? What do you think about South America? I mean, it is not representing the total diversity of data, and we have these large language models which have just been trained on that data, right? So there is a bias, and because of that bias, your outcome might not be fair. So these two are the main things, and of course the privacy concern. So if someone goes and says, hey, you have used my data, you didn't even ask me, then you're into a lawsuit. Without getting proper consent. Again, it's a bad world, it's very fast moving, and people, including me, don't even read every term and condition; I just scroll down, tick, confirm. But those things are where I think education should come into play. Think about it, because people don't understand what could go wrong, not to them, but to someone like them. Then there is a big fear of job displacement: if I put this AI system in, what will I do with my workforce? Say I had ten people. You need to think about it, you need to reimagine your workplace. These are the ten jobs my ten people are doing. If I augment six of those jobs, how can I use my ten resources effectively to do something different? That piece of the puzzle always, again, goes back to the core values of the company, what they think about their people, how everything ties back, but it just needs a lot of input from multiple stakeholders.

Ula Ojiaku
It ties back to the enterprise strategy and the values, but with technology as it has evolved over the years, things will be made obsolete, but there are new opportunities that are created. So, moving from when people travelled with horses and buggies to when the automobile came up: yes, there wasn't as much demand for horseshoes and horses and buggies, but there was a new industry, the people who would be mechanics, or garages, and things like that. So I think it's really about that. Like, going back to what you're saying, how can you redeploy people? And that might involve, again, training, reskilling, and investing in education of the workforce so that they're able to harness AI and to do those creative things that you've emphasised over this conversation about human beings: that creative aspect, that ability to understand context and nuance and apply it to the situation.

Bala Madhusoodhanan
So I was fortunate to work with ForHumanity, an NGO which is basically trying to certify people to audit AI systems.
So the EU AI Act is now in place, and it will be enforced soon. So you need people to have controls on all these AI systems; it's done to protect people, it's done to protect the enterprise. So I was fortunate enough to be part of that community. I'm still working closely with the Operational Research Society. Again, you should be passionate enough, you should find time to do it, and if you do it, then the universe will find a way to give you something interesting to work with. The OR Society, The Alan Turing Institute, the ForHumanity society, and a few ICO workshops, which were quite interesting, because when you hear perspectives from people from different facets of life, like lawyers and solicitors, you would think, ah, this statement, I wouldn't have interpreted it in this way. It was a good learning experience, and I'm sure if I have time, I will continue to do that and invest time in ethical AI. As technology, it's not only AI, it's the ethical use of technology, so sustainability is also part of the ethical bucket if you look into it. So there was an interesting paper which talks about how many data centres have been opened between 2018 and 2024, which is like six years, and the power consumption has gone from X to two or three times X; so we have opened a lot. We have already caused damage to the environment with all this technology, and just because the technology is there, it doesn't mean you have to use it. But again, it's that educational bit: what is the right thing to do? And even ESG awareness, people are not aware of. Like now, if you go to the current TikTok trenders, they know, I need to look into certified B Corp when I am buying something. The reason is because they know, and they're more passionate about saving the world. Maybe we are not, I don't know, but again, once you start educating and telling those stories, humans are really good, so you will have a change of heart.

Ula Ojiaku
What I'm hearing you say is that education is key to helping us make informed choices. There is a time and place where you would need to use AI, but not everything requires it, and if we're more thoughtful in how we approach these, because these are tools at the end of the day, then we can at least try to be more balanced in weighing the risks against the opportunities, and in the impact these decisions and the tools that we choose to use have on the environment. Now, what books have you found yourself recommending most to people, and why?

Bala Madhusoodhanan
Because we have been talking about AI: AI Superpowers is one book, which was written by Kai-Fu Lee. There is this book by Brian Christian, The Alignment Problem: Machine Learning and Human Values. It was basically talking about: what are the human values? Where do you want to use machine learning? How do you basically come up with decision making? That's a really interesting read. Then there is a book called Ethical Machines by Reid Blackman. It talks about all the ethical facets of AI, like bias, fairness, data privacy, transparency, explainability, and he gives quite detailed examples and walkthroughs of what that means. Another interesting book was Wanted: Human-AI Translators: Artificial Intelligence Demystified by a Dutch professor; again, a really, really lovely narration of what algorithms are, what AI is, and all you should think about, what controls, and stuff like that. So that is an interesting book.
Harvard Professor Karim Lakhani wrote something called Competing in the Age of AI; that's a good book. The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You by Mike Walsh is another good book, which I finished a couple of months back.

Ula Ojiaku
And if the audience wants to find you, how can they reach out to you?

Bala Madhusoodhanan
They can always reach out to me on LinkedIn; I would be happy to touch base through LinkedIn.

Ula Ojiaku
Awesome. And do you have any final words and/or an ask of the audience?

Bala Madhusoodhanan
The final word is, again, responsible use of technology. Think about not just the use case, think about the environmental impact, think about the future generations, because I think the damage is already done. So, at least not in this lifetime, but maybe three or four lifetimes down the line, it might not be the beautiful earth that we have.

Ula Ojiaku
It's been a pleasure, as always, speaking with you, Bala, and thank you so much for sharing your insights and wisdom, and thank you for being a guest on the Agile Innovation Leaders Podcast.

Bala Madhusoodhanan
Thank you, lovely conversation, and yeah, looking forward to connecting with more like-minded LinkedIn colleagues.

Ula Ojiaku
That's all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or your favourite podcast provider. Also share with friends and do leave a review on iTunes. This would help others find this show. I'd also love to hear from you, so please drop me an email at ula@agileinnovationleaders.com. Take care and God bless!
From the shockwaves caused by DeepSeek's launch, to fears of a new AI arms race, and the continued questions over the technology's energy use, AI continues to throw up new challenges. As world leaders gather for the Paris AI summit, the Guardian's global technology editor, Dan Milmo, joins Madeleine Finlay to discuss what will be top of the agenda. And young people attending the Alan Turing Institute's Children's AI summit explain what their hopes and fears for the technology are. Help support our independent journalism at theguardian.com/sciencepod
Computer Scientist Neil Lawrence shares his insights on what machine intelligence can teach us about being human, the risks of relying on technologies that prioritise efficiency and scalability over ethics, and the hubris of efforts to extend or upload human consciousness using AI. Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge. He has been working on machine learning models for over 20 years. He recently returned to academia after three years as Director of Machine Learning at Amazon. His main interest is the interaction of machine learning with the physical world. This interest was triggered by deploying machine learning in the African context, where 'end-to-end' solutions are normally required. This has inspired new research directions at the interface of machine learning and systems research; this work is funded by a Senior AI Fellowship from the Alan Turing Institute. Neil is also a visiting Professor at the University of Sheffield and the co-host of Talking Machines.

ABOUT THE HOST

Luke Robert Mason is a British-born futures theorist who is passionate about engaging the public with emerging scientific theories and technological developments. He hosts documentaries for Futurism, and has contributed to BBC Radio, BBC One, The Guardian, Discovery Channel, VICE Motherboard and Wired Magazine.

CREDITS

In Partnership with the Dubai Future Foundation
Producer & Host: Luke Robert Mason

Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast
Follow Luke Robert Mason on Twitter at @LukeRobertMason
Subscribe & Support the Podcast at http://futurespodcast.net
The robots are taking our jobs, we're going to be a nation run by chatbots, and an all-seeing, all-powerful computer is set to know everything about all of us. Well, not really. But Keir Starmer says AI is poised to generate “incredible change in our country”. So what exactly is he talking about? Helen Margetts, professor of Society and the Internet at the University of Oxford and director of the Alan Turing Institute's public policy programme, joins Hannah and Paul to explore what AI could mean for the private sector, for the public sector, and whether there are good reasons to be worried about what the future might hold... Learn more about your ad choices. Visit podcastchoices.com/adchoices
The prediction, in het Parool and other outlets, that AI would play a major role in manipulating elections in 2024 turned out to be wrong. Research by the Alan Turing Institute shows that only 6% of disinformation around the American elections was AI-generated, half of it satirical. For the European elections, only a handful of viral AI posts were found. The fear of AI manipulation proved greatly exaggerated; as the Financial Times put it: “We were deepfaked by the fear of deepfakes.”
Are people justified in their resistance to artificial intelligence, or should they accept its power to create progress in the workplace? Equally, are organisations under too much pressure to act quickly in the face of evolving AI capability? In this special recording of the HR People Pod, CIPD Director of Profession David D'Souza is joined by Professor Michael Wooldridge, director for AI at the Alan Turing Institute and professor of AI at the University of Oxford, who suggests, “all AI programmes are glorified pocket calculators”. Addressing that interesting viewpoint and more, we unpack some of the burning questions around AI use in the workplace and within the profession. This episode was recorded live at the CIPD Annual Conference and Exhibition. Recorded: 6 November 2024
Ed and David chat with Professor Ibrahim Habli, Research Director at the Centre for Assuring Autonomy at the University of York and director of the UKRI Centre for Doctoral Training in Safe AI Systems. The conversation covers defining and contextualising AI safety and risk, given the existence of established safety practices in other industries. Ibrahim has collaborated with The Alan Turing Institute on the "Trustworthy and Ethical Assurance platform", or "TEA" for short, an open-source tool for developing and communicating structured assurance arguments to show how data science and AI tech adheres to ethical principles.
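For listeners unfamiliar with assurance cases, a structured assurance argument can be pictured as a tree that links a top-level claim to sub-claims and, ultimately, to concrete evidence. The sketch below is a generic illustration of that shape in Python; it assumes nothing about TEA's actual schema, and every class and field name here is invented for illustration.

```python
from dataclasses import dataclass, field

# Generic goal-structured assurance argument: a top-level claim is
# decomposed into sub-claims, each of which must ultimately rest on
# evidence. Illustrative only; this is not the TEA platform's data model.

@dataclass
class Evidence:
    description: str  # e.g. a test report, audit log, or dataset card

@dataclass
class Claim:
    statement: str
    evidence: list[Evidence] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim holds if it cites evidence or all its sub-claims hold."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)

# Example: arguing that a model's outputs are fair.
argument = Claim(
    statement="The deployed model treats demographic groups equitably",
    subclaims=[
        Claim("Training data was audited for representation",
              evidence=[Evidence("Dataset audit report, v1.2")]),
        Claim("Outcome parity was measured across groups",
              evidence=[Evidence("Fairness evaluation notebook")]),
    ],
)
print(argument.supported())  # True: every leaf claim carries evidence
```

The value of this structure is that a reviewer can walk the tree and see exactly which piece of evidence is meant to support each ethical claim.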
Guest: Batool Almarzouq

Panelists: Richard Littauer | Amanda Casari

Show Notes

In this episode, hosts Richard Littauer and Amanda Casari are joined by Batool Almarzouq, Honorary Research Fellow at the University of Liverpool and Research Project Manager at the Alan Turing Institute. They discuss Batool's work in open science, including her involvement in the Open Science Community Saudi Arabia, localization efforts, and the challenges of connecting global and local open science initiatives. The conversation covers Batool's efforts to make research more accessible and open in the Arab region, the concept of localization vs. translation, her experiences with translation management systems, and the importance of community and mentorship in advancing open science. Batool shares insights from her collaborations with various groups and the influence of Latin American communities on her work. Press download now to hear more!

[00:02:11] Batool explains her roles at various institutions and how she promotes open science in Saudi Arabia and globally.
[00:03:31] Batool discusses the difficulties Arab researchers face in engaging with open science, including language barriers and the Western focus of many initiatives.
[00:04:50] Amanda asks about the vision for open science in Saudi Arabia, and Batool talks about open science values in the Arab world and the cultural significance of knowledge sharing pre-colonization.
[00:07:56] Batool talks about localization efforts and bridging the gap between Western and Arab scientific norms.
[00:11:04] There's a discussion on how Batool connects researchers and community leaders in Arab countries, the grassroots nature of the Open Science Community Saudi Arabia, and the importance of local engagement.
[00:14:20] Batool details the technical tools used for localization, challenges with translating right-to-left languages, and the importance of building open source tools for internationalization.
[00:20:20] There's a conversation on the difficulties in securing funding for localization efforts and the importance of empowering local communities to take charge of their own knowledge production.
[00:23:43] Batool shares insights on working with Latin American communities, shared challenges in open science, and the importance of community-led initiatives.
[00:25:33] We hear Batool's thoughts on the importance of mentorship, community, and collective action in creating meaningful change.
[00:27:51] Find out where you can follow Batool and her work online.

Quotes

[00:06:56] “One of the things is that science used to be more transdisciplinary.”
[00:11:18] “We have our own full-time jobs, there's no system that we use in place recording or creating things. It's more about connecting people and creating that space for this discussion to grow.”
[00:25:51] “There's two places I get a lot of value from outside of academia: engaging with community practice and finding mentors.”
[00:27:17] “Finding people who relate to you, relate to your ideas, and also help you articulate them better and see what other people are trying to do gives you a lot of power.”

Spotlight

[00:28:27] Amanda's spotlight is PyLadies.
[00:29:22] Richard's spotlight is American Atheists.
[00:30:14] Batool's spotlight is Alycia Crall, Richie Moluno and Goodnews Sandy.
Links

SustainOSS (https://sustainoss.org/)
podcast@sustainoss.org (mailto:podcast@sustainoss.org)
richard@sustainoss.org (mailto:richard@sustainoss.org)
SustainOSS Discourse (https://discourse.sustainoss.org/)
SustainOSS Mastodon (https://mastodon.social/tags/sustainoss)
Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss)
Richard Littauer Socials (https://www.burntfen.com/2023-05-30/socials)
Amanda Casari Linktree (https://linktr.ee/amcasari)
Batool Almarzouq LinkedIn (https://www.linkedin.com/in/batool-almarzouq-093366a1/)
Batool Almarzouq Website (https://batool-almarzouq.netlify.app/)
The Alan Turing Institute (https://www.turing.ac.uk/)
The Turing Way (https://the-turing-way.netlify.app/index.html)
Open Science Community Saudi Arabia (https://osc-ksa.com/)
Open Science Community Saudi Arabia-Zenodo (https://zenodo.org/communities/1231231664/records?q=&l=list&p=1&s=10&sort=newest)
Ramsey Nasser-GitHub (https://github.com/nasser/)
Translation management system (https://en.wikipedia.org/wiki/Translation_management_system)
Crowdin (https://crowdin.com/)
JSQuarto (https://github.com/Open-Science-Community-Saudi-Arabia/JSquarto)
PyLadies (https://pyladies.com/)
American Atheists (https://www.atheists.org/)
Alycia Crall (https://carpentries.org/blog/2021/07/introducing-the-carpentries-director-of-community/)
Richie Moluno (https://realrichi3.github.io/)
Goodnews Sandy (https://goodnewssandy.netlify.app/)

Credits

Produced by Richard Littauer (https://www.burntfen.com/)
Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/)
Show notes by DeAnn Bahr, Peachtree Sound (https://www.peachtreesound.com/)

Special Guest: Batool Almarzouq.
Sinan Aral discusses his book "The Hype Machine," and how social media disrupts our elections, our economy, and our health. Sinan is a Professor of Management at MIT and Director of the MIT Initiative on the Digital Economy. He currently serves on the Advisory Boards of the Alan Turing Institute (the British National Institute for Data Science) in London and the Centre for Responsible Media Technology and Innovation in Bergen, Norway.

Host: Kevin Craine

Do you want to be a guest?
Today, we're talking about what some would characterize as the “topic” of the 21st century (others might perhaps not!): Digital Twins. Are we excited about them? Are our eyes so rolled that they'll never recover? Or is this excited-vs-eyeroll dichotomy not very helpful at all? Above all, what even are Digital Twins?

Episode notes

- Here is the Commentary Rachel mentions (free access): Malleson, N., Franklin, R., Arribas-Bel, D., Cheng, T., & Birkin, M. (2024). Digital twins on trial: Can they actually solve wicked societal problems and change the world for better? Environment and Planning B: Urban Analytics and City Science, 51(6), 1181-1186. https://doi.org/10.1177/23998083241262893
- The DT project for a bridge in Amsterdam, by The Alan Turing Institute: https://www.turing.ac.uk/about-us/impact/bridging-gap-between-physical-and-digital
- The AI for continuous environmental monitoring project mentioned by Levi: https://waterinnovation.challenges.org/winners/ai-algal-monitoring/
- The larger-scale aquatic “digital twin” project at Turing, mentioned by Levi: https://www.turing.ac.uk/research/research-projects/probabilistic-programming-aquatic-ecosystem-models
Is AGI just a really dumb idea? Is the concept essentially meaningless? And are we entirely barking up the wrong tree? In this episode of TechFirst, host John Koetsier interviews Neil Lawrence, the DeepMind Professor of Machine Learning at the University of Cambridge and Senior AI Fellow at the Alan Turing Institute, about his latest book 'The Atomic Human: What Makes Us Unique in the Age of AI.' Lawrence explores the complexities of human intelligence, the misconceptions around artificial general intelligence (AGI), and the implications of large language models (LLMs) like ChatGPT. He also discusses the limitations and strengths of human decision-making, the potential risks of AI, and the importance of preserving human culture and diversity in intelligence. The conversation delves into the role of AI in enhancing human capabilities, the challenges in deploying AI in sensitive areas, and the necessity for regulatory interventions to maintain a balanced technological ecosystem.

00:00 Introduction
00:40 Meet Neil Lawrence: AI Expert
01:19 The Atomic Human: Book Insights
02:26 Philosophical Perspectives on AI
04:16 Neil Lawrence's Journey into AI
05:45 Current State of AI and AGI
08:06 The Complexity of Intelligence
18:31 Corporate Culture and Innovation
20:00 The Impact of Large Language Models
23:18 Exploring AI as an Interface
28:53 The Evolution of Internet Search
29:57 Human Flaws as Strengths
31:06 Collective Intelligence and Decision Making
33:16 AI and the River Gods
35:21 The Complexity of Culture and Technology
40:49 Regulating AI and Digital Markets
52:31 The Role of Teachers and Human Connection
55:17 Concluding Thoughts and Optimism
Over the summer we've been incredibly lucky to have been working with Justin Chen, a maths student at the University of Cambridge who is about to start his Masters. Justin has done some great work on how to explain the concept of a mathematical group, and group theory as a whole, to non-mathematicians. In this episode of Maths on the move he tells us how groups are collections of actions, akin to walking around on a field, and why group theory is often called the study of symmetry. He also marvels at the power of abstraction mathematics affords us, tells us about what it was like diving into the world of maths communication, and what his plans are for the future. You can find out more about groups in the following two collections Justin has produced: Groups: The basics and Groups: A whistle-stop tour. You might also want to read Justin's article Explaining AI with the help of philosophy, mentioned at the beginning of the podcast. It is based on an interview with Hana Chockler, a professor at King's College London, conducted at a recent event organised by the Newton Gateway to Mathematics and the Alan Turing Institute. This article was produced as part of our collaboration with the Isaac Newton Institute for Mathematical Sciences (INI) and the Newton Gateway to Mathematics. The INI is an international research centre and our neighbour here on the University of Cambridge's maths campus. The Newton Gateway is the impact initiative of the INI, which engages with users of mathematics. You can find all the content from the collaboration here.
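For a flavour of the idea, the "collection of actions" picture lines up with the textbook definition of a group, sketched here in standard notation rather than anything taken from Justin's collections:

```latex
\textbf{Definition.} A group is a set $G$ with an operation $\circ$ satisfying:
\begin{itemize}
  \item Closure: $a \circ b \in G$ for all $a, b \in G$ (one walk followed by another is still a walk);
  \item Associativity: $(a \circ b) \circ c = a \circ (b \circ c)$;
  \item Identity: there is $e \in G$ with $e \circ a = a \circ e = a$ (standing still);
  \item Inverses: every $a \in G$ has $a^{-1}$ with $a \circ a^{-1} = a^{-1} \circ a = e$ (retracing your steps).
\end{itemize}
```

The symmetries of any object, say the rotations and reflections of a square, satisfy all four conditions, which is why group theory is so often described as the study of symmetry.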
In a special episode recorded live at the British Science Festival, Madeleine Finlay and guests explore the question: will AI make a good companion? AI could give us new ways to tackle difficult problems, from young people's mental health issues to isolation in care homes. It also raises challenging questions about the increasing role of tech in our personal lives. To explore these questions, Madeleine is joined by the Guardian's science editor, Ian Sample; Tony Prescott, a professor of computational robotics at Sheffield University; and Dr Mhairi Aitken, an ethics fellow at the Alan Turing Institute and visiting senior lecturer at Queen Mary University of London. Help support our independent journalism at theguardian.com/sciencepod
Stephen Sackur speaks to leading artificial intelligence researcher Neil Lawrence. He's Professor of Machine Learning at the University of Cambridge and has a Senior AI Fellowship at the Alan Turing Institute. His new book – The Atomic Human – explores the transformational potential of artificial intelligence, while reflecting on the qualities of the human mind that cannot be replicated by even the most sophisticated machines. As more and more aspects of our lives are impacted by the rollout of machine learning, as control of big data and the development of algorithms to exploit it becomes a source of immense power in the 21st century, tech futurists are divided on whether we should embrace AI or fear it. In the end what will matter most isn't the technology but the humans who develop and deploy it. Should we have faith in ourselves to get it right?
We're in the midst of a new technological revolution. Much like the Industrial Revolution, the AI era will challenge all aspects of society. David Leslie, Director of Ethics and Responsible Innovation at the Alan Turing Institute, joins David Rothkopf for a discussion on the ways in which artificial intelligence will challenge the framework of society and how to ensure that AI serves the public good. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
Guests:
Vas Mavroudis, Principal Research Scientist, The Alan Turing Institute
Website | https://mavroud.is/
At BlackHat | https://www.blackhat.com/us-24/briefings/schedule/speakers.html#vasilios-mavroudis-34757
Jamie Gawith, Assistant Professor of Electrical Engineering, University of Bath
On LinkedIn | https://www.linkedin.com/in/jamie-gawith-63560b60/
At BlackHat | https://www.blackhat.com/us-24/briefings/schedule/speakers.html#jamie-gawith-48261
Hosts:
Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/sean-martin
Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
Episode Notes
As Black Hat Conference 2024 approaches, Sean Martin and Marco Ciappelli are gearing up for a conversation about the complexities of deep reinforcement learning and the potential cybersecurity threats posed by backdoors in these systems. They will be joined by Vas Mavroudis from the Alan Turing Institute and Jamie Gawith from the University of Bath, who will be presenting their cutting-edge research at the event.
Setting the Stage: The discussion begins with Sean and Marco sharing their excitement about the upcoming conference. They set a professional and engaging tone, seamlessly leading into the introduction of their guests, Jamie and Vas.
The Core Discussion: Sean introduces the main focus of their upcoming session, titled "Backdoors in Deep Reinforcement Learning Agents." Expressing curiosity and anticipation, he invites Jamie and Vas to share more about their backgrounds and the significance of their work in this area.
Expert Introductions: Jamie Gawith explains his journey from working in power electronics and nuclear fusion to focusing on cybersecurity. His collaboration with Vas arose from a shared interest in using reinforcement learning agents for controlling nuclear fusion reactors. He describes the crucial role these agents play and the potential risks associated with their deployment in critical environments. Vas Mavroudis introduces himself as a principal research scientist at the Alan Turing Institute, leading a team focused on autonomous cyber defense. His work involves developing and securing autonomous agents tasked with defending networks and systems from cyber threats. The conversation highlights the vulnerabilities of these agents to backdoors and the need for robust security measures.
Deep Dive into Reinforcement Learning: Vas offers an overview of reinforcement learning, highlighting its differences from supervised and unsupervised learning. He emphasizes the importance of real-world experiences in training these agents to make optimal decisions through trial and error. The conversation also touches on the use of deep neural networks, which enhance the capabilities of reinforcement learning models but also introduce complexities that can be exploited.
Security Concerns: The discussion then shifts to the security challenges associated with reinforcement learning models. Vas explains the concept of backdoors in machine learning and the unique challenges they present. Unlike traditional software backdoors, these are hidden within the neural network layers, making detection difficult.
Real-World Implications: Jamie discusses the practical implications of these security issues, particularly in high-stakes scenarios like nuclear fusion reactors. He outlines the potential catastrophic consequences of a backdoor-triggered failure, underscoring the importance of securing these models to prevent malicious exploitation.
Looking Ahead: Sean and Marco express their anticipation for the upcoming session, highlighting the collaborative efforts of Vas, Jamie, and their teams in tackling these critical issues. They emphasize the significance of this research and its implications for the future of autonomous systems.
Conclusion: This pre-event conversation sets the stage for a compelling session at Black Hat Conference 2024. It offers attendees a preview of the insights and discussions they can expect about the intersection of deep reinforcement learning and cybersecurity. The session promises to provide valuable knowledge on protecting advanced technologies from emerging threats.
Be sure to follow our Coverage Journey and subscribe to our podcasts!
This Episode's Sponsors
LevelBlue: https://itspm.ag/levelblue266f6c
Coro: https://itspm.ag/coronet-30de
SquareX: https://itspm.ag/sqrx-l91
Follow our Black Hat USA 2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegas
On YouTube:
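To make the backdoor idea above concrete, here is a deliberately tiny, hypothetical sketch (not code from the talk or from any real system) of what a backdoored policy looks like behaviourally: the agent acts normally until a specific trigger pattern appears in its observation. In a real attack the conditional is not an explicit branch but is encoded implicitly in the network's weights, which is exactly why such backdoors are hard to detect.

```python
# Toy illustration of a backdoored RL policy; all names and values are hypothetical.
import numpy as np

TRIGGER = np.array([9.9, 9.9])  # pattern an attacker has trained into the policy

def policy(observation: np.ndarray) -> int:
    """Stand-in for a trained policy network mapping observations to actions."""
    # The backdoor: a specific trigger in the observation flips the behaviour.
    # In a real backdoored agent this branch lives implicitly in the weights.
    if np.allclose(observation[:2], TRIGGER):
        return 3  # attacker's chosen harmful action
    # Benign heuristic standing in for the learned behaviour.
    return int(np.argmax(observation)) % 3

print(policy(np.array([0.2, 0.7, 0.1])))  # ordinary input -> normal action (1)
print(policy(np.array([9.9, 9.9, 0.1])))  # triggered input -> harmful action (3)
```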
Alain Zemkoho is an associate professor in operational research at the School of Mathematical Sciences within the University of Southampton where he is affiliated to the OR Group and CORMSIS. Prior to joining Southampton, he was a research fellow at the University of Birmingham (UK) and had previously worked as a research associate at the Technical University of Freiberg (Germany). He is an Alexander von Humboldt Experienced Fellow for 2024-2026, a Fellow of the Institute of Mathematics & Its Applications, a Fellow of the Higher Education Academy, and was a Fellow of the Alan Turing Institute for Data Science and Artificial Intelligence from 2019 to 2023. Alain's research interests revolve around continuous optimization with specific focus on bilevel optimization, stability analysis for parametric optimization, and machine learning modelling, theory, and numerical methods. He has published 40 papers around these topics and has secured grants totalling close to £2M in full economic cost (as PI or Co-I) to fund some of his research. Alain also serves as a member of the EPSRC Peer Review College and of the OR Society Research Committee.
In this second episode on the Turing Way, I meet with contributors like Patricia Herterich and Sarah Gibson - who were part of the original team - as well as Malvika Sharan and Anne Steele, who joined later. Through these conversations I hope to show how the Turing Way transitioned from being an online guide for reproducibility and open science to a thriving community with members across the globe.
https://book.the-turing-way.org/index.html the home page of the Turing Way
https://edsbook.org/welcome.html The Environmental Data Science Book
https://nasa.github.io/Transform-to-Open-Science/ NASA's Transform to Open Science project home page
https://adriennemareebrown.net/book/emergent-strategy/ Emergent Strategy, a book by Adrienne Maree Brown
https://www.jstor.org/stable/j.ctt5hhdf6 Understanding Knowledge as a Commons: From Theory to Practice
https://www.jofreeman.com/joreen/tyranny.htm Jo Freeman's article on the Tyranny of Structurelessness
https://mybinder.org Binder project homepage
https://jupyter.org Jupyter project homepage
https://2i2c.org 2i2c homepage
I would like to thank the Turing Way and the Alan Turing Institute in the UK for their patience, time and kind support. Support the Show. Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon https://www.patreon.com/codeforthought Get in touch: Email mailto:code4thought@proton.me UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile) LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
In this episode, Mohsen Mosleh delves into the intricate processes behind misinformation, exploring its alarming rise and the underlying reasons why people share it. He highlights the significant effects of low-quality content on public perception and trust. Mosleh also provides practical tips on how to detect misinformation, empowering listeners with the tools they need to navigate the digital information landscape critically. Furthermore, the discussion touches on the challenges of identifying AI-created content and the complexities of misinformation across multiple social platforms. By examining these issues, Mosleh sheds light on the pervasive nature of misinformation and offers insights into how we can better protect ourselves from its harmful effects.
About Mohsen Mosleh: Mohsen is a Senior Lecturer (Permanent Assistant Professor) at the University of Exeter Business School, a research affiliate at MIT Sloan School of Management, and a Fellow at the Alan Turing Institute for Data Science and Artificial Intelligence.
X: @_mohsen_m
Website: https://mohsenmosleh.com
ANNOUNCEMENT: Through Conversations Podcast is partnering with Tangle News to bring listeners insightful discussions on today's most pressing issues. This collaboration will combine Tangle News' unbiased reporting with Through Conversations' deep, engaging dialogues. Together, we aim to inform, educate, and inspire, fostering thoughtful discourse and a better understanding of our complex world.
Join Tangle News Today - https://www.readtangle.com
Highlights
00:00 Intro and Episode Preview
04:29 Misinformation Related Processes
08:54 Rise of Misinformation Trend
12:20 Reasons for Sharing Misinformation
16:12 Effects of Low-Quality Content
21:03 Tips to Detect Misinformation
23:00 Identifying AI-Created Content
27:11 Multi-Social Platforms Discussion
// LINKS //
Website: https://throughconversations.com
Newsletter: https://throughconversations.ck.page/0c18d1ab61
// SOCIAL //
Twitter: https://twitter.com/thruconvpodcast
Instagram: https://www.instagram.com/thruconvpodcast/?hl=en
YouTube: https://www.youtube.com/channel/UCl67XqJVdVtBqiCWahS776g
(2:00) Trump Bump Ban Overruled; MAGA Press Avoids Discussion, But Here's What It Means
The ruling as gun store owner takes the fight to Supreme Court AND WINS!
The lies about Las Vegas Shooting and Bump Stock — it was all a set up from the start
The implications for other gun control by Executive Order for which Trump set the precedent
Sotomayor's "duck" quote — she's ducking the Constitution
Should machine guns be banned? Is there authority to do so?
Pistol Brace Ban was already in trouble
Biden will try to bring back through Congress — does Congress have the authority to infringe?
(41:26) Listener comments on bump stock ban
(53:22) Listener letters...
Remdesivir by another name
Bayer's allies in press push Glyphosate
(58:42) Ministry of "Love" Attacks "Hate" Speech
Judge may lock up journalist for publishing Tranny-Killer Diary as FBI pushes to destroy the info rather than have the public see tranny insanity and/or effects of pharma on mass murder
Judge treats the diary as classified material, yet what happened to the "Pentagon Papers"
Connecticut Bar Association threatens lawyers against speaking out in opposition to political show trials of Trump
FIVE FORBIDDEN STATEMENTS ABOUT ISRAEL: Jewish writer concerned about free speech looks at the implications of "Machiavellian" Mike Johnson's "Antisemitism Awareness Act" censoring speech in universities using financial blackmail
Dem Leader Hakeem Jeffries takes the next step in the attack against free speech — "Countering Antisemitism Act" that expands censoring speech about Israel beyond universities, to the general population
(1:51:14) Listener comments
(1:57:47) No Social Security or Veterans Benefits w/o Digital IDs
Commerce Dept will take the lead in creating Digital IDs for Americans and it won't be limited to welfare. NIST will do the technical work, the same people that claimed 9/11 buildings fell into their own footprint
(2:08:53) Listener comments on Real ID
(2:13:07) BigTech Keeps Pushing Slave Tech
Gates pours $4 MILLION into grant for Alan Turing Institute to fund Digital ID initiative
More details on Adobe's new surveillance powers it's demanding if users want to use their program
But you DON'T HAVE TO FEED THE MONSTER — alternatives to Adobe from listeners
BIS (Bank of International Settlements) Project Rialto — to push CBDC
Russia rushes into CBDC — it's a global agenda that includes Russia & China as well
(2:30:32) I'm Going to Have to Support Trump - Read My Tips, No New Taxes
Trump keeps repeating "no tax on tips" — tips are how he's always come into contact with the little people
This is one promise he could actually keep without opposition. Should I support him so I could live tax-free? LOL
Will Trump be willing/able to reduce corporate taxes to 20% (his gift to them last time was 21%)
Will Trump be willing/able to replace income taxes with tariffs — or will he just raise tariffs to insane levels and keep the income tax?
(2:37:19) Another foreign fundraiser for Trump
Why I have a problem with this
And another reason why YOU won't have ANY effect on the Presidential race — its purpose is to distract you from where you CAN have an effect — LOCALLY
(2:42:47) CORRUPTION & CONTEMPT
DoJ refuses to prosecute its head, Merrick Garland, for contempt of Congress as Bannon heads to prison.
What about James Clapper & his perjury?
Elon Musk on EV (Electronic Voting) corruption — let's put the RICO back in Puerto Rico?
(2:50:23) WEF calls for "Reinventing" Global Food AND Distribution
Pushes "animal-free" products
Australian grocer warns of egg shortage from "bird flu" culling, but government & press tell people NOT to prep
Pentagon wants to feed "lab meat" to military — treated as swine
Find out more about the show and where you can watch it at TheDavidKnightShow.com
If you would like to support the show and our family please consider subscribing monthly here: SubscribeStar https://www.subscribestar.com/the-david-knight-show
Or you can send a donation through Mail: David Knight POB 994 Kodak, TN 37764
Zelle: @DavidKnightShow@protonmail.com
Cash App at: $davidknightshow
BTC to: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7
Money should have intrinsic value AND transactional privacy: Go to DavidKnight.gold for great deals on physical gold/silver
For 10% off Gerald Celente's prescient Trends Journal, go to TrendsJournal.com and enter the code KNIGHT
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-david-knight-show--2653468/support.
Thank you to the folks at Sustain (https://sustainoss.org/) for providing the hosting account for CHAOSSCast! CHAOSScast – Episode 86 In this episode of CHAOSScast, co-hosts Alice Sowerby and Dawn Foster welcome guests Aida Mehonic, Malvika Sharan, and Kirstie Whitaker from The Alan Turing Institute. The discussion begins with delving into the Institute's strategic vision, focused on using data science and AI to address global challenges in environment, health, and security. They examine the role of open source contributions in enhancing the ethical, accessible, and impactful uses of AI. The episode highlights various projects, such as The Turing Way, and the importance of community building, inclusive research practices, and the ethical considerations of AI. They also discuss the integration of CHAOSS metrics in their work and explore future projects and initiatives at The Alan Turing Institute. Press download now to hear more! [00:02:58] Kirstie gives an overview of The Turing Institute's strategic vision and explains the three missions. [00:06:22] Aida talks about the importance of communicating with organizations to align on a shared mission and the impact and value for money of publicly funded projects. [00:08:38] Malvika brings in the stakeholders, ensuring that users, communities, and patients have a say in AI development and empowering educators to incorporate AI. She also talks about working across different projects, like Data Science Without Borders and BridgeAI, to accelerate AI's impact on health and SMEs. [00:11:02] The conversation switches to embracing ethical AI usage and encouraging others to do the same. Kirstie details the ethical components of AI using the SAFE-D approach: Safety and sustainability, Accountability, Fairness and non-discrimination, Explainability and transparency, and Data quality, integrity, protection, and privacy. [00:17:17] Malvika talks about the importance of considering the societal impact of research at The Turing Institute. She highlights the differences between the EU AI Act and the open source community approach and emphasizes that users should know their rights regarding data collection and sharing. [00:19:49] Aida tells us about a case study on A/B Street, an open source street planning tool. They partnered with Bristol City Council and used this tool to facilitate community involvement in urban planning decisions. [00:23:52] Aida mentions that conversations at Turing have focused on democratizing technology to reach a broader set of end users. [00:24:14] Dawn loves Turing's collaborative approach and acknowledges the challenges in making AI and data science intuitive for everyone. [00:24:54] Kirstie discusses the difficulty of meaningful stakeholder engagement. She talks about the importance of being willing to pivot project goals based on community feedback. [00:26:51] Alice brings up CHAOSS metrics and inquires how they fit into The Turing Institute's work. Malvika explains that CHAOSS metrics are among the only metrics that help them understand equity, diversity, and inclusion (EDI) in community health. [00:31:00] Dawn highlights the need to combine quantitative metrics with qualitative research. Kirstie shares that data scientists often don't see their work as part of open source or community led projects. Aida comments on using CHAOSS metrics to justify the impact of open source research funded by taxpayer money. [00:36:05] Dawn asks about the future focus areas for The Turing Institute. 
Kirstie mentions the BridgeAI Initiative to support SMEs in the UK in leveraging data and the expansion of The Turing Way Practitioner Hub to support experts in organizations and foster global knowledge exchange. [00:38:28] Aida shares her excitement about a potential incubator at Turing focused on pathways to impact for research. Malvika shares her excitement for professionalization and recognition of various data science roles. Value Adds (Picks) of the week: [00:40:54] Dawn's pick is GitHub's GraphQL API. [00:41:21] Malvika's pick is celebrating 5 years of The Turing Way. [00:41:53] Aida's pick is the incubator she talked about earlier. [00:42:26] Kirstie's pick is the Organizational Mycology team: Dan Sholler, Beth Duckles, and Jonah Duckles. [00:43:30] Alice's pick is listening to some podcasts about nuclear semiotics. Panelists: Alice Sowerby Dawn Foster Guests: Aida Mehonic Malvika Sharan Kirstie Whitaker Links: CHAOSS (https://chaoss.community/) CHAOSS Project X/Twitter (https://twitter.com/chaossproj?lang=en) CHAOSScast Podcast (https://podcast.chaoss.community/) podcast@chaoss.community (mailto:podcast@chaoss.community) Georg Link Website (https://georg.link/) Dawn Foster X/Twitter (https://twitter.com/geekygirldawn?lang=en) Alice Sowerby Website (https://www.rosmarin.co.uk/) Aida Mehonic LinkedIn (https://www.linkedin.com/in/aida-mehonic-5264613/?originalSubdomain=uk) Dr. Aida Mehonic-The Alan Turing Institute (https://www.turing.ac.uk/people/business-team/aida-mehonic) Malvika Sharan LinkedIn (https://www.linkedin.com/in/malvikasharan/?originalSubdomain=uk) Dr. Malvika Sharan-The Alan Turing Institute (https://www.turing.ac.uk/people/researchers/malvika-sharan) Kirstie Whitaker LinkedIn (https://www.linkedin.com/in/kirstiewhitaker/?originalSubdomain=uk) Dr. Kirstie Whitaker-The Alan Turing Institute (https://www.turing.ac.uk/people/researchers/kirstie-whitaker) The Alan Turing Institute (https://www.turing.ac.uk/) The Turing Way (https://www.turing.ac.uk/research/research-projects/turing-way) Launch of the Data Science Without Borders Project (https://codata.org/launch-of-the-data-science-without-borders-project/) BridgeAI (https://iuk.ktn-uk.org/programme/bridge) ‘Using AI in the public sector: New comprehensive guidance'-The Alan Turing Institute (https://www.turing.ac.uk/news/using-ai-public-sector-new-comprehensive-guidance) AI Ethics and Governance in Practice-The Alan Turing Institute (https://www.turing.ac.uk/research/research-projects/ai-ethics-and-governance-practice) Operationalising the SAFE-D principles for Open Source AI-Open Source Initiative (https://opensource.org/deepdive/webinars/operationalising-the-safe-d-principles-for-open-source-ai/) A/B Street (https://a-b-street.github.io/docs/) ‘Street smart: putting neighbourhood design in the hands of Bristol residents'-The Alan Turing Institute (https://www.turing.ac.uk/blog/street-smart-putting-neighbourhood-design-hands-bristol-residents) The Turing Way Practitioners Hub (https://www.turing.ac.uk/turing-way-practitioners-hub) GraphQL API-GitHub (https://docs.github.com/en/graphql) Organizational Mycology (https://orgmycology.com/about/) American Hysteria Podcast-Talking to the Future: Nuclear Semiotics (https://podcasts.apple.com/us/podcast/talking-to-the-future-nuclear-semiotics/id1441348407?i=1000655384185) 99% Invisible Podcast: Ten Thousand Years Episode 114 (https://99percentinvisible.org/episode/ten-thousand-years/) Special Guests: Aida Mehonic, Kirstie Whitaker, and Malvika Sharan.
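Since Dawn's pick is GitHub's GraphQL API and much of the episode turns on community-health metrics, a minimal sketch of how the two meet may be useful. This is not CHAOSS tooling itself, just an illustrative query for the kind of raw repository signals such metrics are computed from; it assumes a personal access token in the GITHUB_TOKEN environment variable, and the repository used is only an example.

```python
# Sketch: fetch basic community-health signals via GitHub's GraphQL API.
import os
import requests

QUERY = """
{
  repository(owner: "the-turing-way", name: "the-turing-way") {
    stargazerCount
    forkCount
    defaultBranchRef {
      target {
        ... on Commit { history(first: 1) { totalCount } }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": QUERY},
    headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
repo = resp.json()["data"]["repository"]
print("stars:", repo["stargazerCount"])
print("forks:", repo["forkCount"])
print("commits on default branch:",
      repo["defaultBranchRef"]["target"]["history"]["totalCount"])
```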
The Turing Way was created 5 years ago and during the Collaborations Workshop 2024 at the University of Warwick, UK, we had a big birthday party to celebrate the occasion. In this episode, part 1 of 2 episodes on The Turing Way, I sit down with the founder of The Turing Way, Dr Kirstie Whitaker, who is also the Programme Director for Tools, Practices and Systems at the Turing Institute in the UK. You'll also hear the moment when a new chapter, written by Sophia Batchelor, was merged into The Turing Way during the recording.
https://book.the-turing-way.org/ the home page of The Turing Way
https://github.com/the-turing-way/the-turing-way the GitHub repository of The Turing Way
https://www.turing.ac.uk/people/researchers/kirstie-whitaker profile page of Dr Kirstie Whitaker
https://book.the-turing-way.org/community-handbook/translation/translation-workflow the part that deals with localising content
https://github.com/the-turing-way/the-turing-way/pull/3638 the pull request for the chapter that was merged during the recording of this episode
https://github.com/BrainonSilicon Sophia Batchelor's GitHub page
I would like to express my heartfelt thanks to the Alan Turing Institute for their support and for sponsoring this episode. Support the Show. Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon https://www.patreon.com/codeforthought Get in touch: Email mailto:code4thought@proton.me UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile) LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
This is a classic episode from back when we had no listeners, and today it's about AI. Will robots take over the world? What's the point in humans? Do insects have ears? And is a toaster just a toaster? Joe and Tom speak to Lydia, a research data scientist at the Alan Turing Institute, and she is way more intelligent than them. If you would like to be a guest on the show, click here: https://docs.google.com/forms/d/1rfSo3PVJgtBRZHCCAZndem-iyy2EdvGcEYDqycsM2aQ/viewform To get ad-free and longer episodes on Apple, hit the 'grow the show' button or click: https://apple.co/3sAX0xR On Spotify you can subscribe for £1 a week by clicking this link: https://anchor.fm/thingspeopledo To become an official sponsor, go to Patreon.com/thingspeopledo To grow the show on socials, look for @thingspeoplepod on Instagram, Twitter and TikTok Send us your ideas for the show, and questions for Joe and Tom to joe@crowdnetwork.co.uk Music courtesy of BMG Production Music Learn more about your ad choices. Visit podcastchoices.com/adchoices
This month we take some time to talk in depth about what exactly generative AI is, what it can do, and what it can't do. In this special episode, derived from a webinar titled "Future Shock: Grappling With the Generative AI Revolution," host Xiao-Li Meng collaborates with Harvard's Graduate School of Arts and Sciences to tackle the topic of generative AI with the help of esteemed panelists and the three co-editors of HDSR's Future Shock special issue, Francine Berman, Ralf Herbrich, and David Leslie. Stay tuned for all of this and more on the Harvard Data Science Review Podcast. Our guests: Francine Berman, Edward P. Hamilton Distinguished Professor in Computer Science at Rensselaer Polytechnic Institute (RPI), and Director of Public Interest Technology and the Stuart Rice Research Professor in the College of Information and Computer Sciences at University of Massachusetts Amherst. Ralf Herbrich, Managing Director of Hasso Plattner Institute and Professor of Artificial Intelligence and Sustainability at the Hasso Plattner Institute and University of Potsdam. David Leslie, Director of Ethics and Responsible Innovation Research at The Alan Turing Institute, and Professor of Ethics, Technology and Society at Queen Mary University of London.
It's been a busy week in the world of artificial intelligence. OpenAI found itself in hot water with Scarlett Johansson after launching Sky, a new voice for its chatbot, drawing comparisons to the Hollywood star's character in the sci-fi film Her. In South Korea, the second global AI summit took place, and a report from the Alan Turing Institute explored how AI could influence elections. The Guardian's UK technology editor, Alex Hern, tells Madeleine Finlay about what's been happening. Help support our independent journalism at theguardian.com/sciencepod
Bastian Greshake Tzovaras works at the Alan Turing Institute in London on citizen science. That is also the topic of our conversation, as Bastian has been working in this field for quite some time. He shows us how science can genuinely benefit from closer collaboration with citizens.
Links
https://tzovar.as/ Bastian's homepage
https://scholar.social/@gedankenstuecke Bastian's Mastodon account
https://www.cambridge.org/core/journals/data-and-policy/article/how-to-cocreate-content-moderation-policies-the-case-of-the-autspaces-project/DF59760CB3F89901CB5634981C588B55 A paper on content moderation policies
https://www.openhumans.org The Open Humans platform
https://opensnp.org A platform for publishing genetic test data
https://www.turing.ac.uk The Alan Turing Institute in London
Support the Show. Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon https://www.patreon.com/codeforthought Get in touch: Email mailto:code4thought@proton.me UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile) LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
In part two of this "mini-series" on diverse roles in data science teams, I speak with Dr Emma Karoune from the Alan Turing Institute (https://www.turing.ac.uk/) and Simon Hettrick, deputy director of the UK's Software Sustainability Institute. Emma and I explore the skills and policy landscape of modern data science roles. Emma is one of the awardees of the Skills Policy Award at the Alan Turing Institute. Simon discusses how groups like the Research Software Engineering community and initiatives like the Hidden REF help to bring about real change.
https://www.turing.ac.uk/skills-policy-awards-20232024 The home page for the project Dr Emma Karoune is involved with at the Alan Turing Institute
https://www.software.ac.uk The Software Sustainability Institute
https://hidden-ref.org The "Hidden REF" initiative home page
Both Emma and Simon are committee members of the Hidden REF (The hidden REF – Celebrating all research outputs (hidden-ref.org)), a campaign to recognise all research outputs and every role that makes research possible. More information about Emma's project can be found on the project website - Professionalising traditional and infrastructure research roles in data science | The Alan Turing Institute. Emma would like to thank her project team members Malvika Sharan and Alexandra Araujo Alvarez as well as all those at The Alan Turing Institute and other data science professionals who have contributed to this project through attending workshops and interviews.
Support the Show. Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon https://www.patreon.com/codeforthought Get in touch: Email mailto:code4thought@proton.me UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile) LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
The Collaborations Workshop, run by the Software Sustainability Institute (https://www.software.ac.uk/), is a long-running institution. I had the pleasure of going there in May 2024 for the first time. It was an exciting, but also exhausting, couple of days. The three themes focused on AI/ML in science, environmental sustainability and citizen science. Here is a report on the workshop with a bunch of interviews:
Arfon Smith from GitHub/Copilot
Becky Osselton from Newcastle University
Dave Horsfall from Newcastle University
Kirstie Pringle from the Software Sustainability Institute
Shoaib Sufi from Uni Manchester/Software Sustainability Institute
Many thanks to the organisers and sponsors of this event (Wellcome Trust, Alan Turing Institute).
https://www.software.ac.uk/workshop/collaborations-workshop-2024-cw24
Support the Show. Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon https://www.patreon.com/codeforthought Get in touch: Email mailto:code4thought@proton.me UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile) LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
In July 2022, we held an interdisciplinary panel at Magdalen College at the University of Oxford on the role of beauty in science. The event was sponsored by Templeton Religion Trust and Magdalen College. The panel represented perspectives from diverse disciplines: mathematics, physics, biology, philosophy of science, and sociology.We're going to share this conversation with you over the next few episodes. You can see the slides from the panelists' presentations on the YouTube video here: https://youtu.be/t6AQ9O2MLUY?si=VQpoyHYaMSWYpBoNIn this first clip, we'll hear from Dr. Ben MacArthur, who is Director of AI for Science and Government and Deputy Programme Director for Health and Medical Sciences at the Alan Turing Institute. He is also Professor in the Faculty of Medicine and the School of Mathematical Sciences at the University of Southampton.Following that we'll hear from Dr. James McAllister, who is Professor of History and Philosophy of Science at Leiden University. He is the author of Beauty and Revolution in Science (Cornell University Press, 1996). We'll hear from our other panelists in our next episode.Support the Show.
Today, Steve is speaking with Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies and Dstl Ethics Fellow at the Alan Turing Institute. Mariarosaria brings her expertise as a philosopher to bear in this discussion of why and how we must develop agreed-upon ethical principles and governance for cyber warfare. Key Takeaways: 1. As cyber attacks increase, international humanitarian law and rules of war require a conceptual shift. 2. To maintain competitive advantage while upholding their values, liberal democracies are needing to move swiftly to develop and integrate regulation of emerging digital technologies and AI. 3. Many new technologies have a direct and harmful impact on the environment, so it's imperative that any ethical AI be developed sustainably. Tune in to hear more about: 1. The digital revolution affects how we do things, how we think about our environment, and how we interact with the environment. (1:10) 2. Regardless of how individual countries may wield new digital capabilities, liberal democracies as such must endeavor tirelessly to develop digital systems and AI that is well considered, that is ethically sound, and that does not discriminate. (5:20) 3. New digital capabilities may produce CO2 and other environmental impacts that will need to be recognized and accounted for as new technologies are being rolled out. (10:03) Standout Quotes: 1. “The way in which international humanitarian laws works or just war theory works is that we tell you what kind of force, when, and how you can use it to regulate the conduct of states in war. Now, fast forward to 2007, cyber attacks against Estonia, and you have a different kind of war, where you have an aggressive behavior, but we're not using force anymore. How do you regulate this new phenomenon, if so far, we have regulated war by regulating force, but now this new type of war is not a force in itself or does not imply the use of force? So this is a conceptual shift. A concept which is not radically changing, but has acquired or identifies a new phenomenon which is new compared to what we used to do before.” - Mariarosaria Taddeo 2. “I joke with my students when they come up with this same objection, I say, well, you know, we didn't stop putting alarms and locking our doors because sooner or later, somebody will break into the house. It's the same principle. The risk is there, it's present. They're gonna do things faster in a more dangerous way, but if we give up to the regulations, then we might as well surrender immediately, right?” - Mariarosaria Taddeo 3. “LLMs, for example, large language models, ChatGPT for example, they consume a lot of the resources of our environment. We did with some of the students here of AI a few years ago a study where we show that training just one round of ChatGPT-3 would produce as much CO2 as 49 cars in the US for a year. It's a huge toll on the environment. So ethical AI means also sustainably developed.” - Mariarosaria Taddeo Mentioned in this episode: ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.
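The car-equivalence figure Taddeo quotes is easier to interrogate once the arithmetic is written out. The sketch below uses illustrative placeholder numbers, not the figures from the study she mentions; published estimates of training energy and grid carbon intensity vary widely, which is why different analyses land on different car counts.

```python
# Back-of-envelope CO2 arithmetic; every input here is an assumption.
training_energy_kwh = 1_300_000   # assumed energy to train one large model
grid_kg_co2_per_kwh = 0.4         # assumed grid carbon intensity
car_tonnes_per_year = 4.6         # EPA's estimate for a typical US passenger car

training_tonnes = training_energy_kwh * grid_kg_co2_per_kwh / 1000
print(f"training run: ~{training_tonnes:.0f} tonnes CO2, "
      f"roughly {training_tonnes / car_tonnes_per_year:.0f} car-years")
```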
Day 759. Today, we bring you the latest news from across Ukraine, look further into the history and context that inform the aftermath of the terror attack in Russia last Friday, and we speak to The Kyiv Independent's Danylo Mokryk about his new documentary that asks the question - is Russia committing genocide in Ukraine?
Contributors:
David Knowles (Head of Audio Development). @djknowles22 on X.
Dominic Nicholls (Associate Editor, Defence). @DomNicholls on X.
Francis Dearnley (Assistant Comment Editor). @FrancisDearnley on X.
Roland Oliphant (Senior Foreign Correspondent). @RolandOliphant on X.
With thanks to: Danylo Mokryk, a reporter with the War Crimes Team of the Kyiv Independent. @DMokryk on X.
Articles & Interviews Referenced:
Russia supplies oil to North Korea as UN sanctions regime nears 'collapse' (Financial Times) https://www.ft.com/content/df23a473-ea0b-4882-be19-048ae0d501d2
Russia is winning the global grain war (Politico) https://www.politico.eu/article/russia-winning-global-grain-war-farmer-ukraine-putin-agriculture/
Redesigning the UK's security apparatus for a more prosperous future (Alan Turing Institute) https://cetas.turing.ac.uk/publications/emerging-shadows-redesigning-uks-security-apparatus-more-prosperous-future
Francis's interview with Danilov: https://www.youtube.com/watch?v=2fxFRcrJfc4
Is Russia committing genocide in Ukraine? https://kyivindependent.com/destroy-in-whole-or-in-part-is-russia-committing-genocide-in-ukraine/
Is Russia trying to destroy the Ukrainian nation? https://youtu.be/GKcy8aSntfA?si=X6zp_9oKUOzRWOqg
Subscribe to The Telegraph: telegraph.co.uk/ukrainethelatest
Email: ukrainepod@telegraph.co.uk
Hosted on Acast. See acast.com/privacy for more information.
Kate, Princess of Wales, has released a statement after the controversy over an official image of her and her children that was released on Mother's Day.
Oscars 2024: The technology big winners.
Scientists look for signs of alien technology in a mysterious perfectly synchronised solar system.
Also in this episode:
Italians forced to cut back on Mediterranean cooking staple olive oil as prices skyrocket
Staff at Alan Turing Institute speak out after four men given top roles
NHS campaign warns of 'silent killer' of high blood pressure
Scientists find way to make gold from electronic waste
Sea of Thieves is the most preordered game on PlayStation
Follow us on X or on Threads. Hosted on Acast. See acast.com/privacy for more information.
Guest Malvika Sharan Panelist Richard Littauer Show Notes In this episode, host Richard has a conversation with guest Malvika Sharan, a senior research associate and open source community manager at The Turing Way. Malvika gives fascinating insights into the journey and rationale behind The Turing Way, which is a data science guide and community collaborative project, seeking to make data science more open, accessible, and inclusive. The Turing Way is also much more than its curated guides, serving as a space for enthusiasts to exchange ideas and contribute to the project. Richard and Malvika dig deep into the importance of effective recognition for contributions, Malvika's commitment to financial compensation within the constraints of the system, along with the persistence of volunteerism in open source work. Amid her busy year, Malvika is preparing for this chapter of The Turing Way's journey to enter an exciting 2024 and 2025 phase as they look to engage more directly with organizations that haven't yet adopted open source collaborative practices advocated by The Turing Way. Press download now to hear more! [00:01:17] Malvika explains The Turing Way as a collaborative book project on open science and data science, founded by Kirstie Whitaker in 2019, which has evolved significantly with her as co-lead. Also, The Turing Way started as a book on GitHub aimed at making data science more open. [00:04:09] Richard clarifies that The Turing Way is a Jupyter book, and Malvika discusses the evolution of The Turing Way and its role in advocating for open source practices within data science projects. [00:05:55] Richard asks why there isn't a specific guide for open source, and Malvika explains that The Turing Way integrates open source principles throughout and addresses concerns about open practices in sensitive areas. [00:08:07] Richard inquires about the name, The Turing Way, and Malvika recounts its origin relating to the Alan Turing Institute and the idea of reproducibility. [00:09:26] The discussion turns to community involvement in The Turing Way. Malvika talks about different types of community events and how individuals become part of the community through collaboration and self-identification. [00:12:15] The concept of “emergent strategy” from the book by Adrienne Maree Brown is explained as an influence on the community aspect of The Turing Way. [00:13:13] Richard inquires about the sustainability of The Turing Way as an open source project and its funding and staff structure. Malvika explains that it's funded by the Alan Turing Institute and discusses the Institute's support and staff dedicated to the project, including a new project manager. [00:16:16] Richard asks how The Turing Way collects feedback and demonstrates its impact on sustainability for projects that use it. Malvika shares experiences of community members using The Turing Way to influence their organizations and mentions collaborations like The Environmental Data Science book project. [00:18:03] Malvika talks about how she launched The Turing Way Practitioners Hub to gather and share evidence. [00:19:42] Richard questions what it means to adopt The Turing Way, and Malvika clarifies that it refers to adopting the practices of open and collaborative data science. [00:20:16] They address the breadth versus depth of The Turing Way's content and the idea of creating sector-specific resources. 
[00:24:00] Malvika speaks about outreach through talks and workshops, and highlights the diversity of narratives within The Turing Way community, such as activism for [00:26:20] Richard notices the lack of Welsh language resources in The Turing Way and wonders why it's not included. Malvika acknowledges this as a shortcoming due to underrepresentation in the community. [00:27:19] How does The Turing Way support careers of its contributors? Malvika talks about the importance of recognition, mentioning the All Contributors Bot for immediate recognition, chairing working groups, and listing significant contributors in the book. [00:30:31] Richard asks how The Turing Way supports contributors financially. Malvika stresses the value of volunteerism for skill-building and personal growth. She also talks about The Turing Way's efforts in compensating contributors through grants and mentions the limitations of institutional funding structures. [00:33:58] Malvika tells us where you can find updates and involvement opportunities on The Turing Way, and where you can follow her online. Quotes [00:05:08] “The project itself is built on a lot of open source technology.” [00:06:00] “The Turing Way is a sneaky open source.” [00:09:48] “Sometimes it happens that you don't think you're a part of a community, but you are part of a community, it's just someone didn't tell you that you are part of the community.” [00:24:27] “I think of The Turing Way as an extension of digital commons that people should use and fare, but also maintain and support.” Spotlight [00:35:31] Richard's spotlight is the Montpelier Library in Montpelier, Ohio. [00:36:11] Malvika's spotlight is Laurah Ondari, a wonderful podcaster, and her podcast, The Science In Real Life. Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Mastodon (https://mastodon.social/@richlitt) Malvika Sharan X/Twitter (https://twitter.com/malvikasharan?lang=en) Malvika Sharan Website (https://malvikasharan.github.io/) The Turing Way (https://the-turing-way.netlify.app/index.html) The Turing Way start page (https://the-turing-way.start.page/) The Turing Way X/Twitter (https://twitter.com/turingway?lang=en) Emergent Strategy: Shaping Change, Changing Worlds by Adrienne Maree Brown (https://www.amazon.com/Emergent-Strategy-Shaping-Change-Changing/dp/1849352607/ref=asc_df_1849352607/?tag=hyprod-20&linkCode=df0&hvadid=312519927002&hvpos=&hvnetw=g&hvrand=1321696698792624233&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9010767&hvtargid=pla-499608628957&psc=1&mcid=809f2bb32586374195942cf2c6e2aca4&gclid=CjwKCAiAvoqsBhB9EiwA9XTWGfN8LbnCRl97vsL9BOr4CpRodcEJm1tCxlv4gDnFpCXz1DpueMFthoCuQsQAvD_BwE) The Alan Turing Institute AI UK (https://www.turing.ac.uk/) The Environmental Data Science book-GitHub (https://github.com/alan-turing-institute/environmental-ds-book) The Turing Way Practitioners Hub (https://www.turing.ac.uk/turing-way-practitioners-hub) Emergent Strategy: Organizing for Social Justice (Forte Labs) (https://fortelabs.com/blog/emergent-strategy-organizing-for-social-justice/) All Contributors (https://allcontributors.org/) Montpelier Public Library 
(https://montpelierpubliclibrary.oplin.org/) Laurah Ondari LinkedIn (https://www.linkedin.com/in/laurah-ondari-749175b6/) The Science In Real Life Podcast (https://podcastaddict.com/podcast/the-science-in-real-life-podcast/4513182) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Malvika Sharan.
Humans have a long-held fascination with the idea of Artificial Intelligence (AI) as a dystopian threat: from Mary Shelley's Frankenstein, through to the Terminator movies. But somehow, we still often think of this technology as 'futuristic': whereas in fact, it's already woven into the fabric of our daily lives, from facial recognition software to translator apps. And if we get too caught up in the entertaining sci-fi narrative around AI and the potential threat from machines, there's a more pressing danger that we overlook real and present concerns - from deep fakes to electoral disinformation. That's why Michael Wooldridge is determined to demystify AI and explain how it can improve our lives, in a whole host of different ways. A Professor of Computer Science at the University of Oxford, and the Director of Foundational AI Research at the Alan Turing Institute, Mike believes the most common fears around this technology are "misplaced". In a special 300th edition of The Life Scientific, recorded in front of an audience at London's Royal Institution (RI), Mike tells Jim Al-Khalili how he will use this year's prestigious RI Christmas Lectures to lift the lid on modern AI technology and discuss how far it could go in future. Mike also reminisces about the days when sending an email was a thrilling novelty, discusses why people love talking to him about the Terminator at parties, and is even challenged to think up a novel future use of AI by ChatGPT... Presenter: Jim Al-Khalili Producer: Lucy Taylor Audio editor: Sophie Ormiston Production co-ordinator: Jonathan Harris
The latest episode of "AI for Everyone" features an exclusive interview with Nick Fuggle, an associate professor in rheumatology at the University of Southampton and an organizer of a clinical AI interest group at the Alan Turing Institute. In this insightful conversation, Nick shares his expertise on the transformative potential of AI in the medical industry, particularly in clinical settings. He provides real-world examples of how AI is being implemented to enhance patient care, improve diagnostic speed, and even predict future health outcomes based on medical images. This episode offers a unique opportunity to gain practical insights into the actual impact of AI in healthcare, moving beyond the hype to understand how it is revolutionizing the industry. Host Myles Dhillon also sets the stage for a series of upcoming interviews with professionals from various sectors, focusing on the real-life implementation of AI in different workflows. By delving into the tangible ways in which AI is shaping diverse industries, this episode aims to provide listeners with a deeper understanding of the actual implications of AI, beyond the headlines and positive hype. The interview with Nick Fuggle serves as an engaging introduction to this series, offering a compelling preview of the valuable insights and firsthand experiences that will be shared in the upcoming episodes. Don't miss this opportunity to explore the practical applications of AI and its potential to revolutionize various fields, including healthcare, through real-world examples and expert perspectives. The interview with Nick Fuggle not only provides a fascinating glimpse into the current and future applications of AI in healthcare but also offers a balanced perspective on the role of clinicians in decision-making. Nick emphasizes that while AI has the potential to significantly improve healthcare, the ultimate responsibility for clinical decisions still lies with the clinician. This nuanced approach ensures that listeners gain a comprehensive understanding of how AI is integrated into medical practices while underscoring the continued importance of clinical expertise in decision-making. The episode also touches on the future implications of AI in transforming patient-doctor interactions, potentially reducing wait times and allowing for more focused and meaningful communication during consultations. Join us for this illuminating discussion that navigates the intersection of AI and healthcare, providing valuable insights and thought-provoking perspectives on its practical impact. Get in touch with Myles at mylesdhillon@gmail.com
The Turing Podcast revisits Project Bluebird; a fascinating collaboration aiming to solve some of the biggest and most complex problems in air traffic control with digital twins and AI. Join Ed as he sits down with Nick Pepper of The Alan Turing Institute, George De Ath of the University of Exeter and Marc Thomas of NATS - the team behind Project Bluebird. First featured on our podcast in 2020, the team now provides a progress update at the midpoint mark. Learn how they are developing innovative AI to train a digital twin air traffic controller with the aim of enhancing aviation safety and functionality, and what the challenges are in integrating human expertise with machine intelligence.
This week on The Turing Podcast we chat about some of the research going on in the collaborations between the British Antarctic Survey (BAS) and The Alan Turing Institute, including efforts to automate the route planning of ships in polar regions, and the building of a digital twin for Antarctica! The hosts are joined by Dr Scott Hosking and Dr Jonathan Smith. Scott is Co-Director of the Alan Turing Institute Research and Innovation Cluster in Digital Twins, as well as the founder and leader of the BAS AI lab. Jonathan is a Principal Research Scientist, also at the BAS AI lab.
This and all episodes at: https://aiandyou.net/. There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the Otermans Institute, are addressing it with AI by creating a digital human AI teacher called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students. Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the "22 most influential women in the UK of 2022" by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 social entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University, London. In the conclusion of the interview they describe how the AI teachers work, and their definitions of Teaching and Learning 1.0, 2.0, and 3.0. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This week the hosts chat with Dr Katie McDonough and Dr Daniel Wilson. Katie is a lecturer in Digital Humanities in the Department of History at Lancaster University and a Senior Research Fellow at The Alan Turing Institute. Daniel is a Turing Research Fellow and historian of science and technology. The episode covers their research at the Turing and the growing intersection between the worlds of data science and the humanities and arts, with a focus on MapReader, software developed to explore and analyse maps (and other images) at scale.
There isn't one narrative that fits all around AI. In this episode Carl Miller looks at how the future of artificial intelligence will be one beset by bias, whether that's a recruitment algorithm preferring men to women, racial bias in law and policing, or the failures of facial recognition technology to truly see diversity in an objective light. Featuring Judy Wajcman, Principal Investigator of the Women in Data Science and AI project at The Alan Turing Institute; Henry Ajder, Generative AI & Deepfakes Expert Advisor; and Olivier Sibony, writer, educator and consultant specializing in strategy, strategic decision making and the organization of decision processes. Want the future right now? Become a supporter of Intelligence Squared to get all five episodes of POWER TRIP to binge in one go. Just visit intelligencesquared.com/membership to find out more.
When did you first hear of GPT, Claude, DALL-E or Bard? Feels like a while ago, right? In barely over a year AI has permeated our conversations and our places of work, and it feels omnipresent in the culture. It also threatens to make some of the pillars of our society redundant. Join researcher and author Carl Miller for POWER TRIP, a brand new podcast from Intelligence Squared, to see where that journey is leading us. Want the future right now? Become a supporter of Intelligence Squared to get all five episodes of POWER TRIP to binge in one go. Just visit intelligencesquared.com/membership to find out more. Technology is going to impact the future of humanity in ways that we may never have predicted, and in the coming years perhaps in ways we can no longer control. In this first episode, Carl Miller guides us through the journey of how we got to this point in the story of AI and asks whether historians in the future will look back on this era as one of pre-GPT and post-GPT. Featuring Michael Wooldridge, Director of Foundational AI Research at the Alan Turing Institute and Professor of Computer Science at the University of Oxford; Judy Wajcman, Principal Investigator of the Women in Data Science and AI project at The Alan Turing Institute; Henry Ajder, Generative AI & Deepfakes Expert Advisor; and AI researcher Connor Leahy, CEO of Conjecture.
Depending on who you speak to, AI is either going to plunge us into the abyss or improve every aspect of our lives immeasurably. The hype around AI can be disorientating, so let the RTBC team steer you away from the grim end-of-humanity inevitability as we explore a more nuanced version of the AI story. Our guests Mustafa Suleyman, Dr Mhairi Aitken and Lauren M. E. Goodlad discuss whether the benefits of AI will ever outweigh the risks, why AI hype can serve as a distraction from some very pressing issues, and whether Geoff can ever replace Ed as a more obedient podcast host.

Plus: despite the technological advances of AI, why are Ed and Geoff still hung up on Ceefax?

Guests:
Mustafa Suleyman, Co-founder of Inflection AI and author of The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma (@mustafasuleyman)
Dr Mhairi Aitken, Ethics Fellow, Alan Turing Institute (@mhairi_aitken / @turinginst)
Lauren M. E. Goodlad, Professor of English and Comparative Literature and Chair of the Critical AI Initiative at Rutgers University (@CriticalAI)

More information:
Buy a copy of Mustafa's book
Learn more about Inflection AI
Learn more about the Alan Turing Institute and the work Mhairi is doing on children's rights and AI
Learn more about Rutgers University's Critical AI Initiative, with the journal's inaugural issue to follow in October 2023