A fresh look at all things AI, with a focus on ethics.
In this episode, we delve into the evolving landscape of AI regulation as we unravel the intricacies of the European Union's groundbreaking AI Act. Released as a comprehensive regulatory framework, the EU's AI Act is set to shape the future of AI development, deployment, and governance across the member states and the world. Join us as we explore the key provisions of the AI Act, examining its impact on both businesses and individuals. We'll discuss the high-risk AI applications that will face stringent regulations, the requirements for transparency and human oversight, and the implications for fostering innovation while ensuring ethical AI practices.
This week we were joined by the incredible Dr. Lydia Kostopoulos. Lydia is a multifaceted expert who has worked with the United Nations, NATO, US Special Operations, the US Secret Service, IEEE, the European Commission, management consultancies, industry, academia and foreign governments, and has experience working in the US, Europe, the Middle East and East Asia. Lydia's expertise ranges across AI, AI Ethics, Cyber Security, Art, Fashion, Health & more! In this episode we explore topics ranging from the role of humans in the future of AI, the value humans offer vs AI, and the environmental impact of AI. In the episode we reference Lydia's recent 15-minute talk on the Corporate Social Responsibility of AI, which we highly recommend watching, where Lydia looks at some of the United Nations SDGs and what they mean. The Corporate Social Responsibility of Artificial Intelligence: https://www.youtube.com/watch?v=dnV1E4XiEkY Presentation: https://www.slideshare.net/lkcyber/the-corporate-social-responsibility-of-artificial-intelligence?from_search=4 More of Lydia's work - EmpoweringWorkwear: https://www.empoweringworkwear.com Project Nof1 Interview Series: https://www.projectnof1.com Lydia's Portfolio: https://www.lkcyber.com Lydia's Consultancy: https://abundance.studio
Happy New Year! We're back in 2024, and what a year 2023 was for tech! In this episode, we delve into the fascinating and increasingly crucial realm of AI governance. As AI continues to evolve, questions of ethics, accountability, and regulation become paramount. Join us as we explore the challenges and opportunities surrounding AI governance, featuring this week's special expert guest, Lofred Madzou. From the ethical considerations of autonomous systems to the role of policymakers in shaping AI policies, we look at the complex landscape of governing AI. Lofred is the Director of Strategy at Truera – an AI Quality platform to explain, test, debug and monitor machine learning models, leading to higher quality and trustworthiness. Outside his day job, he is a Research Associate at the Oxford Internet Institute (University of Oxford), where he mostly focuses on the governance of AI systems through audit processes. Before joining Truera, he was an AI Lead at the World Economic Forum, where he supported global companies, across industries and jurisdictions, in their implementation of responsible AI processes, and advised various EU and Asia-Pacific governments on AI regulation. Previously, he was a policy officer at the French Digital Council, where he advised the French Government on AI policy. Most notably, he co-drafted the French AI Strategy.
On this episode of the Philosopher's Takeover, we look back on our episode with Tess Buckley. We discuss ableism and the meaning of baselines, posthumanist feminism and Donna Haraway's foundational work "A Cyborg Manifesto". References: A Cyborg Manifesto by Donna Haraway Man-Computer Symbiosis by J. C. R. Licklider Diary of a CEO podcast Follow us on Twitter
This week we were joined by Victoria Vassileva, who is the Sales Director at Arthur. In this episode we get into Victoria's experience working with organisations across sectors to combat ethical challenges and risks, and more specifically how Arthur is solving this with regards to large language models (LLMs). We also approach the topic of the media hype surrounding AI being sentient and why this takes attention away from the real risks and ethical issues that are happening today. Victoria's bio & contact info: LinkedIn - https://www.linkedin.com/in/vvassileva/ Twitter - https://twitter.com/hellovictoriav?lang=en-GB Victoria Vassileva (she/her) is the Sales Director at Arthur. She has spent over a dozen years in and around data, starting as an analyst programming in SAS before transitioning to the GTM side. She now works to help complex organizations bring operational maturity and Responsible and Trustworthy practices to AI/ML initiatives across industries. At Arthur, Victoria focuses primarily on partnering with F100 enterprises to bring comprehensive performance management and proactive risk reduction and mitigation across their entire AI production stack. She is deeply motivated by the opportunity to shift industry practices to a "front-end ethics" approach to place equity and fairness considerations at the forefront of machine learning and automation projects. She holds degrees in Mathematics and French from the University of Texas at Austin.
This week, we welcome the innovative Ruth Ikwu, an AI Ethicist and MLOps Engineer with a solid foundation in Computer Science. As a Senior Researcher at Fujitsu Research of Europe, Ruth delves into AI Security, Ethics and Trust, playing a role in crafting innovative and reliable AI solutions for cyberspace safety. In this episode, she educates us on the evolving landscape of online sex work, discussing how platforms like AdultWork, OnlyFans and PornHub inadvertently facilitate sex trafficking. This is a heavy topic that contains a lot of distressing information about sex trafficking; Ruth's work is extremely important in bringing forward accountability. To learn more about Ruth's work on identifying human trafficking indicators in the UK online sex market - https://link.springer.com/article/10.1007/s12117-021-09431-0 Connect with Ruth - https://www.linkedin.com/in/ruth-eneyi-i-83a699118/
In this episode we are joined by Paul Röttger, CTO & Co-Founder at Rewire, who is completing his PhD in NLP at Oxford University. Paul chats to us about the challenges of tackling hate speech online, why he decided to pursue this challenge in his PhD, and how he started Rewire. More recently, Paul was part of an expert team, known as the 'red team', hired by OpenAI to 'break' GPT-4; he explains what this involved and how it aimed to address some of the model's dangers. If you want to connect with Paul - Twitter - https://twitter.com/paul_rottger?lang=en LinkedIn - https://www.linkedin.com/in/paul-rottger/
The Philosophers Takeover!! This is the first in a new monthly series led by Alba Curry, who is a Philosophy Professor at the University of Leeds. Alba will be joining us (as well as other special guests of her choice) once a month to do a philosophical deep dive into different episodes we have covered throughout the series. Our special guest this week is Maddy Page, a PhD student at the University of Leeds focusing on the Philosophy of Art. In this episode Maddy covers the ontology of artwork and why it's important. Join us as we deep dive into the world of AI-generated art and its value! Can AI be considered an artist, or does art need to be created by a human? Why do we value art based on who the artist is, and what do we define as art? Alba - https://www.linkedin.com/in/albacurry/ Maddy - Twitter: @madeleinesjpage
In this week's episode, Oriana and Amanda discuss Sentient Robots and AI News, exploring the latest news in AI, including the famous 'Letter' signed by Elon Musk. As AI technology advances, we are witnessing the emergence of 'sentient robots' - machines that are claimed to be experiencing emotions, developing personalities, and even exhibiting creativity. In this podcast, we explore sentient robotics and examine their potential impact on society, culture, and the economy. So whether you're a technology enthusiast, industry professional, or just curious about the future of robotics and AI, tune in to the Sentient Robots and AI News episode for thought-provoking discussions and insights into the world of sentient machines. (*)This description was written using ChatGPT and some human editing skills.
This week we are joined by Marc van Meel who is an AI Ethicist and public speaker with a background in Data Science. He currently works as a Managing Consultant at KPMG, where he helps organizations navigate the ethical implications of Artificial Intelligence and Data Science. In this episode we get into the future of technology in our society, AI auditing, the upcoming AI regulation and of course ChatGPT! To contact Marc: https://www.linkedin.com/in/marc-van-meel/
The start of a series of Responsible AI chats with Toju Duke! Toju is a popular keynote speaker, author, and thought leader on Responsible AI. She is a Programme Manager at Google, where she leads various Responsible AI programmes across Google's product and research teams with a primary focus on large-scale models. She is also the founder of Diverse AI, a community interest organisation with a mission to support and champion underrepresented groups to build a diverse and inclusive AI future. She provides consultation and advice on Responsible AI practices. Toju's book “Building Responsible AI Algorithms” is available for preorder. In this episode we focus on why Responsible AI is important to Toju, her work at Google as a Responsible AI Programme Manager, and her new venture Diverse AI. To learn more about Toju: www.tojuduke.com To learn more about Diverse AI: www.diverse-ai.org
*Trigger warning* This episode includes discussion of some harmful prejudices that people with disabilities face, which may be upsetting for some listeners. In this episode we are joined by Tess Buckley, whose primary interests include studying the intersection of AI and disability rights, AI governance and corporate digital responsibility, amplifying marginalised voices in data through AI literacy training (HumansforAI), and computational creativity in music AI systems (personal project). Our conversation covers topics across ableism in biotechnology and society as a whole, and how disability is represented in datasets. We are hoping this episode opens more dialogue, as people with disabilities are often not included in conversations around bias. Connect with Tess on social media: https://www.linkedin.com/in/tess-buckley-a9580b166/
Older and wiser, Oriana and Amanda are back to chatting ethics after a hiatus! In this episode, we dive into the world of AI and the technology behind it. Meet ChatGPT, a large language model developed by OpenAI, and learn about its capabilities, limitations, and potential impact on our daily lives. From language generation to answering complex questions, we'll discover how ChatGPT works and how it's being used to enhance human capabilities. Join us as we engage in a conversation with Prof. Dirk Hovy to understand the ethical implications and the future of this rapidly advancing technology. Get ready to be amazed and informed as we explore the fascinating world of AI. (*)This description was written using ChatGPT and some human editing skills.
This week Alba and Amanda discuss a new book called How Humans Judge Machines by Cesar A. Hidalgo. Get the book at: https://www.judgingmachines.com/ Eric Schwitzgebel's Aiming for Moral Mediocrity: https://faculty.ucr.edu/~eschwitz/SchwitzAbs/MoralMediocrity.htm The puppy cartoon: https://images.app.goo.gl/C4zKG5hsfE6419Ra6
This week Alba Curry joins us to discuss emotion AI, grounded in the story "Under Old Earth". Are we aiming for happiness that is "bland as honey and sickening in the end"? Resources: Under Old Earth by Cordwainer Smith How Emotions Are Made by Lisa Feldman Barrett Affective Computing by Rosalind W. Picard
Could sex robots enhance our intimacy in relationships? This week we are back with an incredible guest Kate Devlin who shares her super interesting research into sex robots and our relationship with technology. Bio: Kate Devlin is Senior Lecturer in the Department of Digital Humanities at King's College London. Having begun her career as an archaeologist before moving into computer science, Kate's research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future technologies will affect us and the society in which we live. Kate has become a driving force in the field of intimacy and technology, running the UK's first sex tech hackathon in 2016. In short, she has become the face of sex robots – quite literally in the case of one mis-captioned tabloid photograph. Her 2018 book, Turned On: Science, Sex and Robots, was praised for its writing and wit.
Are we addicted to our smartphones? How did we function before? This week Amanda shares her journey of giving up her smartphone (influenced by Charles Radclyffe from EthicsGrade), and we look at how social media and smartphones have infiltrated our lives! We also have an announcement - the podcast will be bi-weekly for the summer.
Ever bought a five-star moisturiser only to find out it breaks you out? YutyBazar is tackling waste by ultra-personalising your beauty routine using AI. Find out how in this week's episode! Want to try YutyBazar yourself? https://www.yutybazar.com/ Simi Lindgren is the founder and CEO of YutyBazar. Socials: Twitter: @YUTYBAZAR Insta: yutybazar Twitter: @letschatethics Insta: lets.chat.ethics www.letschatethics.co.uk
How does ESG work when rating big tech on their ethics? This week we are joined by Charles Radclyffe, who is the Co-Founder of EthicsGrade. EthicsGrade is an ESG ratings agency specialising in evaluating companies on their maturity against AI governance best practice. Listen to why Charles decided to give up his phone pre-pandemic.. phone addiction culture has us all trapped!! Twitter: @dataphilosopher LinkedIn: Charles Radclyffe Bio: Charles is the Co-Founder of EthicsGrade and has built and sold three tech companies. In between, he has consulted for large Financial Services organisations on Emerging Technology. Charles advises organisations on how to develop a strategy for the ethical implementation of AI, Automation and Robotics, speaks at events on this subject, co-hosts a soon-to-be-released podcast, and writes a blog on the ethics and societal impact of emerging technology. Charles holds an MA in Law from Cambridge University.
This week we are back, reflecting on the past month of incredible guests covering Chinese philosophy, ethical investing and the future of innovation. We also look at the new EU regulations and how they compare to China's social scoring system. We will be going deeper into the new EU regulations in a coming episode.
What will tech look like in 2025? This week we are joined by the amazing Charlie Oliver, CEO and founder of the incredible platform Tech 2025. We get deep into voice recognition and the effect of technology on children, and Charlie shares her journey of founding Tech 2025 and why it was so important to her. Twitter: @itscomplicated LinkedIn: Charlie Oliver Bio: Charlie’s years of experience in the trenches of old media include working in advertising in New York at such media goliaths as BBDO Worldwide and Condé Nast, producing sitcoms and dramas at Sony Pictures Entertainment, Paramount Pictures, Warner Brothers, Dreamworks and Oscar-award-winning indie production companies, and event management at the Sundance Film Festival. After spending several years in corporate law in document review at global firms (White & Case, Clifford Chance and Wachtel Lipton, to name a few), Charlie segued seamlessly into tech and new media as a web video producer, where she co-created and co-produced experimental video projects such as an 8-hour live webathon for the 2008 presidential election and numerous web video series. Soon thereafter, Charlie launched ArtofTalk.tv (a site that brought the vast world of TV, web and radio talk shows online to users in bite-size video snippets of debates and interviews in social media). In 2009, Charlie launched Served Fresh Media™ (a New York-based company), where her team provides digital marketing strategy, event management, product development, and senior management advisory for companies. Clients Served Fresh Media has worked with include IBM, New York Press Club, Cognizant, Digital Flash, Digital Realty, Tierpoint, and It’s About Time, among others. In January 2017, Charlie launched Tech 2025 — a community and platform for professionals to learn about the next wave of disruptive, emerging technologies and to facilitate discourse about the impact of these technologies on society with an emphasis on problem-solving.
Having produced over 80 events since they launched, coupled with providing professional services, Tech 2025 has quickly gained a reputation for helping professionals and companies to understand and embrace emerging technologies and the whirlwind changes they bring, and to strategize for the future impact of accelerating innovation.
Will venture capitalists see the value in ethical investing? Can there be growth with ethics? This week we are joined by Mike Butcher, the Editor-at-large of TechCrunch, to share his expertise and knowledge about VC investment and whether VCs can see the value in ethics. Mike Butcher MBE is Editor-at-large of TechCrunch, and co-founder of the Pathfounder editorial events and reports series on ‘impact innovation’. Mike has been named one of the most influential journalists in European technology by Wired and The Daily Telegraph, among others. He has interviewed Tony Blair, Dmitry Medvedev, Kevin Spacey, Lily Cole, Pavel Durov and many other tech leaders and celebrities. He co-founded The Europas Awards for European startups, the non-profits TechVets and Techfugees, and the co-working network TechHub. He is a regular tech commentator on the BBC, Sky News, CNN, CNBC and Aljazeera and has been a judge on The Apprentice UK. GQ magazine named him one of the 100 Most Connected Men in the UK and he’s a “Maserati 100 innovator”. He has advised previous UK governments on tech startup policy and was awarded an MBE in the Queen’s Birthday Honours. Twitter: @mikebutcher
This week we are joined again by Alba Curry and Ryan Harte to discuss how Chinese history and philosophy have shaped China's AI policies. This episode is part 2 of the conversation. Ryan Harte holds a Ph.D. in the comparative thought and literature of China and Greece, specialising in ethics. He has lived and taught in East Asia and the U.S. In autumn 2021 he will take up a post as Assistant Professor of Asian and comparative philosophy at Utah Valley University. Alba Cercas-Curry is a PhD student at the University of California studying comparative literature, focusing on anger in ancient China and Greece.
This week we are joined by Alba Curry and Ryan Harte to discuss how Chinese history and philosophy have shaped China's AI policies. This episode is part 1, and next week you can listen to the rest of the conversation! Ryan Harte holds a Ph.D. in the comparative thought and literature of China and Greece, specialising in ethics. He has lived and taught in East Asia and the U.S. In autumn 2021 he will take up a post as Assistant Professor of Asian and comparative philosophy at Utah Valley University. Alba Cercas-Curry is a PhD student at the University of California studying comparative literature, focusing on anger in ancient China and Greece. China's AI Ethics principles Books from Alba: 1) Daniel Bell (2006) Beyond Liberal Democracy: Political Thinking for an East Asian Context. Princeton University Press.
Is AI a new tool for art or does it take away the human touch? This week we look at digital art and art created by AI after the recent news that Christie's Auction House sold the artist Beeple's digital art for $69,346,250. Article mentions: Art Net Christie's Auction House
Is Facebook's FAIR department truly fair? This week we chat about Facebook's role in spreading misinformation and take a look at the MIT Technology Review article, 'How Facebook got addicted to spreading misinformation' by Karen Hao. Article: Link
Can ethical AI ever work in industry? Will it take regulation, or will corporations take it upon themselves? This week we look more closely at Google's approach to ethical AI and predictions for the future of ethical AI in industry. Article mention: link
What does human-centric tech actually mean? How can we make products more accessible? This week we are joined by the amazing Anne T Griffin to share her experience and expertise in tech. Anne is a leader in product management, a startup advisor, and subject matter expert in AI, blockchain, tech ethics and product inclusivity. She is the founder and Chief Product Officer at Griffin Product & Growth, and the Chief Product & Curriculum Officer of Tech 2025, an emerging technology community and platform to teach professionals to prepare for the future of work. Her workshop Human First, Product Second teaches organizations and professionals how to think about building more inclusive and ethical tech products. She has lectured at prestigious universities across North America such as Columbia University, the University of Montreal, and Morgan State University, spoken at major events such as SXSW, and created courses for O’Reilly Media. She has built her career in tech over the last decade working with organizations such as Priceline, Microsoft, Comcast, Mercedes-Benz, and ConsenSys, the premier blockchain software technology company on the Ethereum blockchain. Anne continues to work with and research the practical human aspects of technology and building products with emerging and disruptive technologies. Outside of her work, Anne is a voracious learner, frequent traveler (when we’re not in a pandemic), and seriously committed to her self-care and workouts. Twitter: @annetgriffin LinkedIn: Anne T Griffin Website: www.annetgriffin.com Book mention: Building for Everyone: Expand Your Market With Design Practices from Google's Product Inclusion Team by Annie Jean-Baptiste
This week we chat about the recent New Yorker article 'Who Should Stop Unethical AI?', Google firing their co-leads of Ethical AI, and we bring back the conversation about publishing unethical research. Article: Who Should Stop Unethical AI?
Is AI sustainable? What is the carbon footprint and long term sustainability of machine learning? This week we look at an academic paper 'Energy and Policy Considerations for Deep Learning in NLP' by Emma Strubell, Ananya Ganesh and Andrew McCallum from the University of Massachusetts.
Will robots take over the world and come and get us? Can machines have feelings? This week we discuss how we got into the space of Ethical AI and define what ethical AI actually is in practical terms.
Why is combating online hate speech so hard? How do we get to a safe space on social media? We are back with Zeerak Waseem for the second part of our conversation about online hate speech. Zeerak is currently a PhD student at the University of Sheffield. He researches and specialises in online hate speech. Disclaimer: we might have gone off topic and got political again..
What should Big Tech do about hate speech online? What CAN they do? This week we are joined by Zeerak Waseem, who researches and specialises in online hate speech. Zeerak is currently a PhD student at the University of Sheffield. Full disclosure: we go completely off-track.
Is trust the make-or-break ingredient in the future of AI? This week we continue our exploration of the EU's Guidelines for Trustworthy AI and discuss the importance of trust in AI systems. In the podcast we mention the work of Z-Inspection, led by Dr. Roberto V. Zicari from the Frankfurt Big Data Lab, Goethe University Frankfurt, Germany. Below we have linked the published AI Ethics Evaluation by James Brusseau, who is part of the research team for the AI medical research mentioned. What a Philosopher Learned at an AI Ethics Evaluation
This week we discuss a paper where researchers detect moral values from people's writing. Is a Myers-Briggs test for morality the next step in employee vetting? Read an article about the paper here (paper linked in article).
Happy New Year! This week we have another amazing guest joining us: Mark Durkee from the Centre for Data Ethics and Innovation. In this episode Mark discusses the recent report that CDEI published, the Review into Bias in Algorithmic Decision-Making. Mark Durkee works for the Centre for Data Ethics & Innovation, leading a portfolio of work including the recently published Review into Bias in Algorithmic Decision-Making. Prior to joining CDEI in 2019, he worked in a variety of technology strategy, architecture and cyber security roles in the UK government, worked as a software engineer, and completed a PhD in theoretical physics. The Centre for Data Ethics and Innovation (CDEI) is an independent expert committee, led by a board of specialists, set up and tasked by the UK government to investigate and advise on how we maximise the benefits of these technologies. Our goal is to create the conditions in which ethical innovation can thrive: an environment in which the public are confident their values are reflected in the way data-driven technology is developed and deployed; where we can trust that decisions informed by algorithms are fair; and where risks posed by innovation are identified and addressed. More information about CDEI can be found at www.gov.uk/cdei.
Feliz Navidad everyone - our Christmas episode is here. Expect laughter, mince pies and mulled wine as we both reflect on the craziness of 2020 as well as the best and worst moments in ethical AI.
BONUS EP! We have Ivana Bartoletti joining us for a compelling conversation about the use of our data and racist/sexist AI. Ivana Bartoletti is Technical Director, Privacy at Deloitte, and an internationally recognised thought leader in the field of responsible technology. Author of "An Artificial Revolution: On Power, Politics and AI", she starts a Visiting Policy Fellowship at Oxford University on international flows of data in January 2021, alongside her full-time job. She is founder of the influential Women Leading in AI network.
This week we chat about two hot topics in the news from the past two weeks. Can AI be a matchmaking tool and enhance our love lives for the better? The news of Japan funding AI matchmaking to solve the declining birth rate. Should one company be allowed ultimate power and monopoly? The Federal Trade Commission (USA) lawsuit against Facebook.
Following on from our transparency episode, where we spoke about the issues surrounding big tech funding ethical AI research, this week we discuss the controversial and wrongful firing of Timnit Gebru from Google. Paper mentioned: MIT Technology Review
This week we discuss bias in tech that comes from the design of the tech itself, rather than the bias that is automatically learned from data. Books mentioned: Design Justice by Sasha Costanza-Chock; Invisible Women by Caroline Criado-Perez
This week we are back discussing the EU Guidelines for Trustworthy AI, looking at biased data sets. This is a two-part episode; next week we will talk about bias by design.
Should unethical research be published? As a peer reviewer should your own ethical values be considered? This week we discuss an ethics panel that took place at EMNLP 2020 - The 2020 Conference on Empirical Methods in Natural Language Processing.
Can big tech really be transparent? This week we discuss The Grey Hoodie Project, a paper by Mohamed Abdalla & Moustafa Abdalla. Continuing with the EU Guidelines for Trustworthy AI, we focus on the principle of Transparency.
This week we discuss Privacy and Data Governance, continuing our discussion of the EU Guidelines for Trustworthy AI.
This week Oriana and Amanda discuss Technical Robustness and Safety, the second of the EU's Guidelines for Trustworthy AI.
This is the first episode of 7 where we analyse and discuss the EU's Ethics Guidelines for Trustworthy AI. In this week's episode we look at the first principle, Human Agency and Oversight. Can true human agency exist in tech?
In this week's episode we tackle the topic of who gets to define what is ethical, looking at ethical guidelines and "The Algorithmic Colonization of Africa" by Abeba Birhane.
This week we discuss the Netflix documentary The Social Dilemma and Ivana Bartoletti’s book “An Artificial Revolution”.
Can a robot ever understand the difference between right and wrong? This week we discuss a research paper from the Department of Computer Science at TU Darmstadt, Germany, about Alfie, the robot with a moral compass. Read the paper here to learn more! Thanks for listening, and if you enjoyed today's episode please subscribe!
Want to hear about interesting topics in the AI space? Join Oriana Medlicott and Amanda Curry as they discuss anything from academic papers to art, movies and books, with a special focus on ethics.