Podcast appearances and mentions of Susie Alegre

  • 24 PODCASTS
  • 35 EPISODES
  • 46m AVG DURATION
  • 1 NEW EPISODE MONTHLY
  • LATEST: Mar 9, 2025




Latest podcast episodes about Susie Alegre

Humanism Now
28. Susie Alegre on the Algorithmic Assault on Human Rights: How AI Threatens Our Core Freedoms

Mar 9, 2025 · 39:58 · Transcription available


AI technologies pose significant threats to fundamental human rights, reinforcing historical biases and power imbalances. This week, we are joined by Susie Alegre, international human rights lawyer and author, to explore the impact of generative AI on gender and racial equality, labour markets, and information ecosystems. Susie has worked for international NGOs such as Amnesty International and for organisations including the UN, the EU and the Council of Europe. She has published two books on the critical issue of technology's impact on human rights: "Freedom to Think" (2022), a Financial Times Technology Book of the Year 2022 and shortlisted for the Royal Society of Literature Christopher Bland Prize 2023, and "Human Rights, Robot Wrongs: Being Human in the Age of AI", published in 2024.

The episode covers:
  • How AI systems, like ChatGPT, perpetuate gender and racial biases
  • The "Pygmalion" pattern in AI design
  • Potential long-term effects on skills, education and social interactions
  • The rise of "ultra-processed information" and its consequences for the internet
  • Legal risks and the role of effective regulation
  • Enforcement in addressing AI's human rights risks
  • When AI applications may be valuable, and when they are not

Pondering AI
Righting AI with Susie Alegre

Jan 22, 2025 · 46:12


Susie Alegre makes the case for prioritizing human rights and connection, holding AI systems to account, minding the right gaps, and resisting unwitting AI dependency. Susie and Kimberly discuss the Universal Declaration of Human Rights (UDHR); legal protections and access to justice; human rights laws; how court cases impact legislative will; the wicked problem of companion AI; abdicating accountability for AI systems; Stepford Wives and gynoid robots; human connection and agency; minding the wrong gaps with AI systems; AI dogs vs. AI pooper scoopers; the reality of care and legal work; writing to think; cultural heritage and creativity; pausing for thought; unwittingly becoming dependent on AI; and prioritizing people over technology.

Susie Alegre is an acclaimed international human rights lawyer and the author of Freedom to Think: The Long Struggle to Liberate Our Minds and Human Rights, Robot Wrongs: Being Human in the Age of AI. She is also a Senior Fellow at the Centre for International Governance Innovation (CIGI) and Founder of the Island Rights Initiative. Learn more at her website. A transcript of this episode is available.

The Evolving Leader
Being Human in the Age of AI with Susie Alegre

Nov 13, 2024 · 55:04 · Transcription available


In this episode, we're delighted to welcome Susie Alegre back to The Evolving Leader. Susie is a leading human rights barrister at the internationally renowned Garden Court Chambers. She has been a legal pioneer in digital human rights, in particular the impact of artificial intelligence on the human rights of freedom of thought and opinion, and she is also Senior Research Fellow at the University of Roehampton.

Artificial intelligence is starting to shape every aspect of our daily lives, from how we think to who we love. In her latest book, 'Human Rights, Robot Wrongs: Being Human in the Age of AI', Susie Alegre explores the ways in which artificial intelligence threatens our fundamental human rights – including the rights to life, liberty and fair trial; the right to private and family life; and the right to free expression – and how we can protect those rights. This is an important listen for us all.

Other reading from Jean Gomes and Scott Allender:
Leading In A Non-Linear World (J Gomes, 2023)
The Enneagram of Emotional Intelligence (S Allender, 2023)

Social:
Instagram @evolvingleader
LinkedIn The Evolving Leader Podcast
Twitter @Evolving_Leader
YouTube @evolvingleader

The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.

IIEA Talks
Being Human in the Age of AI

Nov 8, 2024 · 36:05


Artificial Intelligence (AI) has seized public consciousness in recent years, but public attention has often focused on the technological aspects of AI. As AI is inserted into every part of daily life, from dating to doctor consultations, it is important to ensure that the technology is adopted in a human-centric way. Susie Alegre examines AI through the lens of international human rights law to explore the legal frameworks we need to build the human-centric future we want.

About the Speaker: Susie Alegre is an international lawyer specialising in technology and human rights and a Senior Fellow at the Centre for International Governance Innovation (CIGI). A barrister and Associate at Garden Court Chambers in London, she has worked in the field of public international law and human rights around the world for organisations including Amnesty International, the European Union, the OSCE and the UN. She is the author of Freedom to Think, a Financial Times Technology Book of the Year, and Human Rights, Robot Wrongs: Being Human in the Age of AI, published in 2024.


Somewhere on Earth: The Global Tech Podcast
Do we need new laws to control AI? Also the Prix Ars Electronica 2024 winner speaks to SOEP

Jul 16, 2024 · 35:03


Do we need new laws to control AI? Will current legislation be sufficient to control the development of AI? How is AI affecting our human rights? Is AI good enough to draft legal submissions? Does automation bias make us want to trust the technology more than we should? These are just some of the questions SOEP is asking international human rights attorney Susie Alegre. She has just published her latest book, "Human Rights, Robot Wrongs", and will navigate us through the impact of AI on human rights and our interaction with machines.

"Smoke and Mirrors" - the Prix Ars Electronica 2024
Beatie Wolfe, pioneering artist and composer, has been awarded the Prix Ars Electronica "Golden Nica", probably the most prestigious media arts award in the world, for her work "Smoke and Mirrors". Using NASA data on methane emissions, the piece represents six decades of NASA climate data – in this case rising methane levels – set alongside advertising slogans deployed by Big Oil companies to question climate change during this time. Beatie has also beamed her music into space and woven her second album into an NFC-enabled jacket. SOEP discusses her piece, which reflects the interface between art, music, and technology.

The programme is presented by Gareth Mitchell and the studio expert is Ghislaine Boddington.

More on this week's stories:
Human Rights, Robot Wrongs
Beatie Wolfe - Prix Ars Electronica 2024 winner

Editor: Ania Lichtarowicz
Production Manager: Liz Tuohy
Recording and audio editing: Lansons | Team Farner

For new episodes, subscribe wherever you get your podcasts or via this link: https://www.buzzsprout.com/2265960/supporters/new
Contact us by email: hello@somewhereonearth.co
Send us a voice note via WhatsApp: +44 7486 329 484
Find a Story + Make it News = Change the World

Off Air... with Jane and Fi
Making noises getting off the sofa...

Jun 17, 2024 · 55:21


Mystic Jane is defending her predictions after a weekend of football. More to come on that front... They also discuss milestone birthdays, the pros and cons of France, and the lunch hour. Plus, Fi speaks to leading human rights barrister Susie Alegre about her book 'Human Rights, Robot Wrongs'. Our next book club pick has been announced! 'Missing, Presumed' is by Susie Steiner.

If you want to contact the show to ask a question and get involved in the conversation, please email us: janeandfi@times.radio. Follow us on Instagram! @janeandfi

Podcast Producer: Eve Salusbury
Executive Producer: Rosie Cutler

FUTURES Podcast
Human Rights & Robot Wrongs w/ Dr. Susie Alegre

May 22, 2024 · 31:43


Human rights lawyer Dr. Susie Alegre shares her insights into the threat artificial intelligence poses to human creativity, the importance of the Universal Declaration of Human Rights (UDHR) in safeguarding freedom of thought, and applying existing laws to regulate the development and deployment of emerging technologies. Dr. Susie Alegre is a leading international human rights lawyer and Associate at Garden Court Chambers. She has been a legal pioneer in digital human rights, in particular the impact of artificial intelligence on the human rights of freedom of thought and opinion. She is also a Senior Research Fellow at the University of Roehampton, and a Senior Fellow at CIGI. This episode was recorded in front of a live audience for an event in partnership with Engage Works.

About the host: Luke Robert Mason is a British-born futures theorist who is passionate about engaging the public with emerging scientific theories and technological developments. He hosts documentaries for Futurism, and has contributed to BBC Radio, BBC One, The Guardian, Discovery Channel, VICE Motherboard and Wired Magazine.

Credits: Producer & Host: Luke Robert Mason. Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast. Follow Luke Robert Mason on Twitter at @LukeRobertMason. Subscribe & Support the Podcast at http://futurespodcast.net

Intelligence Squared
Being Human in an AI World, with Susie Alegre

May 10, 2024 · 37:42


Artificial intelligence is no longer a figment of our imagination or a plot pulled from the pages of science fiction. Recent rapid advances mean it is now seeping into ever more aspects of our daily lives. Leading human rights barrister Susie Alegre has been analysing the concept of what it means to be human within a digital world for years. Her latest book, Human Rights, Robot Wrongs, focuses on where the spirit of humanity will find itself in a near future almost certainly defined by human-like yet empathy-free algorithms made from ones and zeroes. Joining Alegre in conversation for this episode is Head of Programming for Intelligence Squared, Conor Boyle.

How To Academy
Susie Alegre - Protecting Freedom of Thought in the Digital Age

Jun 6, 2023 · 29:36


Susie Alegre is a leading human rights barrister and a pioneer of digital human rights in the age of AI and big tech. In her book Freedom to Think, she explores the basis upon which we have a right to our own ideas and opinions - and how we can protect that right in an age of digital surveillance and ever more advanced forms of propaganda. She spoke to us about the history of thought control, connecting the dots between Socrates, Galileo, witch trials and the contemporary world.

Tech Mirror
Tech & Democracy: Brain Surgery in a Taxi

Apr 24, 2023 · 43:07


In this episode of Tech Mirror, Johanna speaks to Nitin Pai, co-founder and director of the Takshashila Institution, an independent think tank and school of public policy based in Bengaluru. In a thought-provoking conversation, the pair discuss the complex relationship between the tech industry and government, the global contest between open and closed information orders, the hacking of minds, and how we can use technology to strengthen 21st-century democracy.

Relevant Links:
The Takshashila Institution: https://takshashila.org.in/
Takshashila Report - An Open Tech Strategy for India (A Working Draft): https://takshashila.org.in/research/an-open-tech-strategy-for-india
Freedom to Think: The Long Struggle to Liberate Our Minds by Susie Alegre: https://www.goodreads.com/en/book/show/60450548
Conceptualising Information Warfare: https://notes.nitinpai.in/In+no+particular+order/Conceptualising+Information+Warfare
Technology & Policy: GCPP: https://school.takshashila.org.in/gcpp-technology-policy

Follow:
Nitin Pai on Twitter: @acorn
Takshashila on Twitter: @TakshashilaInst
Tech Policy Design Centre on Twitter: @TPDesignCentre

The Actionable Futurist® Podcast
S5 Episode 12: Susie Alegre on Generative AI, ChatGPT and the freedom to think

Apr 20, 2023 · 26:30


Does Generative AI restrict our freedom to think if we ask it to do everything for us? To answer this question, I spoke with international human rights lawyer, author and speaker Susie Alegre, author of "Freedom to Think: The Long Struggle to Liberate our Minds". We last spoke to Susie on the podcast in 2022, before the Generative AI tool ChatGPT had been released, so in this show Susie provided an update on her views on ChatGPT and AI in general. We looked at the current trends in AI, and Susie provided some advice for the big tech companies about how they take the next steps with these powerful tools.

In a recent interview, she said: "If artificial intelligence doesn't know the answer, it simply makes up a plausible response, but it automates the prejudices of our societies and delivers them with the confidence of a crypto salesman". She asks the question we're all asking: ChatGPT, what is it for?

To end the podcast, she provided three actionable tips on how to best use Generative AI tools:
1. Think about why you're using it instead of using your own brain or your own time.
2. Don't believe a word it says; double-check, possibly through Google, or maybe go to a library.
3. Play with it. But don't trust it.

This show provides a fascinating update to our previous podcast and is very timely. More on Susie can be found on her website. You can purchase her book here. Listen to our first podcast here.

Your Host: Actionable Futurist® & Chief Futurist Andrew Grill
For more on Andrew - what he speaks about and recent talks - please visit ActionableFuturist.com

Andrew's Social Channels:
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Andrew's upcoming book

The Evolving Leader
Freedom to Think with Susie Alegre

Apr 19, 2023 · 49:41


This week on the Evolving Leader podcast, co-hosts Jean Gomes and Scott Allender talk to human rights barrister Dr Susie Alegre. Susie is a legal pioneer in digital human rights, in particular the impact of artificial intelligence on the human rights of freedom of thought and opinion. Without a moment's pause, many of us will share our most intimate thoughts with the largest tech companies, and in doing so make it possible for them to categorise us and potentially jump to troubling conclusions about who we are. In her new book Susie argues that only by recasting our human rights for the digital age can we safeguard our futures.

'Freedom to Think: The Long Struggle to Liberate our Minds' (Susie Alegre, 2022)

0.00 Introduction
3.18 Can you tell us about your career and what's led you to your current focus?
5.20 You argue that the online environment undermines our independence of opinion, and in your book you illustrate this by starting with a brief history of legal freedoms to both holding beliefs and their expression.
9.44 I'd like to focus on this manipulation. It's hard to keep up with what's happening in terms of the speed and number of platforms that are spreading ideas. How do we balance the fact that this is happening with the rights to form our own thoughts?
14.18 This whole area must be incredibly challenging. Can you give us a sense of what you face in trying to move legislation around like this?
18.10 How do you feel about your own experience of being manipulated online?
19.47 Can we turn to AI and how technology is now thinking that it can infer what our inner thoughts and feelings are?
24.01 What are your thoughts on big tech companies' approach to ethics?
26.06 How do you think organisations in the tech space are going to give the application of human rights more teeth?
30.05 What are your thoughts on how the Chinese and Russian governments are wielding influence over their populations?
33.55 If we take the GDPR and the Digital Services Act as examples, we can see that it's a tricky balancing act to introduce legislation to achieve those goals and engage the public and commercial sectors. Can we do a better job in capturing the public's imagination in these things?
37.34 What are the implications for leaders and organisations as they increasingly become dependent on digital and social technologies to prosper?
40.10 What reaction have you had across the political spectrum to your ideas?
42.14 You talk about how nobody wants to be manipulated and nobody thinks they are being manipulated. How do people get more honest and take more inventory of the ways that perhaps they are being manipulated?
45.41 So thinking about the freedom to think for younger people, what advice might you give them?
47.34 What are the next set of challenges for you? What are you working on at the moment?

Social:
Instagram @evolvingleader
LinkedIn The Evolving Leader Podcast
Twitter @Evolving_Leader

The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.

REGISTER NOW for HOW TO BUILD YOUR MINDSET with Jean Gomes, 26 April 2023, 17:00 (GMT+1)

Carnegie Council Audio Podcast
From Another Angle: Freedom of Thought, with Susie Alegre

Mar 21, 2023 · 33:49


In this first episode, host Hilary Sutcliffe explores . . . our freedom to think from another angle. We might feel that what goes on in our heads remains in our heads, but international human rights lawyer Susie Alegre explores the surprising ways that our innermost thoughts are being exposed and manipulated through the deployment of artificial intelligence (AI). She explains how what is often seen as the most fundamental human right, our freedom of thought, is being eroded; what this means in practice, and what we can do to protect what goes on in our minds. Alegre is the author of an award-winning book, Freedom to Think: The Long Struggle to Liberate Our Minds. You can read her article "Freedom of Thought is a Human Right" in Wired's "World in 2023" issue or browse her extensive broadcasting and writing on this subject on her website.

The Seen and the Unseen - hosted by Amit Varma
Ep 318: The Liberal Nationalism of Nitin Pai

Feb 27, 2023 · 332:55


The task of nation-building did not end with our founders, and does not stop at our politicians. It's up to us to build the India we want to see. Nitin Pai joins Amit Varma in episode 318 of The Seen and the Unseen to talk about his life, his learnings and his liberal nationalism. (FOR FULL LINKED SHOW NOTES, GO TO SEENUNSEEN.IN.) Also check out: 1. Nitin Pai on his own website, Mint & Mastodon . 2. The Nitopadesha -- Moral Tales for Good Citizens. 3. The archives of The Acorn, Nitin Pai's blog. And its current avatar. 4. Nitin Pai's ideas, notes and current research and teaching. 5. The Takshashila Institution. 6. Seven Tenets of Indian Nationalism -- Nitin Pai. 7. In support of a liberal nationalism -- Nitin Pai. 8. A republic - if we can keep it -- Nitin Pai. 9. Saving the Nation From Nationalists -- Nitin Pai. 10. The real problem is that we have too little republic -- Nitin Pai. 11. The operating system of liberal democracy needs a major upgrade -- Nitin Pai. 12. Social harmony is a matter of national interest -- Nitin Pai. 13. Liberal democracies must protect their citizens' minds from being hacked -- Nitin Pai. 14. Understanding Foreign Policy — Episode 63 of The Seen and the Unseen (w Nitin Pai). 15. Russia, Ukraine, Foreign Policy -- Episode 268 of The Seen and the Unseen (w Pranay Kotasthane and Nitin Pai). 16. The City and the City — China Miéville. 17. The State of Our Economy -- Episode 252 of The Seen and the Unseen (w Puja Mehra and Mohit Satyanand). 18. The Tragedy of Our Farm Bills — Episode 211 of The Seen and the Unseen (w Ajay Shah). 19. Who We Are and How We Got Here — David Reich. 20. Early Indians — Tony Joseph. 21. Early Indians — Episode 112 of The Seen and the Unseen (w Tony Joseph). 22. The Moral Arc: How Science Makes Us Better People — Michael Shermer. 23. History of European Morals — WEH Lecky. 24. The Expanding Circle: Ethics, Evolution, and Moral Progress — Peter Singer. 25. How the BJP Wins — Prashant Jha. 26. 
The BJP's Magic Formula — Episode 45 of The Seen and the Unseen (w Prashant Jha). 27. Caste, Capitalism and Chandra Bhan Prasad — Episode 296 of The Seen and the Unseen. 28. Episodes of The Seen and the Unseen w Pranay Kotasthane: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. 29. Rohini Nilekani Pays It Forward -- Episode 317 of The Seen and the Unseen. 30. Samaaj, Sarkaar, Bazaar : A citizen-first approach — Rohini Nilekani. 31. The Crowd: A Study of the Popular Mind — Gustave le Bon. 32. Crowds and Power — Elias Canetti. 33. EO Wilson on Amazon, Wikipedia and Britannica. 34. Narendra Modi takes a Great Leap Backwards — Amit Varma (on Modi, Mao and locusts). 35. FAQ: Why Anna Hazare is wrong and Lok Pal a bad idea -- Nitin Pai. 36. Sadanand Dhume on Twitter -- and this podcast! 37. Social media is an existential threat to civilisation -- Nitin Pai. 38. Reframing the social media policy debate -- Nitin Pai. 39. The coming regulation of social media is an opportunity for India -- Nitin Pai. 40. The Double ‘Thank-You' Moment — John Stossel. 41. Thinking Fast and Slow — Daniel Kahneman. 42. Human — Michael S Gazzaniga. 43. The Interpreter — Amit Varma. 44. The Elephant in the Brain -- Kevin Simler and Robin Hanson. 45. Freedom to Think -- Susie Alegre. 46. Addiction by Design: Machine Gambling in Las Vegas — Natasha Dow Schüll. 47. The Importance of the 1991 Reforms — Episode 237 of The Seen and the Unseen (w Shruti Rajagopalan and Ajay Shah). 48. The Forgotten Greatness of PV Narasimha Rao — Episode 283 of The Seen and the Unseen (w Vinay Sitapati). 49. The Life and Times of Montek Singh Ahluwalia — Episode 285 of The Seen and the Unseen. 50. The original Takshashila. 51. Understanding Gandhi. Part 1: Mohandas — Episode 104 of The Seen and the Unseen (w Ram Guha). 52. Understanding Gandhi. Part 2: Mahatma — Episode 105 of The Seen and the Unseen (w Ram Guha). 53. Hind Swaraj — MK Gandhi. 54. Nikita -- Elton John. 55. 
The Importance of Cities — Episode 108 of The Seen and the Unseen (w Reuben Abraham & Pritika Hingorani). 56. The Gentle Wisdom of Pratap Bhanu Mehta -- Episode 300 of The Seen and the Unseen. 57. The Arthashastra -- Kautilya 58. On Exactitude in Science — Jorge Luis Borges. 59. Emergent Ventures. 60. Friedrich Hayek on Wikipedia, Britannica, Stanford Encyclopedia of Philosophy and Econlib. 61. Milton Friedman on Amazon, Wikipedia, Britannica and Econlib. 62. Arshia Sattar and the Complex Search for Dharma -- Episode 315 of The Seen and the Unseen. 63. Every Act of Government Is an Act of Violence — Amit Varma. 64. The Generation of Rage in Kashmir — David Devadas. 65. Counterinsurgency Warfare — David Galula. 66. We Won't Need To Fight A War If We Can Win The Peace — Amit Varma. 67. Kashmir and Article 370 -- Episode 134 of The Seen and the Unseen (w Srinath Raghavan). 68. Think the Unthinkable (2008) -- Vir Sanghvi. 69. Independence Day for Kashmir (2008) -- Swaminathan S Anklesaria Aiyar. 70. The Anti-Defection Law — Episode 13 of The Seen and the Unseen (w Barun Mitra). 71. Our Parliament and Our Democracy — Episode 253 of The Seen and the Unseen (w MR Madhavan). 72. Abby Philips Fights for Science and Medicine — Episode 310 of The Seen and the Unseen. 73. Why Read the Classics? — Italo Calvino. 74. History Of Western Philosophy -- Bertrand Russell. 75. Ideas: A History from Fire to Freud -- Peter Watson. 76. Arthashastra -- Kautilya (translated by Shama Shastri). 77. The Upanishads. 78. The Mahabharata -- translated by Bibek Debroy. 79. Brihatkatha, Kathasaritsagara, Panchatantra and Hitopadesha. 80. Charvaka and Jayarāśi Bhaṭṭa. 81. Tattvopaplavasiṃha -- Jayarāśi Bhaṭṭa. 82. The Hitchhiker's Guide to the Galaxy -- Douglas Adams. 83. Catch 22 -- Joseph Heller. 84. Commanding Hope -- Thomas Homer-Dixon. 85. Paul Auster, David Mitchell, Haruki Murakami, Ryu Murakami and Terry Pratchett on Amazon. 86. Piercing -- Ryu Murakami. 87. 
2021 - The Year in Fiction -- Nitin Pai. 88. Bhimsen Joshi, Kishore Kumar, Hemant Kumar, Radiohead, Norah Jones, Louis Armstrong, Nina Simone, Himesh Reshammiya and Yehudi Menuhin on Spotify. 89. Take Five -- The Dave Brubeck Quartet. Check out Amit's online course, The Art of Clear Writing. And subscribe to The India Uncut Newsletter. It's free! Episode art: ‘The Bigger Picture' by Simahina.

The Actionable Futurist® Podcast
S4 Episode 24: International Human Rights Lawyer, Author and Speaker Susie Alegre on the Freedom to Think

Dec 14, 2022 · 47:30


As an international lawyer, author and speaker, Susie has worked on some of the most challenging legal and political issues of our time, including human rights and security, combating corruption in the developing world, protecting human rights at borders, the human rights impact of climate change on small island states, privacy, cybersecurity, disinformation, data protection and neurotech. Her experience includes both judicial and oversight roles in the UK and internationally. She has particular expertise in tech ethics and neuroscience through the lens of human rights, bringing both legal skills and a background in philosophy to the key ethical questions of our time.

Her new book, "Freedom to Think: The Long Struggle to Liberate our Minds", is a fascinating read and is available from Waterstones and Amazon (Australia: trade paperback and hardback). More on Susie can be found on her website.

Your Host: Actionable Futurist® Andrew Grill
For more on Andrew - what he speaks about and recent talks - please visit ActionableFuturist.com

Andrew's Social Channels:
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Andrew's upcoming book

Kids Law
How laws can protect the impact of the internet on our thoughts and views

Sep 6, 2022 · 17:24


In this episode, Alma-Constance and Lucinda discuss the role of the internet, how it can affect the way we think, and how laws can protect the way we express our thoughts and views online. They speak to Susie Alegre, international human rights barrister at Doughty Street Chambers and author of Freedom to Think. She is a pioneer in looking at digital human rights, and her book explores the impact of artificial intelligence on the human rights to freedom of thought and opinion.

She tells us about:
  • Why we need laws to protect how people think and express their views online
  • How technology can affect what we think and how we form views
  • The impact on children and young people

When Susie was 10 years old, she loved writing, reading and talking - and interviewing her pony!

Alma-Constance and Lucinda would love to hear from you. If you have any questions or ideas about a topic or someone you'd like us to interview, please contact us through the website, www.kidslaw.info, or through social media on Twitter, Facebook, and Instagram @KidsLawInfo. You can also email us: kidslaw@spark21.org. Please subscribe, rate, and share with your friends!

References and Resources:
https://susiealegre.com
Freedom to Think - The Long Struggle to Liberate Our Minds
https://www.doughtystreet.co.uk/barristers/susie-alegre-associate
https://5rightsfoundation.com/our-work/child-online-protection/
https://www.youtube.com/watch?v=YUyLDpyzoJk
https://www.cigionline.org/static/documents/PB_no.165.pdf

The Data Diva E93 - Susie Alegre and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Play Episode Listen Later Aug 16, 2022 38:40 Transcription Available


Debbie Reynolds, “The Data Diva,” talks to Susie Alegre, international human rights lawyer and author of Freedom to Think: The Long Struggle to Liberate Our Minds. We discuss her background, initially in philosophy and then in human rights law; Cambridge Analytica and algorithmic power; the advantage of a philosophical background in critical thinking; her concerns about privacy in technology; the Metaverse and its effect on behavior; emotional AI; how new sensors are gathering ever more significant and revealing data; the false analogy of the Internet as a library, given that search shows each of us different results; the social cost of data privacy abuse; reducing the negative impact on people; considering the potential harm of technology first; the UK Algorithmic Transparency Framework; the importance of simple information given its volume; and her hope for data privacy in the future. Support the show

Dark Mode Podcast
#16 - Human rights in the digital (mis)information age - Susie Alegre

Dark Mode Podcast

Play Episode Listen Later Aug 8, 2022 70:46


In this episode of Dark Mode we host Susie Alegre, an international human rights lawyer, keynote speaker and author of ‘Freedom to Think'. The most fascinating part of Susie's thought leadership is her expertise on the impact of technology and AI on the rights to freedom of thought and opinion, particularly given the rise of high-tech and the acceleration of a hyper-connected world.

The Rights Track
Human rights in a digital world: pause for thought

The Rights Track

Play Episode Listen Later Jul 22, 2022 25:32


In Episode 9 of Series 7, Todd is joined again by Ben Lucas, Director of 3DI at the University of Nottingham, funders of this series. Together they reflect on some of the key themes and ideas to emerge from Series 7 of The Rights Track about human rights in a digital world.   Transcript Todd Landman  0:01   Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we've been discussing human rights in a digital world. I'm Todd Landman. And in the last episode of this fantastic series, I'm delighted to be joined for the second time by Ben Lucas, Managing Director of 3DI at the University of Nottingham, a hub for world class data science research and funders for this series of our podcast. Ben helped kick off series seven at the end of last year talking about some of the challenges and opportunities created in a data driven society and the implications for our human rights. Today, he's here to help us reflect on some of the key themes that have emerged from this series. So welcome, Ben, it's great to have you on this final episode of The Rights Track. Ben Lucas  0:46   Great to be here. Thanks very much. Todd Landman  0:48   So last night, we were at a launch event for INFINITY, which is an inclusive financial technology hub being launched here at the University of Nottingham. We had a bucolic setting at the Trent Bridge cricket ground, which I'd say was quite historic. But some of the messages I heard coming out of that event last night really gave me hope for the promise of digital with respect, particularly, to helping people who are currently excluded from financial technologies or finance more generally, and the ever, you know, sort of problem of people getting credit ratings, getting access to finance. I wondered if you could just reflect on what was shared last night around the positive story that could be told around using technology to give people access to hard to find finance? 
Ben Lucas  1:29   Yeah, absolutely. So I think the central issue with financial inaccessibility is really the fact that people get trapped in this really bad cycle, and perhaps don't have savings, and then you lean more on credit options, for example. And then you become more and more dependent, if you like, on credit options. Equally, there are also folks who are excluded from accessing credit completely or at an affordable rate in the first instance, which obviously changes very much the quality of life, let's say, that they're able to enjoy, the things they're able to purchase, and so on. So really, the mission of projects like INFINITY, which is focusing very much on this idea of inclusive financial technology, is trying to boost accessibility to everything from tools that help people save to tools that help people spend, to breaking some of the negative cycles that cause people to end up in not so great financial situations. And yeah, it's really leveraging and learning from, you know, all the wonderful developments in, you know, things like analytics and new financial services products, especially those that are app based, that we use in the rest of the financial services world, but applying them for good, basically, so very much consistent with this data for good message that we've been speaking about in this series. Todd Landman  2:51   Right that's really interesting. So it's a data driven approach to understanding the gaps and inequalities in a modern society that does have the data infrastructure and technological infrastructure to give people access. But really the data driven approach lowers the barriers to entry for those folks. 
And I was quite struck by that: there was a colleague there from Experian, which is a credit rating agency, talking about the millions of people who either don't have online bank accounts, don't have access to the right kinds of technologies, or don't have the kind of credit rating that gives them access to the lower priced financial products out there, which in sort of ordinary terms means they're paying a much higher interest rate to borrow money than people that do have a credit rating. So one solution was to use data analytics and a data driven approach to understand their position and boost their credit rating in a way that would give them access to cheaper finance. Did I get that right? Ben Lucas  3:40   Yeah, that's exactly right. I mean, the central thing in financial services and lending is obviously managing their risk exposure with any individual consumer, but then also across, you know, their entire consumer portfolio. And I think, you know, one of the big opportunities in the inclusive FinTech space, slash probably what we're going to see going forward, is credit rating agencies and credit rating support products looking for other variables or indicators that, you know, can really paint a clearer picture of individual consumers, and perhaps even say, well, actually, there's not so much risk with this consumer because there are other factors that the usual, you know, bog standard algorithm doesn't pick up on, and maybe we don't have that risk exposure, maybe we can offer them, you know, financial products or lending products at a better rate. You know, that colleague spoke also about Experian's Boost product, for example, and I won't go into an advertisement for that, but yet a really interesting example of how, by sort of extending the available data and what we do with that, you know, it's possible to sort of calibrate and tailor solutions that are a win-win, that reduce the risk for the credit provider but give additional consumers more accessibility. 
And I think the other big piece, just to detail briefly, within data driven and financial research, you know, is some of the work that colleagues in the INFINITY team have been doing around, you know, helping to understand, at an aggregate level and in a privacy-preserving way, where perhaps people are making not so great financial decisions. So being able to, you know, hopefully in the future help flag, in a privacy-protecting way, to consumers when they're not making great decisions, which can be everything from wasteful over-the-top expenses to things like, you know, too much gambling or unhealthy eating, for example. So certainly a very, very exciting space. Todd Landman  5:33   No, it's really fascinating, and it resonates well with many of the themes we've heard in this series of The Rights Track. So I'm going to just think about putting these things into groupings or clutches of perspectives if I may. You made reference to this idea of data for good and of course, we had some guests on the podcast this series, including Sam Gilbert, who talked about the ability for digital transformation and data driven approaches to unearth previously unknown factors and public health benefits, and it could be social justice benefits and other benefits from leveraging data that don't normally talk to each other in a data analytic way. Wendy Betts told us about preserving the chain of evidence using visual imagery that is date stamped, time stamped and location stamped, and then preserving the metadata that sits behind an image for verification in the investigation of human rights abuse and human rights crimes. Amrit Dhir showed us in the United States how his organisation Recidiviz uses data from prisons to actually bring a greater sense of justice to prisoners, as well as parolees. 
And finally, Diane Coyle, the world famous economist, not only reflected on the many economic transformations that have happened with the digital disruption, but also made the case for universal access to online life and being on the grid almost as a basic human right, in the ways that access to information, access to health care, and access to services need to be provided. And certainly during COVID-19, we've learned that many people were excluded from those services precisely because they didn't have the right internet connection, or at least could not afford to have the right kind of internet connection. So I just wondered what your general reflections are on that general theme of data for good. And what can you tell us about what you think listening to the guests that we've had during this series? Ben Lucas  7:21   Yeah, I mean, I really liked the way that Sam sort of sets the scene in his book, Good Data: An Optimist's Guide to Our Digital Future. I think that nobody, of course, likes to have their privacy compromised, at an individual level. But the reality is, when we look at, you know, the things we can do when we have data at scale across, you know, large populations, there's a lot that can be achieved, whether that's in something like inclusive FinTech, whether that's in protecting human rights by combating modern slavery, whether that's to do with health data in a system like the NHS. Yeah, I don't think anybody likes to have their privacy compromised, obviously, at that individual level. But if there's a sort of way to communicate that greater good message, I'm not trying to encourage people to willingly give away their data for free, quite the opposite. But I think that's the sort of big debate for both commercial and academic data scientists, you know, that's really the arena in which we work. Because there are a lot of benefits to be had when we think about sort of data at scale. Equally, we need to protect, you know, individuals and communities. 
I think, you know, it's really great in this series to hear about, you know, things like eyeWitness and Recidiviz and some of these platforms that I think are managing that really well and really getting that good out of the data. Yeah, I think that's been really nice. There's a lot we can say also, on the subject of, I think, this is more of a frontier thing, but artificial intelligence in particular, which came up a few times, which I think is going to be the next, well already is actually the next, big frontier in terms of talking about, you know, transparency and fairness, especially because we're applying these tools to these large datasets. Todd Landman  9:04   Right. And I also came across a very interesting project and another group here at the University of Nottingham. It's within the Nottingham University Business School. And it's a neo-demographic lab or N/Lab, which works on, you know, big data science projects around harnessing unknown information from pre existing datasets. And there was a partnership with OLIO, which is an app that allows people to trade food that they're not going to need, so surplus food sits in people's houses, other people need food. So this app allows people to share food across the app, and to actually make best use of the circular economy, if you will, in sharing food. Now, quite apart from the pragmatics and the practicalities of sharing food between households, of course, the app collects data on who needs food and who has food, and that then allows the geo-mapping of food poverty within particular districts and jurisdictions within the United Kingdom. Can you say a bit more about that project and does this fit within the category of data for good?  Ben Lucas  10:03   Absolutely. I mean, that's an absolutely fantastic piece of work, you know. 
And obviously, the purpose of that platform and all that work is to look at both combating food inaccessibility and food poverty, on the one hand, and on the other, combating food waste. So really, yeah, absolutely a fantastic example, as far as data for good and also doing the right thing by people in society. I think it is also a great example of this idea that we can, you know, log data from sharing platforms, and really whatever platform, in an ethical way, you know, in the work that colleagues at N/Lab are doing, you know, so it's all privacy preserved data. It's possible to get a, you know, useful enough geotagged picture of how the sharing is taking place, such that it can be understood at a network level, but it's not giving away, you know, exact locations, it has no identifiers of who's linked to it. But even just with that sort of network exchange level data, you know, it really tells a very interesting story about how this system works. And, you know, as you said, I mean, this is very much in the peer to peer sharing economy space, which is a relatively new idea. So it's also, from an academic point of view, very important and very useful to be doing research to understand these entirely, relatively new kinds of systems. Todd Landman  11:26   So essentially, the heat map that that project produced was for, I believe, Haringey Council in Greater London, and I guess, you know, knowing what I know about data, this could be scaled up for all jurisdictions in the United Kingdom and beyond. The heat map tells you areas of food poverty, but could also inform government as to where to put resource and where, dare I say, levelling up funding could be targeted to help those most in need. Ben Lucas  11:53   Yeah, absolutely. I mean, as I understand it, that work has, you know, been incredibly useful for the platform and how it's looking to grow and continue to be successful. But yeah, absolutely. 
That's really another key thing here: the value these platforms have for policymakers, for government, indeed. Todd Landman  12:08   Great. So we've had the data for good story; I now turn our attention to the data for bad story, because we had some guests that were very suspicious, sceptical and critical of this burst and proliferation of digital transformation and the production of data second by second, day by day, week by week, year by year, and two of our guests actually had a different perspective on this. So Susie Alegre has this fantastic new book out with Atlantic Books, called Freedom to Think. And what she was really concerned about was not only the history of analogue ways in which people's freedom of thought had been compromised, but also the digital ways in which freedom of thought might be compromised by this digital revolution. And for her, the concern really is that there are unwitting or witting ways in which people's thought patterns might be manipulated through AI and machine learning. And we used popular examples of consumerism, consumer platforms, such as Amazon and other shopping platforms, where not only does one get bombarded by advertisements, but actually gets suggestions for new things to buy based on patterns of spend in the past. And there is cross referencing between platforms. And I think Sam Gilbert also addressed this thing about micro targeting and cross referencing. So if I search for something on one platform, it shows up on another one when I'm sort of, you know, least expecting it to do so. I bought some shoelaces the other day; they came to the house within a day. So I had that lovely customer experience. And yet, when I went on to a CNN website to look at the news headlines, the first ad that popped up was for shoelaces. So can you say a bit more about the unease that people have around these sharing platforms and the worry that our thoughts are being manipulated by this new technology? 
Ben Lucas  13:45   Yeah, I think this idea of freedom of thought or, you know, illusion of decision freedom is a really important one, when we're talking about the internet, and especially, you know, one can imagine, you know, as was evidenced with the Cambridge Analytica scandal back a few years ago, you know, this becomes especially dangerous when we're talking about political messaging. I think it's important that we, as users of the internet, approach the internet with a healthy degree of scepticism, being a bit, you know, cautiously analytical, and occasionally taking a step back and thinking about what the implications of our behaviour online, including simply consuming content, consuming information, really are. The reality is most, if not all, of the online platforms that we use, be that social media, ecommerce, or whatever, are designed to achieve immersion. They're designed to keep you spending more time, and if you're spending time in the wrong kind of echo chambers, or if you're getting exposed to messages from bad actors, you hear these stories of people going down all sorts of terrible rabbit holes and things, and this is how conspiracy theories and so forth proliferate online. Yeah, but certainly even just for the regular internet user, we all definitely need to be thinking about where is information coming from? Is it from reliable sources? Is the intent good? And do we indeed have that decision making freedom? That, I think, is the really important thing, or is someone trying to play with us? Todd Landman  15:13   Well, it's a really interesting answer. And it links very nicely to our episode with Tom Nichols, because he was saying that there's this tendency towards narcissism. And that's, you know, certainly during COVID, people had more time inside, they had more time to dedicate to being online. But at the same time, the rabbit holes that you're worrying about were really raised to high relief. 
And so that retreat into narcissism, the idea that if you're going to post something, you're only going to post something negative, critical and maybe sowing division by posting those critical comments. But you also in your answer talked about the power of particular individuals. And I guess I have to address the question of Twitter in two ways. So Tom made this observation of Twitter as this sort of, you know, you now have 240 characters to, you know, vent your spleen online and criticise others, but also that's a powerful platform to mobilise people. And I say this in two ways. The first is that the revelations from the January 6 committee investigating the events that led up to the insurrection against the US Capitol were putting a lot of weight this week on just the number of followers that former President Trump had, and a single tweet in December where he said, you know, come to the Capitol on January 6, it will be wild. And then there were an array of witnesses paraded in front of the committee, from far right groups, from the Oathkeepers, and other groups of that nature, who were saying, but actually, we saw this as a call to arms. So there was a nascent organising taking place, but there's almost this call to arms issued by a single tweet to millions of followers that really was, you know, the spark that lit the fire, and I wonder if you might just reflect on that. Ben Lucas  16:50   Yeah, I think for anyone currently also trying to keep up with slash decipher the story in the news about Elon Musk, putting in an offer to buy Twitter, which has now fallen through, I would use that lens to sort of explore this, because one of the goals that I think he was seeking to achieve in taking over Twitter was really opening up its potential for free speech further. But yeah, for anybody sort of observing, that's a really tricky one. Because sometimes when the speech is, well, I mean, that there should be free speech. 
But people should be saying, you know, hopefully nice things within that freedom, and not denying the rights of others and not weaponizing free speech to stir up trouble. I think it's really, you know, we touched on this in the first episode of the series as well, the really big question with social media is, who's the editor in chief? Is it everybody? Or is it nobody, and which is the better format?  Todd Landman  17:42   Yeah, and we talked about that unmediated expression and unmediated speech, and that Martin Scheinin, as well as Tom Nichols, talked about how traditional media organisations have had that mediating function, and the editorial function, which is lost when you have an open platform in the way that Twitter has, even though they did in the end, deplatform the former President. But I want to get back to that. I mean, you know, the task of the January 6 committee is not only to say we think there's a causal link between this tweet and people doing things, but they will also need to demonstrate the intentionality of the tweet in and of itself. And I think that's a major concern, because there's certainly ambiguity in the language: saying, you know, come to the Capitol, it's going to be wild doesn't necessarily convert into a mass uprising with weapons and an insurrection. So there's a tall order of, I would say, legal proof, beyond reasonable doubt, that needs to be established, were one to go down that legal route. But if we look at Elon Musk, I mean, here's one person who's exceptionally wealthy in the world who can buy an entire platform. And the concern that many people have is can one individual have that much power to acquire something that powerful, and we don't know if the deal's fallen through, because there are some legal wranglings going on at the moment about whether he could actually withdraw at this late stage in the purchase process. 
But be that as it may, I wonder if you might just reflect on this ability for a very wealthy single individual to take control of a platform as powerful as Twitter. Ben Lucas  19:10   So I think it's a really complicated one, it's really one of the most complicated questions within the social media space, you know, because these platforms are ultimately businesses. There's a founder, there's a CEO, there's a board, there's that leadership, and hopefully accountability and responsibility. It is really a tough one, you know, one wonders about a future where, you know, in the same way, you've got the Open AI Foundation, for example, or you've got, you know, other truly sort of open peer to peer kind of platforms. If we think about how the internet is, or technology is, trying to decentralise things like finance in the future, one wonders if there's sort of an alternative model that could solve some of these problems. I think the narrative, so to speak, specifically about Elon Musk that he's been putting forward was really just to open up Twitter even further, taking that sort of laissez faire kind of approach and just, you know, letting free speech just sort itself out. And again, free speech is and can be a good thing. But sadly, when people engineer these kinds of messages to avoid legal accountability, but are implying, you know, some sort of stirring up of trouble, when people engage in narcissistic sort of messaging, when people engage in putting forward, you know, campaigns, you know, engineering very, very strong emotions, like fear and anger, obviously, that can get out of control very, very quickly. The reality is, I'm not qualified to come up with the solution. And I, sadly, I don't know who is. Yeah. Todd Landman  20:36   Well, that's interesting, because we have some guests that were suggesting a solution. 
And if I listened to you speak about the Elon Musk agenda to open up in a laissez faire way, it's almost the invisible hand of the information market, you know, if we go back to economics. And one tenet of economics at least has been that the invisible hand sort of guides markets, and the pricing and equilibrium that comes from supply and demand produces a regulatory outcome that is beneficial for the most people most of the time. It's a somewhat naive view, because there's always winners and losers in economic transactions. So counter to this idea of the invisible hand of the information market, we had quite an interesting set of thoughts from Martin Scheinin, and from Susie Alegre, on the need for regulation. And that really does take us back to the beginning of this series of The Rights Track, where you made the observation that tech is advancing more quickly than the regulatory frameworks are being promulgated, that there's this lag, if you will, between the regulatory environment and the technological environment. So I wonder, just for your final reflections, that really what both Martin Scheinin and Susie Alegre are saying is that if tech is neutral, we need to go back to ethics, morality, law and a human rights framework to give us the acceptable and reasonable boundary conditions with which all this activity needs to be thought about.  Ben Lucas  21:56   Yeah, exactly. I mean, it really does come down to, you know, well constructed regulation, which is obviously complicated, especially when, you know, most major social media platforms have a global footprint. So it's then how to ensure consistency across the markets they operate in. I think a lot of the regulatory frameworks are kind of there for the offline world. 
And the main thing, yeah, that we were sort of getting at in the first episode of this series is really that because technology moves so fast, because these platforms grew so quickly, you know, there are laws to stop people: no one can just go into the town square and start, you know, hurling obscenities, you know, in public, but for some reason, you know, it happens millions and millions of times a day on social media platforms. So I think, yeah, regulation really is key here. But the other thing is, I would say the people that misuse the definition and excuse of free speech should actually really look up the definition of free speech again. Todd Landman  22:57   Well, it's this idea of doing no harm. You know, I think I mentioned this notion of a Hippocratic Oath, if you will, for the digital world: that you can engage but do no harm. And what people conceive and perceive as harm, of course, is open to interpretation. But that's a general kind of impulse behind this. And you know, this distinction between the offline world and the online world is also really, really important. So Tom Nichols invites us to maybe get off the grid occasionally, go back into our community, say hi to our neighbours, volunteer for things and experience humanity face to face in the offline world a bit more than we're experiencing in the online world. And of course, the appeal to morality, ethics, law and the human rights framework is going back to, you know, basic philosophy, basic conceptions of rights, basic conceptions of law, to make sure that, you know, our offline world thoughts can be applied to our online world behaviours. So, you know, these are super deep insights. 
And as the world progresses, as technology progresses, as the interconnections between human beings progress in ways that we've seen over the last several decades, through the medium of digital transformation, and this ever expanding digital world, it does make us pause at this moment to actually reflect on human dignity, human value, integrity, and accountability and responsibility for the kinds of things that we do both within the offline world and the online world. And you've given us much to think about here, Ben, certainly across the many episodes of this series. You kicked us off with this great, you know, offline - online regulation versus tech dichotomy that we all face. We've heard from so many people evangelising the virtues of the digital world but also raising significant concerns about the harm that can come from that digital world if we allow it to run unchecked. So for now, it's just my job to thank you, Ben, for coming back on this final episode, giving us a good wrap up set of reflections on what you've heard across the series. And thank you ever so much for joining us today on this episode of The Rights Track. Ben Lucas  25:02   Thanks so much. Christine Garrington  25:04   Thanks for listening to this episode of The Rights Track podcast, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find a full transcript of this episode on the website at www.rightstrack.org together with useful links to content mentioned in the discussion. Don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.  

Freedom Matters
Meaningful Motivation – Sharath Jeevan

Freedom Matters

Play Episode Listen Later Jul 7, 2022 28:10


In this episode, we welcome Sharath Jeevan OBE, one of the world's leading experts on intrinsic motivation, direction and potential. He is the Executive Chairman of Intrinsic Labs and author of the groundbreaking smart-thinking book Intrinsic. In this episode we discuss why we are experiencing a crisis of motivation; where external motivators like money and status fall down; the value of the intrinsic motivators of autonomy, mastery and purpose; and how we all have the opportunity to reset the direction of our lives for the better. This episode is part of our mini-series on 'Self' where we explore how our technology impacts some of the most important aspects of being human. In this series we speak with Krista Tippett, creator of On Being; Susie Alegre, human rights lawyer and author of Freedom to Think; L. M. Sacasas, renowned commentator on technology & society; Casey Schwartz, author of Attention, A Love Story; Jillian Horton, MD, author of We Are All Perfectly Fine; and Sharath Jeevan OBE, motivation expert and author of Intrinsic. To find out more about Sharath: https://www.intrinsic-labs.com/ To read Intrinsic: https://www.amazon.com/Intrinsic-re-ignite-inner-drive-rewards-based-ebook/dp/B08B4HP1F6 Host and Producer: Georgie Powell https://www.sentientdigitalconsulting.com/ Music and audio production: Toccare https://spoti.fi/3bN4eqO

RSA Events
Can we be free in the age of the internet?

RSA Events

Play Episode Listen Later Jun 23, 2022 51:56


We unthinkingly grant internet companies access to our homes, relationships, and most private thoughts. This information is used to mould our realities, influencing our everyday choices and actions – from who we date to how we vote. How can we safeguard our freedom of thought in an age when our minds are for sale? As human rights lawyer Susie Alegre explores, this is a new frontier in an age-old struggle: the powerful have always sought to influence how we think, and these latest tools for doing so threaten our mental freedom like never before. She examines the long history of the effort to liberate our minds, and lays out how we can recast our human rights for the digital age to protect our most fragile and fundamental freedoms. #RSAFreedom Become an RSA Events sponsor: https://utm.guru/ueemb Donate to The RSA: https://utm.guru/udNNB Follow RSA Events on Instagram: https://instagram.com/rsa_events/ Follow the RSA on Twitter: https://twitter.com/RSAEvents Like RSA Events on Facebook: https://www.facebook.com/rsaeventsoff... 

Freedom Matters
Freedom to Breathe – Jillian Horton

Freedom Matters

Play Episode Listen Later Jun 9, 2022 30:51


This week, we are in conversation with Jillian Horton. Jillian is an award-winning medical educator, writer, musician and podcaster. She completed a residency and a fellowship in internal medicine at the University of Toronto and has held the posts of Associate Dean and Associate Chair of that department. For sixteen years, she cared for thousands of patients in an inner-city hospital. During that time, she had three sons and mentored hundreds of students. She now leads the development of new programs related to physician wellness and won the 2020 AFMC–Gold Humanism Award. In this episode we reflect on themes from her book, We Are All Perfectly Fine, and discuss: how language can define how we feel; the spectrum from coping to thriving; and the science behind mindfulness and why it is so powerful. This episode is part of our mini-series on 'Self', where we explore how our technology impacts some of the most important aspects of being human. Episodes so far in the series include Krista Tippett, creator of On Being; Susie Alegre, human rights lawyer and author of Freedom to Think; and L. M. Sacasas, renowned commentator on technology and society. Upcoming episodes are with Casey Swartz, author of Attention, A Love Story, and Sharath Jeevan OBE, motivation expert and author of Intrinsic. Our goal: to help all our listeners to think more critically about the role of technology in our lives, and how it shapes who we are. To read We Are All Perfectly Fine: https://www.harpercollins.ca/author/cr-195536/jillian-horton/ Host and Producer: Georgie Powell https://www.sentientdigitalconsulting.com/ Music and audio production: Toccare https://spoti.fi/3bN4eqO

Freedom Matters
Reframing our Reality – Michael Sacasas

Freedom Matters

Play Episode Listen Later May 26, 2022 33:23


This week we welcome Michael Sacasas, author of The Convivial Society, a newsletter about technology and society. Michael is the associate director of the Christian Study Center of Gainesville and has written for The New Atlantis, The New Inquiry, Comment Magazine, and Real Life Magazine. He is also the author of a forthcoming book, 41 Questions: Technology and the Moral Life (Avid Reader Press). In this thought-provoking episode we invite all our listeners to reconsider the place of technology in their own lives. We discuss: the role of technology in our lives and how the way that we use tools shapes our experience of the world, and by extension our existence; attention, its importance and its various forms; why Twitter for Michael is a deal with the devil; how, even outside of the Metaverse, technology reframes our reality; and the three most important questions everyone should ask of the way that they use technology. If you enjoyed our previous conversations with Oliver Burkeman, Nicholas Carr and Krista Tippett, then this episode is for you. This episode is part of our series on 'Self', where we explore how our technology impacts some of the most important aspects of being human. Recent episodes include Krista Tippett, creator of On Being, and Susie Alegre, human rights lawyer and author of 'Freedom to Think'; upcoming episodes include Jillian Horton, MD, author of 'We Are All Perfectly Fine'; Casey Swartz, author of 'Attention, A Love Story'; and Sharath Jeevan OBE, motivation expert and author of 'Intrinsic'. Our goal: to help all our listeners to think more critically about the role of technology in our lives, and how it shapes who we are. More from Michael: The Convivial Society https://theconvivialsociety.substack.com/ Host and Producer: Georgie Powell https://www.sentientdigitalconsulting.com/ Music and audio production: Toccare https://spoti.fi/3bN4eqO

ODPA Data Protection Teabreak
The Bijou Lecture with Susie Alegre

ODPA Data Protection Teabreak

Play Episode Listen Later May 24, 2022 33:24


This is the audio transcription of The Bijou Lecture (2022) with Susie Alegre. Read more at: www.odpa.gg/project-bijou/the-bijou-lecture-2022/

Response-ability.Tech
Why Human Rights Law is AI Ethics With Teeth. With Susie Alegre.

Response-ability.Tech

Play Episode Listen Later May 23, 2022 30:33 Transcription Available


Our guest today is Susie Alegre. Susie is an international human rights lawyer and author. We're in conversation about her book, Freedom To Think: The Long Struggle to Liberate Our Minds (Atlantic Books, 2022). Susie talks about freedom of thought in the context of our digital age, human rights, surveillance capitalism, emotional AI, and AI ethics. Susie explains why she wrote the book and why she thinks our freedom of thought is important in terms of our human rights in the digital age. We explore what freedom of thought is ("some people talk about it as mental privacy") and the difference between an absolute right and a qualified right, and why absolute rights are protected differently. Susie shares some historical examples, including witch trials as well as the work of Ewen Cameron, a Scottish psychiatrist in Canada, who experimented on ordinary people without their consent to explore ways to control the human mind. Facial recognition technology is a modern attempt to get inside our heads and predict such things as our sexual orientation. Susie explains why researchers shouldn't be experimenting with facial recognition or emotional AI: you're “effectively opening Pandora's box”. Susie explains the difference between surveillance advertising, which uses data captured about our inner lives that is sold and auctioned on an open market in order to manipulate us as individuals, and targeted advertising. Over the past few years there's been a great deal of focus on ethics, and Susie suggests we need to move away from the discussion of ethics “back to the law, specifically human rights law”. She explains that human rights law is being constantly eroded, and says “one way of reducing the currency of human rights law is refocusing on ethics”. 
Ethics are simply a “good marketing tool” used by companies. The inferences being made about us, the data profiling, and the manipulation mean it's practically impossible to avoid leaving traces of ourselves; it's beyond our personal control, and privacy settings don't help. In her book Susie suggests that by looking at digital rights (data and privacy protection) in terms of freedom of thought, "the solutions become simpler and more radical". It's a point that Mary Fitzgerald, in her review of Susie's book in the Financial Times, suggested was a "unique contribution" to the debates about freedoms in the digital age, and that "reframing data privacy as our right to inner freedom of thought" might capture "the popular imagination" in a way that other initiatives like GDPR have failed to do. Susie explains for us how this approach would work. Follow Susie on Twitter @susie_alegre, and check out her website susiealegre.com. Read the full transcript. Read the conversation as a web article. Watch the interview on our YouTube channel.

Freedom Matters
Freedom to Think – Susie Alegre

Freedom Matters

Play Episode Listen Later May 12, 2022 32:30


This week, we welcome Susie Alegre, a leading human rights barrister at the internationally renowned Doughty Street Chambers. She has been a legal pioneer in digital human rights, in particular the impact of artificial intelligence on the human rights of freedom of thought and opinion. Her book, Freedom to Think, charts the history and importance of our most basic human right: freedom of thought. From Galileo to Nudge Theory to Alexa, Susie explores how the powerful have always sought to get inside our heads, influence how we think and shape what we buy. Providing a bold new framework to understand how our agency is being gradually undermined, Freedom to Think is a groundbreaking and vital charter for taking back our humanity and safeguarding our reason in the technological age. In this fascinating episode we discuss: how human rights underpin what it means to be human, and why the right to freedom of thought should be protected at all costs; how historically this right has come under threat, but never more so than today, when the threat of surveillance capitalism means our minds are read every single minute; how to stay cognisant of how technology is affecting our freedom of thought; and future strategies to keep us safe. This episode is part of our mini-series on 'Self', where we explore how our technology impacts some of the most important aspects of being human. Over the coming weeks we will speak with Krista Tippett, creator of On Being; Susie Alegre, human rights lawyer and author of Freedom to Think; Jillian Horton, MD, physician and author of We Are All Perfectly Fine; Casey Swartz, author of Attention, A Love Story; L. M. Sacasas, renowned commentator on technology and society; and Sharath Jeevan OBE, motivation expert and author of Intrinsic. Our goal: to help all our listeners to think more critically about the role of technology in our lives, and how it shapes who we are. 
To find out more about Susie: https://susiealegre.com/ Purchase Freedom to Think in the US Purchase Freedom to Think in the UK Susie talks about Magic Sauce: https://applymagicsauce.com/demo And also Privacy International's Twitter Bot: https://twitter.com/privacyint/status/1148506707150200833 Host and Producer: Georgie Powell https://www.sentientdigitalconsulting.com/ Music and audio production: Toccare https://spoti.fi/3bN4eqO

The Rights Track
Liberating our minds in a digital world: how do we do it?

The Rights Track

Play Episode Listen Later May 11, 2022 31:47


In episode 6 of Series 7 of The Rights Track, we're joined by Susie Alegre, an international human rights lawyer and associate at Doughty Street Chambers specialising in digital rights. Susie's work focuses in particular on the impact of technology and AI on the rights to freedom of thought and opinion. Her recently published book - Freedom to Think: The Long Struggle to Liberate Our Minds – explores how the powerful have always sought to influence how we think and what we buy. And today we are asking her how do we liberate our minds in a modern digital world?    Transcript Todd Landman  0:01  Welcome to the Rights Track podcast which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in a digital world. I'm Todd Landman, and in the sixth episode of the series, I'm delighted to be joined by Susie Alegre. Susie is an international human rights lawyer and associate at Doughty Street Chambers specialising in digital rights, in particular the impact of technology and artificial intelligence on the rights to freedom of thought and opinion. Her recently published book - Freedom to Think: The Long Struggle to Liberate Our Minds - explores how the powerful have always sought to influence how we think and what we buy. And today we're asking her, how do we liberate our minds in a modern digital world? So Susie, it's great to have you on this episode of the Rights Track. Welcome. Susie Alegre  0:47  Thank you so much for having me. I'm very excited to be here. Todd Landman  0:49  So I love the book - Freedom to Think - I've read it cover to cover. In fact, I read it probably in two days, because it's such a compelling read. And I guess my first question for you is, why is the freedom to think broadly understood – belief, expression, speech, religion, thought – why is all of that so critical to us as human beings? 
Susie Alegre  1:10  I think the way that I've looked at it in the book is really dividing those elements up a little bit. So what I focused on in the book is freedom of thought and opinion and what goes on inside our heads, as opposed to the more traditional discussions that we have around freedom of speech. And one of the reasons for that is that while freedom of speech has consequences and responsibilities, and freedom of speech can be limited, that freedom in our inner worlds to think whatever we like to practice our thoughts and opinions and decide whether or not there's something we should share, is what allows us to really develop and be human. And the right to freedom of thought and opinion, along with belief and conscience, insofar as we practice that inside our heads is something that's protected absolutely in international human rights law, which I think reflects its importance. And when you consider other absolute rights and human rights law, like the prohibition on torture, or the prohibition on slavery, the right to freedom of thought inside your head alongside those other rights, really gets to the heart of human dignity, and what it means for us to be humans. Todd Landman  2:24  Yes and so in protecting those rights, we are giving people agency because I was caught really captured by one thing you just said there about, we choose what we want to share. So a lot of us can have a million thoughts a second, but we don't share all of them. Although in the current era, it seems that people are sharing pretty much everything that they're thinking. But we'll get to that in a minute. I'm just curious about this idea of agency that, you know, you choose what to share, you also choose what not to share. And that element of choice is fundamental to being human. Susie Alegre  2:53  Absolutely. 
And what the right to freedom of thought, well certainly a key element of the right to freedom of thought and freedom of opinion, is what's called freedom in the forum internum, that's inside, you know, in our inner lives. It's not what we then choose to do or say in the outer world. And having that inner space is really important for us to be able to develop who we are. You know, I'm sure all of us have had thoughts that we wouldn't particularly like to be recorded. And I don't know if you've seen the recent drama Upload, which. Todd Landman  3:28  I have not. Susie Alegre  3:29  Well it's worth a look, because I was watching one of the episodes where it was about people being unable effectively to shut off their thoughts, or their thoughts were being live streamed if you like. And I mean, you can only imagine the horror of that, you know, that was a comedy. A similar story played out in a short story by Philip K. Dick, The Hood Maker, which was a situation where you had people who were able to read other people's thoughts, and the only way that you could protect yourself from this mind reading was to wear a hood. And so protecting your thoughts from mind reading was really seen as an act of rebellion and effectively made unlawful, and that I think shows just how important this space is. It is, if you like, the absolute core of privacy. So privacy becomes like a gateway right to that central core of who we are, and how we decide who we're going to be. Todd Landman  4:27  I like this idea of a gateway right - that's really cool. Now, in the book, you have this really the first part is quite a deep dive into history. 
I mean, you go right back to Socrates, you worked your way through Galileo, you work your way through people that challenge the status quo, through freedom of thought, whether it was scientific practice, or religious belief or any kind of thought, but what are some of the high points of this history and shall we say the analogue attempts to control people's thoughts? Susie Alegre  4:53  Yeah, as you say, I looked right back and and Socrates is if you like, a classic example of a martyr for freedom of thought. One of the interesting things as well about Socrates is that we don't have anything written down by Socrates, because Socrates was himself very suspicious of the written word and what that did for humans ability to debate. But what he did do was absolutely question the status quo. And he delighted in creating arguments that would undermine Greek democracy at the time. But one of the reasons why we all know the name of Socrates and remember, Socrates, is because Socrates was effectively judged by his peers, and forced to take his own life by Hemlock because of his scurrilous ideas, and his attempts to twist the minds of young Athenians and to question the gods. So while Socrates might be sort of seen as an example of a champion of freedom of thought and freedom of speech, it was very clear that at that time in history, you didn't really have freedom of speech, because it ultimately landed up with a death sentence. Some of the other areas I looked at were people like Galileo and questioning whether the sun and the universe travelled around the Earth or the other way around, and that really landed him in house arrest. So really, again, questioning the status quo of the church, and certainly religions through the centuries have been one of the prime movers in curtailing freedom of thought and freedom of religion, if you'd like. Todd Landman  6:32  Yeah, in my world, the Galileo story is a kind of clash between observational data and belief. 
Susie Alegre  6:38  Yeah, absolutely, absolutely. But again, it sounds like one of those arguments of you know, well, you can have your own opinion and every opinion is sort of questions, but in another century, and in that century, you'll end up under house arrest, when you challenge the beliefs of the status quo and of the powers that be. Todd Landman  6:56  Yes, we see that being played out today, in the scepticism around science, whether one takes an extreme view about for example, being a flat earther. Or if there's doubt about scientific discovery, scientific development, the way in which countries respond to the COVID crisis, the hesitancy around vaccines, masks mandates, that kind of general scepticism around science, is also one where sure, there's freedom of thought, belief and opinion. But then there's also tested peer reviewed scientific evidence for the best thing we think we can possibly do under times of great uncertainty. Susie Alegre  7:31  Absolutely. And that area is a prime area where you see the difference between freedom of thought and opinion and freedom of speech and expression. So where you have sort of COVID conspiracy theories, if you like spreading through social media or spreading really proven false information that can harm people. You know, there is then a legitimate reason to restrict that expression and the spread of that expression, to protect public health. Doesn't mean that people can't still think those things. But there really have to be limitations on how those expressions are spread, when they are absolutely damaging to public health or to other people's rights. Todd Landman  8:18  Yes, exactly. And I don't think you covered this in the book. But I just want to push you a little bit. You mentioned about Socrates written word not being written down. But with the invention of the printing press historically, how had that changed freedom, expression, thought, belief? 
What's the role of that technological advance in your understanding of the history of this idea? Susie Alegre  8:39  Well, the printing press just really accelerated the way that information could be shared, it effectively accelerated the impact of expression, if you'd like. And interestingly, actually, I was asked recently, to compare regulation of the printing press and of printing around that time and how long it took to get serious regulation as compared to trying to regulate the internet today. And I said, rather flippantly, well, people were arrested, and books were burned. That was how regulation worked initially in response to the massive impact of the printing press. And while I was being flippant when I thought about it afterwards, well actually, that is how they tried to regulate the printing press. And one of the reasons I looked back at the past of freedom of thought in the ways that we didn't really have freedom of thought historically. To me, that was important because it showed what a sea change, having human rights law has been for us as human beings. So you know, people may complain about cancel culture, but certainly in the UK cancel culture very rarely involves actually being put in prison. Certainly it doesn't involve being told to drink hemlock or certainly not being obliged to drink hemlock. Human rights have really put the brakes on the ability of the powers that be to control us. But they've also put an obligation to protect us from each other. Todd Landman  10:13  And there's a certain duality then because if I think about what you just said, the powers that be, let's translate that into the rise of the modern state, as it were. And you draw on reading some, you know, quite regularly through the book you draw on Orwell's 1984. You draw on Arendt's Origins of Totalitarianism you draw on Huxley's Brave New World. So why did you draw on those sources? 
It seems to be you're alluding to the power of the state, the power of control, all those sorts of aspects. And yet, in order for human rights to work, we still need the power of the state. So there's two sides of the coin problem that we face in this quest to regulation. Susie Alegre  10:52  Absolutely. And drawing on those sources, in particular, in particular, Orwell and Huxley. I mean, perhaps because I'm a bit of a masochist, I spent the start of lockdown reading 1984. And just marvelling at how prescient it was, and how accurately it portrayed the developments of technology in our life. The Speak Write machine, the way that Winston Smith is employed to rewrite history, if you like, sort of creating in real time, disinformation in 1984, was somehow a real surprise to me having not read it since 1984, was just how accurately prescient it was. And similarly, reading Brave New World and the consumerism and the use of distraction as a means of social control, rather than the oppressive jackboot that you see in 1984. And seeing the ways that potentially commercial enterprises and a light touch can be used to have an equally corrosive and problematic effects on our societies. So the reflections of the images of Huxley and Orwell in particular was so stark that I felt that I had to use them because it seemed that rather than taking those as a warning from the 20th century, we've taken them as a template for the development of technology and consumerism in our lives. Todd Landman  12:23  So I suppose that really allows me now to segue nicely into your concerns over the digital world and how this digital world relates to human rights. And I guess my entry point is this famous line you have in the book where you say, you know, I told my daughter, she can't have Alexa. And she asked me why. And I said, you can't have an Alexa because it steals your dreams, and sells them to other people. Talk me through that. 
Talk me through your fears and worries around Alexa and what that means for the broader digital problem that we face. Susie Alegre  12:52  Yeah, Alexa is certainly a case in point. And as I'm sure anyone else with children has had the experience, your child comes home and their friends have got whatever technology it is, in this case, Alexa, and I know several people, several families where the kids do have Alexa in their bedroom. So you will always get these arguments of well, so-and-so has it so it must be great. For me the idea of Alexa, the idea of actively choosing to bring a listening device into your home, that is constantly listening to what is going on in your home and sharing that with you have no idea who, using that information in ways that you have no real idea how that's going to land up, is something so astonishing. You know, having spent years working on human rights and counterterrorism, and also most recently, working in oversight on interception of communications, and how sort of allergic people are, if you like, and quite rightly, to state intrusions, to the idea that the state might be bugging your home, to then actually pay money and to let a private actor come in and listen to everything that's going on in your home for profit, just to me seems really astonishing. And yet somehow, it's become so normalised that as I said, I know lots of people who do have Alexa and are delighted to have Alexa. Plenty of people in the lockdowns suddenly sending around videos from their Ring cameras outside their doors, but this idea of constant control, constant monitoring of our lives for someone else's profit, to me seems like something that is a really fundamental shift and something that we should all be really concerned about. Todd Landman  14:51  Now in addition to the Alexa example you're also very concerned about, shall we say, the unregulated, or the unleashing of, and I will use the generic term, algorithms in the digital world? 
So why are these algorithms problematic? From your perspective? What do they do? How do they affect people? Or is it a way that they're affecting people? And people don't even know? And is it that ignorance of the effect that concerns you? Or is it just the development of algorithms in the first place that concerns you? Susie Alegre  15:20  Now, I mean, algorithms are digital tools, if you like. So it's not the algorithm itself. There are two things really well, there are many. But let's start with two. One is the ability to understand why an algorithm is operating in the way it's operating. So an algorithm is effectively told to take information and translate that information into a conclusion or into an action, but understanding exactly what information is taken, how that information is being weighted, and then how a decision if you like, as being taken and what impact that decision will have, is often not very clear. And so where an algorithm based on huge amounts of data, for example, is being used to decide whether or not you might be fraudulently requesting benefits, for example, in the benefits system, that raises really serious concerns, because the outcome of not getting benefits or the outcome of being flagged as a fraud risk, has a really, really seriously detrimental impact on an individual life. Todd Landman  16:29  Yes. And you also give examples of credit rating. So if typically, somebody wants to get a mortgage in the UK, the mortgage company will say, well, we're gonna run a credit check on you. And they might go to one of the big data providers, that gives you a score. And that score is a function of how many credit cards you have any loans, you might have had any late payments you might have had on a loan or a mortgage in the past. And in the absence of a particular number. 
The company may reserve the right to say, you can't have a mortgage and I think you give the personal examples of your own struggles setting up a bank account after having lived abroad. Susie Alegre  17:03  Yeah. Todd Landman  17:04  Talk us through some of that. Susie Alegre  17:05  Yeah, absolutely. So as you say, I talk a bit in the book about returning from Uganda, where ironically, I've been working as a diplomat for European Union on anti-corruption. And I came back to the UK to work as an ombudsman in the Financial Ombudsman Service. But when I applied for a bank account, I was suddenly told that I couldn't have the bank account. Because the computer said no, effectively. The computer had clearly decided that because I was coming from Uganda or whatever other information had been weighed up against me, I was too much of a risk to take. The fact that I had been fully vetted as an ombudsman, and that the money that would be going through that bank account was going to be salary from the Financial Ombudsman Service was not enough to outweigh whatever it is the algorithm had decided against me. Eventually, I was able to open an account a few months later. But one of the interesting things then working as an ombudsman was that I did come across cases where people had had their credit score downgraded because the computer said so and where the business was unable to explain why that had happened. I mean, from an ombudsman perspective, I was in a position to decide what's fair and reasonable in all circumstances of a case. In my view, it's very difficult to say that a decision is fair and reasonable if you don't know how that decision has been reached. But those kinds of decisions are being made about all of us all the time, every day in different contexts. And it's deeply concerning that we're not often able to know exactly why a decision has been taken. And in many cases, we may find it quite difficult to even challenge those decisions or know who to complain to. 
 Todd Landman  17:14  Yeah, and this gets back to core legal principles of fairness, of justice, of transparency of process and accountability of decision making. And yet all of that is being compromised by, let's say, an algorithm, or as you say, in the book, the computer says no. Susie Alegre  18:49 Completely, and I think one of the key things to bear in mind is that even the drafters of the right to freedom of thought and opinion in the International Covenant on Civil and Political Rights discussed the fact that inferences about what you're thinking or what your opinions are about can be a violation of the right even if they're incorrect. So when you find the algorithm making inferences about how risky a person you are, whether or not the algorithm is right, it may still be violating your right to keep your thoughts and opinions to yourself. You know, you should only be judged on what you do and what you say, not on what somebody infers about what's going on in your inner life. Todd Landman  19:50  Not on what you might be thinking. Susie Alegre  19:52  Exactly. Absolutely. Absolutely. Todd Landman  19:54  Right now, we've had a couple of guests on previous episodes that I would put broadly speaking in the camp of the 'data for good' camp. And when I read your book, I feel like I'm gonna broadly put you in the camp of 'data for bad'. And that might be an unfair judgement. But is there data for good here? I mean, because, you know, you cite the sort of surveillance capitalism literature, you have some, you know, endorsements from authors in that tradition. But if I were to push you, is there a data for good story that could be told nevertheless? Susie Alegre  20:23  I think there might be in public data. So for example, in the US, and I don't know if they are included in your guests, but there's Data for Black Lives. And they've done really interesting work from public data, you know, flagging where there are issues of racial and systemic injustice. 
So that kind of work, I think, is very important. And there is a distinction between public data and private data, although how you draw that distinction is a really complicated question. But in terms of our personal data, one of the things that I think is important in looking at how to address these issues is about setting the lines for the things that you can never do. And what I hope is that if you set down some barriers, some very, very clear lines of what can never ever be done with data, then you will find technology, particularly technology related to data, and that includes the use of AI interpreting and working with data, will develop in a different direction, because at the moment the money is in extracting as much personal information as you can out of every single one of us and selling it. Todd Landman  21:40  And the degree of the extraction of that information is both witting and unwitting. So you also make the point in the book, if somebody signs up for a Facebook account, they just hit agree to the terms and conditions. But actually the time it takes to read the terms and conditions could be two or three days to get through to the fine print. And so people are just saying yes, because they want this particular account, while not actually knowing the degree to which they are sharing their personal information. Is that correct? Susie Alegre  22:06  Absolutely. And the other problem with the terms and conditions is that if you don't like them, what exactly are you going to do about it? Particularly if you're looking at terms and conditions to be able to access banking or access the National Health Service. If you don't like the terms and conditions, how exactly are you going to push back. But that point that you've made as well about the consent button, there's also an issue around what are called dark patterns. So the way that technology is designed, and that our online experience is designed to nudge us in certain directions. 
So if you're asked to agree to the terms and conditions, the easiest thing is to hit the big green button that says I consent. Again, we see it with cookies. You know, often you've got a simple option where you hit I consent, or there's a complicated option where you can manage your cookie settings and go through a couple of different layers in order to decide how much you want to be tracked online. And so that is clearly pushing you, in a time-poor life, to hit the easiest option and just consent.

Todd Landman  23:16
You know, I read through Flipboard, which is a way of aggregating news sources from around the world by topic. And I sort of follow politics and law and international events, music and various other things. But every news story I open up, because of GDPR, I get a pop-up screen that says accept cookies or manage cookies. And I always say accept because I want to read the story. But what I'm actually doing is telling the world I've read this story, is that right?

Susie Alegre  23:43
Yeah, absolutely. The cookies question as well is one where, actually, why should we be being tracked in all of our activities? All of our interests? And as you say, you know, telling the world that you've read this article is partly telling the world what you're interested in and what you're thinking about, not just that you've read this article in an abstract sense. One of the things that is also disturbing, that people often don't realise, is that it's not just what you read. It's even things that you may hover over and not click on that are equally being tracked. And it's not just on the page where you're reading the article. It's about being tracked all around your online activity, being tracked with your phone, being tracked where you are, not just what you're looking at on the phone.
It's so granular, the information that's being taken, that I think very few of us realise it. And even if you do realise it, as individuals we can't really stop it.

Todd Landman  24:52
And I think for that reason I take a little bit of comfort, because I wasn't targeted by Cambridge Analytica. I probably played some of the games on Facebook, you know, the personality test stuff, but I never got ads, as far as I was concerned, that were being foisted upon me by the Cambridge Analytica approach. I use that as, let's say, a metaphor. But I know that there was micro-targeting based on certain profiles, because there was an attempt to leverage voters who had never voted before, or voters who were predisposed to vote in particular ways. But again, it's that unwitting sort of profile that you build by the things that you hover over, or the things that you like, or the things that you at least read and accept that button on cookies. And of course, we now know that that micro-targeting actually might have had, you know, a significant impact on the way in which people viewed particular public policy issues.

Susie Alegre  25:41
Completely. And I mean, I don't know whether I was or was not targeted by Cambridge Analytica or similar around that time, around 2016/2017. I don't know if you've come across Who Targets Me, which is a plugin that you can put onto your browser to find out, particularly around election times, who is targeting you. And I have to say that when I very briefly joined a political party for a couple of months, I signed off my membership after a couple of months, because I discovered that they were targeting me and people in my household through this Who Targets Me plugin. Even though theoretically, as a member, I was already going to vote for them, that information was being used to pollute my online environment, as far as I'm concerned, which was a bit of an own goal, I imagine, for them.
Todd Landman  26:32
So that really does bring us to the question of what is to be done. So you know, I was waiting in the book for the regulatory answer, and you do give some good practical suggestions on a way forward. Because there is this challenge: we do need services. We do need mortgages, we need access to health care, we need public information, we need all the benefits that come from the digital world. But at the same time, we need to protect ourselves against the harms that the digital world can bring to us. So what are the three or four major things that need to happen to mitigate against the worst forms of what you're worried about in the book?

Susie Alegre  27:10
Well, one of the difficulties in the book was coming up with those things, if you like, the key things that we need to stop, particularly in an atmosphere where we are seeing regulation rapidly trying to play catch up. We've just seen the Digital Services Act agreed in the European Union, we have the Online Safety Bill on the table in the UK, and in Chile we've seen legislation around neuro-rights introduced in the last year. So it's a very fast-paced environment, and trying to come up with suggestions that go to the heart of it means recognising the complexity, and also recognising that it's in a huge state of flux. I wanted to really highlight the things that I think are at the core of how we've got here, and the very obvious things that we should not be doing. The first one of those is surveillance advertising. And that is advertising based on granular information, like we've been talking about, about our inner lives, including how we're feeling at any single moment, in order to decide what images and what messages we should be delivered.
And whether those are political messages, whether that is commercial messages, whether it's just trying to drag us into gambling when we're having a bad moment online, all of those kinds of things are part of this surveillance advertising ecosystem. And while surveillance advertising isn't the whole problem, I think that surveillance advertising is the oil that is driving this machine forward. If you don't have surveillance advertising, there isn't so much money in gathering all of this information about us. Because that information is valuable because it can sell us stuff, whether it's selling us a political candidate, or whether it's selling us a particular pair of socks tomorrow. And so surveillance advertising, I think, is the key. And I think banning surveillance advertising would be the single most effective way to start change. Another thing that I think could make a real sea change in the way tech develops is recommender algorithms. The things that are being recommended to us, the way that we receive our information, whether that is on Netflix or on news services, very personalised recommendations of information are a way of distorting how we think and how we see the world, based on information about our emotional states, information about our psychological vulnerabilities, a whole raft of things. That, I think, is a real vehicle for social control. Now, you may want occasionally, or even always, to have somebody suggesting what you should watch; when you're feeling tired, you don't want to make a decision yourself and you're happy to just be given whatever it is. But recommender algorithms and that kind of personalisation of information feeds should never, ever be the default. At the moment, for most of us, that is the situation. When we open up our laptops, when we open up social media, when we look at our phones, we're being given a curated, personalised experience without necessarily realising it.
So addressing that, and making sure that personalisation is not the automatic choice, would make a really big difference.

Todd Landman  30:53
It's just an amazing set of insights. You've taken us from Socrates to socks here today. And it's been an incredible journey listening to you, with so much to think about and so many unresolved issues. When I listen to you, and I read your book, you know, I feel like I should get off the grid immediately and put my hood on, because I don't want anyone reading my mind and I don't want anyone selling me socks. But for now, Susie, it was just great to have you on this episode of The Rights Track, and thanks ever so much.

Susie Alegre  31:20
My pleasure. Thank you so much for having me.

Christine Garrington  31:23
Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find detailed show notes on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

Freedom Matters
On Being & Freedom – Krista Tippett

Freedom Matters

Play Episode Listen Later Apr 28, 2022 38:23


This week we welcome Krista Tippett. Krista is a Peabody Award-winning broadcaster, a National Humanities Medalist, a New York Times bestselling author and the creator and host of On Being, a world-renowned podcast which sets out to explore the immensity of our human lives. In this extended episode, we discuss the importance of questions: how asking the right questions, and accepting that there may never be an answer, can help us to know ourselves better whilst enabling society to grow. We discuss the role of technology in our rush for answers, media's role in the portrayal of society, and just how Krista, through her career exploring humanity, has come to understand herself. Krista grew up in a small town in Oklahoma, attended Brown University, and became a journalist and diplomat in Cold War Berlin. She then lived in Spain and England before seeking a Master of Divinity at Yale University in the mid-1990s. Emerging from that, she saw a black hole where intelligent public conversation about the religious, spiritual, and moral aspects of human life might be, and launched On Being, a weekly NPR show, to fill it. In 2014, the year after she took On Being into independent production, President Obama awarded Krista the National Humanities Medal for "thoughtfully delving into the mysteries of human existence." On Being with Krista Tippett airs on more than 400 public radio stations across the U.S. and is distributed by WNYC Studios. The podcast has been played or downloaded more than 350 million times. == This is the first episode in our new mini-series on 'Self', where we explore how our technology impacts some of the most important aspects of being human.
Over the coming weeks we will speak with Krista Tippett, creator of On Being; Susie Alegre, human rights lawyer and author of Freedom to Think; Jillian Horton MD, doctor and author of We Are All Perfectly Fine; Casey Schwartz, author of Attention: A Love Story; L.M. Sacasas, renowned commentator on technology and society; and Sharath Jeevan OBE, motivation expert and author of Intrinsic. Our goal: to help all our listeners think more critically about the role of technology in our lives, and how it shapes who we are. Learn more about Krista and the On Being Project: https://onbeing.org/ Listen to the On Being show and podcast: https://onbeing.org/series/podcast/ Host and Producer: Georgie Powell https://www.sentientdigitalconsulting.com/ Music and audio production: Toccare https://spoti.fi/3bN4eqO

Seriously…
Under the Influence

Seriously…

Play Episode Listen Later Dec 10, 2021 29:14


Philosopher and author James Garvey examines the rise of behavioural science at the heart of our politics and its key role during the pandemic. A great deal of attention was paid to the government's slogan during the Covid crisis that politics would 'follow the science'. But not just branches of the natural sciences, like epidemiology, medicine and virology: our national politics is also being informed to an unprecedented degree by behavioural science, taking advice from experts with a remarkable understanding of human motivation, decision-making and action, and of how to steer whole populations from one mode of behaviour to another in a crisis, not only for medical purposes but also as a tool for government and social order. The Behavioural Insights Team was called to action and the Independent Scientific Pandemic Insights Group on Behaviours (SPI-B) convened, reporting directly to SAGE, which reported to No.10. James Garvey, who has written on the history of persuasion, explores the deep and ever-more-powerful relationship between politics, government and behavioural science. What are the key ideas here, and where did this alliance come from? What have been its strengths and weaknesses? James asks whether behavioural science techniques are being used to circumvent more traditional routes of democracy, such as manifestos, public debate and even our political consent, but also how behavioural insight is illuminating problems governments have found difficult or even intractable in the past, upturning older models of the public and benefiting the whole. He explores how online and digital technology might be used to amplify these techniques. Is this a pivotal moment for our politics?
Contributors include Brooke Rogers, chair of the Cabinet Office Behavioural Science Expert Group and co-chair of the Independent Scientific Pandemic Insights Group on Behaviours (SPI-B); behavioural economist Cass Sunstein, who co-authored the bestseller 'Nudge'; public health psychologist and member of SPI-B Chris Bonell; lawyer Susie Alegre, who specialises in freedom of thought and digital rights; author Peter Pomerantsev, who writes about propaganda and political influence; economist Shaun Hargreaves-Heap; social psychologist and SPI-B advisor Stephen Reicher; and David Halpern, Chief Executive of the Behavioural Insights Team. Presenter: James Garvey. Producer: Simon Hollis. A Brook Lapping production for BBC Radio 4.

Better Human Podcast
49 - Serving you up on the internet

Better Human Podcast

Play Episode Listen Later Jun 29, 2021 25:45


We are increasingly used to the internet serving us the things it thinks we need. But what is the risk to our privacy if our data is being harvested and used to 'personalise' the experience? In the final episode of the mini-series we are joined by Dr Elif Kuskonmaz of the University of Portsmouth. This podcast is part of a mini-series co-hosted with Susie Alegre, international human rights barrister, Associate at Doughty Street Chambers and Research Fellow at the University of Roehampton.

Better Human Podcast
49 - Are internet algorithms a problem for human rights?

Better Human Podcast

Play Episode Listen Later Jun 9, 2021 24:46


This week we speak to Lorna Woods, Professor of Internet Law at Essex University, about how algorithms on the internet are 'personalising' the content we see, and how this impacts on the rights to privacy and freedom of expression. This podcast is part of a mini-series co-hosted with Susie Alegre, international human rights barrister, Associate at Doughty Street Chambers and Research Fellow at the University of Roehampton.

Better Human Podcast
47 - Are algorithms making us less creative?

Better Human Podcast

Play Episode Listen Later Apr 19, 2021 38:01


Can a computer judge a creative writing competition? Do automatically curated newsfeeds help or hinder free expression? How does creativity interact with rights protections? A fascinating and timely discussion with Brendan de Caires of Pen Canada, hosted by barristers Adam Wagner and Susie Alegre. Show notes at www.betterhumanpodcast.com

Making Common Ground
Susie Alegre on social media, freedom of thought and how our anxieties are exploited

Making Common Ground

Play Episode Listen Later Feb 18, 2021 42:21


In episode seven of Making Common Ground, Alfred Landecker fellow Cat Neilan talks to human rights lawyer Susie Alegre about the risks that social media pose to our freedom of thought, and how late-night doomscrolling can be used to manipulate whether you vote.

Response-ability.Tech
Freedom of Thought in the Digital Age. With Susie Alegre

Response-ability.Tech

Play Episode Listen Later Sep 16, 2020 41:55


Our guest in this week's episode is Susie Alegre. Susie is an international human rights barrister and consultant with extensive experience working in international development and human rights policy and practice. She is talking in the FinTech stream at the conference on 9 October. Susie describes international human rights law as "ethics with teeth". She shares some of her career highlights and what sparked her interest in digital technology. We move on to discuss her interest in freedom of thought, and why it is so relevant in our 21st-century digital society, in which behavioural micro-targeting is a growing enterprise. Susie also talks about why algorithmic decisions need to be explicable in the context of financial services, and gives us a sneak peek into her talk at the conference. We loved talking to Susie and we hope you enjoy the episode. To find out more about Susie, visit https://susiealegre.com. Susie's article, Rethinking Freedom of Thought for the 21st Century, is discussed in the podcast. Susie is interviewed on the Better Human show: Forum Internum: Freedom of Thought (BBC Sounds). To catch Susie's talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.