Researcher in Data Ethics, Artificial Intelligence, and Robotics
The Potsdam Conference on National Cybersecurity at HPI is available to listen back to throughout July: each week, one of the conference's topics as a podcast episode. This time it's about artificial intelligence and the consequences it has for cybersecurity. Prof. Sandra Wachter, an expert on technology and regulation, moderates the panel on the topic.
Artificial Intelligence and Generative AI are changing our lives and society as a whole, from how we shop to how we access news and make decisions. Are current and traditional legal frameworks and new governance strategies able to guard against the novel risks posed by new systems? How can we mitigate AI bias, protect privacy, and make algorithmic systems more accountable? How are data protection, non-discrimination, free speech, libel, and liability laws standing up to these changes?

A lecture by Sandra Wachter recorded on 11 October 2023 at Barnard's Inn Hall, London. The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/technology-law

Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/

Website: https://gresham.ac.uk
Twitter: https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege
Whether in loan approvals, apartment hunting, or medicine: AI systems are becoming ever more capable, yet there are no rules governing their use, as data ethicist Sandra Wachter points out. Are we being discriminated against virtually? How is this changing our society? Sandra Wachter in conversation with Martin Mair. www.deutschlandfunkkultur.de, Zeitfragen. Direct link to the audio file
This special edition is co-hosted by Prof. Dr. Barbara Prainsack and is an event organised by the Research Platform: Governance of Digital Practices. We are very proud to welcome Prof. Dr. Sandra Wachter for an invited speech and debate. ----------- "When technology disrupts the law. AI decides who gets a loan, who gets to go to university, and who gets a job. Yet these systems are often opaque and biased, and we have very little understanding of how and why they make these life-changing decisions. We need to ask ourselves: is the law equipped to deal with these challenges?" ----------- Prof. Dr. Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford, where she researches the legal and ethical implications of AI, Big Data, and robotics, as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law. On the same day, Sandra Wachter will also give the keynote for this term's "Semesterfrage": "Was macht Digitalisierung mit der Demokratie?" ("What is digitalisation doing to democracy?"). Links: https://digigov.univie.ac.at/ https://www.oii.ox.ac.uk/people/profiles/sandra-wachter/ https://politikwissenschaft.univie.ac.at/en/about-us/staff/prainsack/ https://kalender.univie.ac.at/einzelansicht/?tx_univieevents_pi1[id]=30153 https://www.youtube.com/watch?app=desktop&v=JUriFFaTqW0
We all know tech companies collect a lot of data about us, and sell it to third parties. And it seems that nowadays, there isn't much we can't track. So is there a tipping point when it comes to how much intimate data – about our health, conversations, dating habits – tech ought to be able to access? Danielle Citron makes the case for treating data protection as a civil rights issue, and Sandra Wachter discusses the risks of algorithmic groups and discrimination.
Our computer-mouse movements alone are enough to draw inferences about illnesses such as Alzheimer's. What should, and what may, our data on the internet reveal about us? A conversation with one of the most important researchers on data ethics in the age of Twitter, Facebook, and co. Sandra Wachter became a professor at the Oxford Internet Institute at barely 30, by accident, as she says: "Things really always just came to me, without my seeking them out." Her role model was her grandmother, one of the first women admitted to the Technical University in Vienna. Today the legal scholar does her research where the complex questions of the internet are thought through most rigorously, because "we have to understand what identity the internet gives us. Technology should not discriminate against us; it should serve us."
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Sandra Wachter, an associate professor and senior research fellow at the University of Oxford. Sandra's work lies at the intersection of law and AI, focused on what she likes to call "algorithmic accountability". In our conversation, we explore algorithmic accountability in three segments: explainability/transparency, data protection, and bias, fairness, and discrimination. We discuss how the thinking around black boxes changes when it comes to applying regulation and law, as well as a breakdown of counterfactual explanations and how they're created (a toy sketch follows below). We also explore how factors like the lack of oversight lead to poor self-regulation, and the conditional demographic disparity test that she helped develop to test bias in models, which was recently adopted by Amazon. The complete show notes for this episode can be found at twimlai.com/go/521.
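To make the idea concrete, here is a minimal Python sketch of a counterfactual explanation, in the spirit of (but far simpler than) the optimisation method in Wachter, Mittelstadt, and Russell's counterfactual-explanations paper. The loan model, its features, and the brute-force random search are all invented for illustration:

```python
import numpy as np

# Toy stand-in for a black-box lender: approves when the weighted sum of
# (income, credit_history) crosses a threshold. Purely illustrative.
def model(x: np.ndarray) -> int:
    return int(x @ np.array([0.6, 0.4]) >= 0.5)

def counterfactual(x: np.ndarray, target: int = 1,
                   n_samples: int = 20_000, seed: int = 0) -> np.ndarray:
    """Sample random perturbations of x and return the closest one (in L2
    distance) whose prediction flips to `target` -- a brute-force stand-in
    for the optimisation used in the published method."""
    rng = np.random.default_rng(seed)
    candidates = x + rng.normal(scale=0.3, size=(n_samples, x.size))
    flipped = np.array([c for c in candidates if model(c) == target])
    if flipped.size == 0:
        raise ValueError("no counterfactual found; widen the search")
    return flipped[np.linalg.norm(flipped - x, axis=1).argmin()]

applicant = np.array([0.3, 0.2])   # denied: 0.6*0.3 + 0.4*0.2 = 0.26 < 0.5
cf = counterfactual(applicant)
print("decision flips:", model(applicant), "->", model(cf))
print("smallest change found:", np.round(cf - applicant, 3))
```

The returned delta reads directly as an explanation of the form "your loan would have been approved had your income been this much higher", and, as the episode discusses, producing it needs only query access to the model, not its internals.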
Will there ever be equality in machine learning technology, or will our cultural biases continue to be reflected in algorithms? Dr. Sandra Wachter from the Oxford Internet Institute argues in her latest research that data bias is unavoidable because of the bias already present within western culture. How we now try to negate that bias in AI is critical if we are ever to ensure that this technology meets current legislation like EU non-discrimination law. She's on the programme to discuss how we make real progress in AI equality. This research has come from the Oxford Internet Institute, whose new Director is also on the show: Professor Victoria Nash tells us of her plans in the new role.

EdTech in Malawi
A programme which allows seven-year-olds to have three lessons a week on iPads in Malawi is narrowing the learning gap between girls and boys. With an average class size of around 60 pupils with one teacher, young girls are often left behind and drop out of formal education, but with this individual approach many more are staying on in school. The programme is so successful it is now being rolled out to hundreds of schools, with the hope of going nationwide. Lucia Chidalengwa, Director for Education, Youth and Sports in Malawi's Ntcheu district, explains why this approach is so successful.

Online learning via your games console
With COVID cases rising in many countries and some regions even facing a third wave of the pandemic, many children around the world will continue to learn remotely. But what if there is no computer or laptop for them to use at home? How about converting a games console into an online school workstation? Reporter Chris Berrow shows you how to do it by powering up his games console and getting online to learn.

(Image: Getty Images)

The programme is presented by Gareth Mitchell with expert commentary from Bill Thompson.
Studio Manager: Giles Aspen
Producers: Emil Petrie and Ania Lichtarowicz
President Trump has given the Chinese-owned video-sharing app TikTok a deadline to sell off its US operations, or else he will have it shut down in the country. Microsoft and Oracle have been rumoured to be interested. Russell Brandom of tech site The Verge tells Ed Butler that the extent of what's on offer is over-hyped, but Jason Davis, associate professor of entrepreneurship at Insead, says a US-only version of the app would still have considerable merit. In any case, Sandra Wachter, associate professor at the Oxford Internet Institute, says the threat President Trump thinks TikTok represents won't go away simply by shaving off its US operations. Producer: Edwin Lane (Picture credit: Getty Images)
Berkman Klein Center for Internet and Society: Audio Fishbowl
Fairness and discrimination in algorithmic systems are globally recognized as topics of critical importance. To date, the majority of work in this area starts from an American regulatory perspective defined by the notions of 'disparate treatment' and 'disparate impact.' But European legal notions of discrimination are not equivalent. In this talk, Sandra Wachter, Visiting Professor at Harvard Law School and Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, Robotics and Internet Regulation at the Oxford Internet Institute (OII) at the University of Oxford, examines EU law and the jurisprudence of the European Court of Justice concerning non-discrimination, and identifies a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. Wachter discusses the evidential requirements for bringing a claim under EU non-discrimination law and proposes a statistical test as a baseline to identify and assess potential cases of algorithmic discrimination in Europe (sketched on toy data below).
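That statistical baseline is the conditional demographic disparity (CDD) measure mentioned in the TWIML episode above. Here is a minimal sketch of the arithmetic, assuming binary protected-group and rejection labels; the 'credit band' stratification and all the numbers are invented for illustration:

```python
import numpy as np

def demographic_disparity(protected: np.ndarray, rejected: np.ndarray) -> float:
    """DD = P(protected | rejected) - P(protected | accepted)."""
    return protected[rejected].mean() - protected[~rejected].mean()

def conditional_demographic_disparity(protected, rejected, strata) -> float:
    """Size-weighted average of per-stratum demographic disparity.
    Positive CDD: the protected group is over-represented among rejections
    even after conditioning on the stratifying attribute.
    (Assumes every stratum contains both accepted and rejected applicants.)"""
    protected = np.asarray(protected, dtype=bool)
    rejected = np.asarray(rejected, dtype=bool)
    strata = np.asarray(strata)
    cdd = 0.0
    for s in np.unique(strata):
        mask = strata == s
        cdd += mask.mean() * demographic_disparity(protected[mask], rejected[mask])
    return cdd

# Toy loan data, stratified by an assumed 'credit band' attribute.
protected = [1, 1, 0, 0, 1, 0, 1, 0]                  # 1 = protected group
rejected  = [1, 1, 0, 0, 1, 0, 0, 0]                  # 1 = application rejected
strata    = ["A", "A", "A", "A", "B", "B", "B", "B"]  # credit band
print(f"CDD = {conditional_demographic_disparity(protected, rejected, strata):.3f}")
```

The conditioning step is the point: a raw disparity can be an artefact of how groups are distributed across strata (Simpson's paradox), whereas CDD measures what remains once a legitimate stratifying attribute is held fixed.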
An interview on ethical questions around so-called artificial intelligence. Sandra Wachter is a professor in Oxford working on machine-assisted decisions and the ethical questions of "artificial intelligence". Philip Banse talks with her about the oversight of machine-assisted decisions (who is responsible when something goes wrong?) and the concept of "counterfactual explanations" (pdf), which is intended to make machine decisions easier to understand. Some of your questions also come up, for instance about technology research co-funded by large technology companies.
Sandra Wachter, Oxford Internet Institute, gives the fifth talk in the first Ethics in AI seminar, held on November 11th 2019.
In the final episode of our series, we’re looking back at the themes we’ve discussed so far, and forward into the likely development of AI. Professor Peter Millican will be joined by Professor Gil McVean, to further investigate how big data is transforming healthcare, by Dr Sandra Wachter, to discuss her recent work on the need for a legal framework around AI, and also by Professor Sir Nigel Shadbolt on where the field of artificial intelligence research has come from, and where it’s going. To conclude, Peter will be sharing some of his views on where humanity is heading with AI, when you’ll also hear from his final guest, Azeem Azhar, host of the Exponential View podcast. Futuremakers will be taking a short break now, but we’ll be back with series two in the new year, when we’ll be taking on another of society’s grand challenges: building a sustainable future. Before then we’ll also be publishing a special one-off episode on Quantum Computing and the global opportunities, and risks, it could present. To read more about some of the key themes in this episode, you can find Sandra Wachter’s recent papers below. - A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829 - Explaining Explanations in AI: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3278331 - Counterfactual explanations without opening the black box: automated decisions and the GDPR: https://jolt.law.harvard.edu/assets/articlePDFs/v31/Counterfactual-Explanations-without-Opening-the-Black-Box-Sandra-Wachter-et-al.pdf
Our lives are increasingly shaped by automated decision-making algorithms, but do those have in-built biases? If so, do we need to tackle these, and what could happen if we don't? Join our host, philosopher Peter Millican, as he explores this topic with Dr Sandra Wachter, a lawyer and Research Fellow at the Oxford Internet Institute, Dr Helena Webb, a Senior Researcher in the Department of Computer Science, and Dr Brent Mittelstadt, a philosopher also based at the Oxford Internet Institute.
In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra's research focuses on the legal and ethical implications of Big Data, AI, and robotics, as well as governmental surveillance, predictive policing, and human rights online. Her current work deals with the ethical design of algorithms, including the development of standards and methods to ensure fairness, accountability, transparency, interpretability, and group privacy in complex algorithmic systems. You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:05 - The rise of algorithmic/automated decision-making
3:40 - Why are algorithmic decisions so opaque? Why is this such a concern?
5:25 - What are the benefits of algorithmic decisions?
7:43 - Why might we want a 'right to explanation' of algorithmic decisions?
11:05 - Explaining specific decisions vs. explaining decision-making systems
15:48 - Introducing the GDPR - What is it and why does it matter?
19:29 - Is there a right to explanation embedded in Article 22 of the GDPR?
23:30 - The limitations of Article 22
27:40 - When do algorithmic decisions have 'significant effects'?
29:30 - Is there a right to explanation in Articles 13 and 14 of the GDPR (the 'notification duties' provisions)?
33:33 - Is there a right to explanation in Article 15 (the access right provision)?
37:45 - Is there any hope that a right to explanation might be interpreted into the GDPR?
43:04 - How could we explain algorithmic decisions? Introducing counterfactual explanations
47:55 - Clarifying the concept of a counterfactual explanation
51:00 - Criticisms and limitations of counterfactual explanations

Relevant Links
- Sandra's profile page at the Oxford Internet Institute
- Sandra's academia.edu page
- 'Why a right to explanation does not exist in the General Data Protection Regulation' by Wachter, Mittelstadt and Floridi
- 'Counterfactual explanations without opening the black box: Automated decisions and the GDPR' by Wachter, Mittelstadt and Russell
- The General Data Protection Regulation
- Article 29 Working Party guidance on the GDPR
- 'Do judges make stricter sentencing decisions when they are hungry?' and a Reply
In this MoTcast interview with host Ingo Stoll, Dr. Sandra Wachter of the renowned Oxford Internet Institute (OII) talks about digital ethics, artificial intelligence, and what matters when algorithms make decisions. Show notes at http://www.masters-of-transformation.org/motcast/042/