Podcasts about algorithmic accountability

  • 19 PODCASTS
  • 21 EPISODES
  • 30m AVG DURATION
  • INFREQUENT EPISODES
  • LATEST: Dec 8, 2024

POPULARITY

[Popularity chart, 2017-2024]



Latest podcast episodes about algorithmic accountability

Six Pixels of Separation Podcast - By Mitch Joel
SPOS #961 – Sandra Matz On Algorithms, Psychology And Human Behavior

Dec 8, 2024 · 64:51


Welcome to episode #961 of Six Pixels of Separation - The ThinkersOne Podcast. Sandra Matz is one of those rare individuals who sits at the intersection of academic rigor and cultural relevance. As a computational social scientist with a background in psychology and computer science, Sandra studies human behavior by uncovering the hidden relationships between our digital lives and our psychology. Her goal is to make data relatable and to help individuals and businesses make better, more ethical decisions. As the David W. Zalaznick Associate Professor of Business at Columbia Business School, Sandra has dedicated her career to understanding the hidden connections between human behavior and the data trails we leave behind. Over the last 10 years, she has published over 50 academic papers in the world's leading peer-reviewed journals. In her new book, Mindmasters - The Data-Driven Science Of Predicting And Changing Human Behavior, Sandra dives into how big data is not just a tool for understanding us but also for influencing our decisions - sometimes in ways that are empowering, other times in ways that are downright chilling. As someone who has always been fascinated by the promise and perils of technology, I found this conversation hit close to home. Sandra's perspective is nuanced: she's as much a champion of the transformative potential of algorithms in areas like mental health and financial well-being as she is a critic of their misuse for manipulation. Our conversation ranges from her conflicted feelings about the power of psychological targeting to her hope that these tools can help individuals lead happier, more balanced lives. What struck me most was her candor about the fine line between helpful nudges and invasive manipulation. Sandra is not just theorizing about these issues; she's actively shaping the conversation around them. If you're grappling with questions about the role of AI and algorithms in our lives - whether as a force for good or something we need to be deeply wary of - this episode will give you plenty to think about. Enjoy the conversation... Running time: 1:04:51.

Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect with me directly on Facebook here: Mitch Joel on Facebook. Check out ThinkersOne... or you can connect on LinkedIn... or on Twitter. Here is my conversation with Sandra Matz about Mindmasters - The Data-Driven Science Of Predicting And Changing Human Behavior. Follow Sandra on LinkedIn. This week's music: David Usher 'St. Lawrence River'.

Chapters:
(00:00) - Introduction to Computational Social Science
(03:00) - The Conflict of Technology and Psychology
(06:13) - Understanding Psychological Targeting
(08:58) - The Intimacy Economy vs. The Attention Economy
(11:52) - The Dangers of Data Privacy
(15:09) - The Impact of Google Searches on Personal Life
(17:56) - Mass Surveillance and Data Collection
(20:57) - The Role of Regulation in Data Privacy
(24:07) - Algorithmic Accountability
(26:49) - Synthetic Data and Its Implications
(30:09) - The Future of AI and Human Creativity
(33:01) - The Role of Algorithms in Society
(36:08) - The Importance of Perspective in AI
(41:59) - The Challenge of Transparency in Algorithms
(44:46) - Grassroots Movements and Algorithm Accountability
(47:46) - The Future of AI and Human Interaction
(51:05) - Conclusion and Reflections on Technology

Venture Capitalist's Daily
Biden orders AI safety measures; Algorithmic Accountability Act passes House

Oct 30, 2023 · 26:29


Generated by Tailor. Get your own personalized daily podcast! Sign up for free. In this episode, we discuss the latest developments in the AI industry, including President Biden's executive order to address AI safety and security, the Algorithmic Accountability Act that mandates impact assessments on AI systems, and JSW Ventures' 57% internal rate of return on its investment in beauty retailer Purplle. We also cover Skyroot Aerospace's recent funding round and its plans for small satellite launch vehicles. Tune in for the latest news and insights in the world of AI. Music: Mosaic [Electro] by Hardcore Scm. Licensed under: http://creativecommons.org/licenses/by/3.0/

News articles cited in this episode:
- Korea Investment Partners closes $60M Southeast Asia fund: https://techfundingnews.com/korea-investment-partners-closes-60m-southeast-asia-fund/
- What You Missed From Our WTF Conference; Biden's AI Executive Order Unveiled: https://www.theinformation.com/articles/what-you-missed-from-our-wtf-conference-bidens-ai-executive-order-unveiled
- AI Summit a 'moment of critical importance' for UK: http://startupsmagazine.co.uk/article-ai-summit-moment-critical-importance-uk
- GW Experts Available: Biden Administration Unveils Highly-Anticipated Executive Order on AI: http://www.newswise.com/articles/view/801677/?sc=rsla
- Democratizing AI With a Codeless Solution: https://www.marktechpost.com/2023/10/30/democratizing-ai-with-a-codeless-solution/
- Biden looks to get jump on AI with sweeping executive order: https://www.cbsnews.com/news/biden-ai-artificial-intelligence-executive-order/
- Biden releases AI executive order directing agencies to develop safety guidelines: https://www.theverge.com/2023/10/30/23914507/biden-ai-executive-order-regulation-standards
- Biden issues executive order to ensure responsible AI development: https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/
- Joe Biden's Sweeping New Executive Order Aims to Drag the US Government Into the Age of ChatGPT: https://www.wired.com/story/joe-bidens-executive-order-ai-us-government-chatgpt/
- Q+A: Woman claims she was discriminated against by AI technology and had to get her white boyfriend to help her: https://www.dailymail.co.uk/news/article-12688549/Q-Woman-claims-discriminated-against-AI-technology-white-boyfriend-help-her.html?ns_mchannel=rss&ns_campaign=1490&ito=1490
- Biden's Sweeping AI Executive Order Calls for Standards to 'Mitigate Harms' to Workers Posed by Artificial-Intelligence Tech: https://variety.com/2023/digital/news/biden-white-house-ai-executive-order-1235772868/
- Sweeping White House executive order takes aim at AI's toughest challenges: https://www.engadget.com/sweeping-white-house-ai-executive-order-takes-aim-at-the-technologys-toughest-challenges-090008655.html?src=rss
- 3 Keys To Longevity In The Startup World: https://www.forbes.com/sites/forbestechcouncil/2023/10/30/3-keys-to-longevity-in-the-startup-world/
- Ranjan Pai invests in beauty etailer Purplle; Skyroot gets Temasek boost: https://economictimes.indiatimes.com/tech/newsletters/tech-top-5/manipal-group-invests-in-purplle-skyroot-raises-27-million-from-temasek-others/articleshow/104829112.cms

Newsroom Robots
Uli Köppen: Algorithmic Accountability, Generative AI and Automation in Journalism at Germany's Bayerischer Rundfunk (Bavarian Broadcasting)

Aug 23, 2023 · 39:48


Uli Köppen, head of the AI + Automation Lab at German public broadcaster Bayerischer Rundfunk, joins Nikita Roy to discuss how BR's newsroom has integrated AI across its entire news cycle. Uli shares her team's work on algorithmic accountability, AI strategy, generative AI experiments, and their experience integrating AI in the newsroom. Uli also co-leads BR Data, the newsroom's investigative data team. The award-winning team at BR Data is pioneering the future of AI in journalism, drawing upon the experience of journalists, coders, and product developers to specialize in investigative data stories, interactive storytelling, and experimentation with AI. In 2019, she spent a year at Harvard and MIT as a Nieman Fellow, focusing on algorithmic accountability, machine bias, and automation in journalism. She also participated in the Online News Association's Women's Leadership Accelerator in 2022. Tune in to learn about advanced AI-driven media from one of Europe's leading voices in the field. Hosted on Acast. See acast.com/privacy for more information.

The Radical AI Podcast
More than a Glitch, Technochauvinism, and Algorithmic Accountability with Meredith Broussard

Mar 22, 2023 · 64:27


In this episode, we discuss Meredith Broussard's influential new book, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, published by MIT Press. Meredith is a data journalist, an associate professor at the Arthur L. Carter Journalism Institute of New York University, a research director at the NYU Alliance for Public Interest Technology, and the author of several books, including "More Than a Glitch" (which we cover in this episode) and "Artificial Unintelligence: How Computers Misunderstand the World." Her academic research focuses on artificial intelligence in investigative reporting and ethical AI, with a particular interest in using data analysis for social good. Full show notes for this episode, including the link to buy Meredith's new book, can be found at Radicalai.org.

Publicly Sited
Media, Technology and Culture 09 (2nd Edition): Algorithmic Technologies

Nov 30, 2021 · 29:14


There is now widespread awareness of, suspicion about, and even opposition to 'algorithms' - as widespread as the multiplicity of situations and domains in which these mysterious entities seem to be making more and more decisions: around welfare payments, university places, travel routes, and police patrol routes. Algorithms are also pervasive in media and communications. They build you customised magazines with news from several sources, and they help determine what movies you watch, the posts you see in your social media feeds, and the way a matchmaking website pairs you with others, not to mention all that advertising and direct marketing. Media today are personalised, whether we want them to be or not. And we are becoming more than a little worried about these algorithmic agents that seem to make all this personalisation possible. Their computational decision making, their capacities at deep learning: so hidden; so obscure. In this episode, we think about the growing role of algorithms in shaping contemporary media cultures, from the early rise of apps and personalised 'filter bubbles' to the rather ordinary recommendation systems we rely on today. We also grapple with growing concerns about how deep structural biases around race, class, gender and sexuality are embedded into and reinforced by the way algorithms - such as those enabling facial recognition technologies - actually work. But we will also ask: what if the politics of algorithms is not just about prying these black boxes open, revealing their internal biases and perhaps correcting them? Instead, might it be that we need to understand the problematic social and cultural conditions from which these algorithms and associated technologies sprout up, get nurtured and grow?

Thinkers Discussed: Eli Pariser (The Filter Bubble: What the Internet is Hiding From You); Blake Hallinan and Ted Striphas (Recommended for You: The Netflix Prize and the Production of Algorithmic Culture); Raymond Williams (Keywords); Daniela Varela Martinez and Anne Kaun (The Netflix Experience: A User-Focused Approach to the Netflix Recommendation Algorithm); Safiya Umoja Noble (Algorithms of Oppression: How Search Engines Reinforce Racism); Ruha Benjamin (Race After Technology: Abolitionist Tools for the New Jim Code); Fabio Chiusi (Automating Society); Axel Bruns (Are Filter Bubbles Real?); Frank Pasquale (The Black Box Society: The Secret Algorithms That Control Money and Information); Taina Bucher (If...Then: Algorithmic Power and Politics); Mike Ananny and Kate Crawford (Seeing Without Knowing: Limitations of the Transparency Ideal and its Application to Algorithmic Accountability).

AI with AI
Pet Shop Bots: BEHAVIOR

Sep 10, 2021 · 35:03


Andy and Dave discuss the latest in AI news and research, including:
0:46: The GAO releases a more extensive report on US federal agency use of facial recognition technology, including the purposes for which it is used.
3:24: The US Department of Homeland Security Science and Technology Directorate publishes its AI and ML Strategic Plan, with an implementation plan to follow.
5:39: The Ada Lovelace Institute, AI Now Institute, and Open Government Partnership publish a global study on Algorithmic Accountability for the Public Sector, which focuses on accountability mechanisms stemming from laws and policy.
9:04: Research from North Carolina State University shows that the benefits of autonomous vehicles will outweigh the risks, with proper regulation.
13:18: Research Section Introduction.
14:24: Researchers at the Allen Institute for AI and the University of Washington demonstrate that artificial agents can learn generalizable visual representations during interactive gameplay, embodied within an environment (AI2-THOR); agents demonstrated knowledge of the principles of containment, object permanence, and concepts of free space.
19:37: Researchers at Stanford University introduce BEHAVIOR (Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments), which establishes benchmarks for simulation of 100 activities that humans often perform at home.
24:02: A survey examines the dynamics of research communities and AI benchmarks, suggesting that hybrid, multi-institution, and persevering communities are the ones more likely to improve state-of-the-art performance, among other things.
28:54: Springer-Verlag makes Representation Learning for Natural Language Processing available online.
32:09: Terry Sejnowski and Stephen Wolfram publish a three-hour discussion on AI and other topics.
Follow the link below to visit our website and explore the links mentioned in the episode: https://www.cna.org/CAAI/audio-video

Dobcast
Roboto News 27.08.21

Aug 27, 2021 · 2:59


Roboto News 13.08.21. - Netflix is testing its mobile games in Poland. - The Twitch "blackout" is coming. - The Ada Lovelace Institute, the AI Now Institute and the Open Government Partnership published "Algorithmic Accountability for the Public Sector". If you're interested in the impact technology has on our societies: www.amenazaroboto.com. We invite you to follow @AmenazaRoboto on Twitter and Instagram.

Publicly Sited
Media, Technology and Culture 09: Algorithmic Technologies

Mar 13, 2021 · 27:14


There is now widespread awareness of, suspicion about, and even opposition to 'algorithms' - as widespread as the multiplicity of situations and domains in which these mysterious entities seem to be making more and more decisions: around welfare payments, university places, travel routes, and police patrol routes. Algorithms are also pervasive in media and communications. They build you customised magazines with news from several sources, and they help determine what movies you watch, the posts you see in your social media feeds, and the way a matchmaking website pairs you with others, not to mention all that advertising and direct marketing. Media today are personalised, whether we want them to be or not. And we are becoming more than a little worried about these algorithmic agents that seem to make all this personalisation possible. Their computational decision making, their capacities at deep learning: so hidden; so obscure. In this episode, we think about the growing role of algorithms in shaping contemporary media cultures, from the early rise of apps and personalised 'filter bubbles' to the rather ordinary recommendation systems we rely on today. We also grapple with growing concerns about how deep structural biases around race, class, gender and sexuality are embedded into and reinforced by the way algorithms - such as those enabling facial recognition technologies - actually work. But we will also ask: what if the politics of algorithms is not just about prying these black boxes open, revealing their internal biases and perhaps correcting them? Instead, might it be that we need to understand the problematic social and cultural conditions from which these algorithms and associated technologies sprout up, get nurtured and grow?

Thinkers Discussed: Eli Pariser (The Filter Bubble: What the Internet is Hiding From You); Blake Hallinan and Ted Striphas (Recommended for You: The Netflix Prize and the Production of Algorithmic Culture); Raymond Williams (Keywords); Daniela Varela Martinez and Anne Kaun (The Netflix Experience: A User-Focused Approach to the Netflix Recommendation Algorithm); Safiya Umoja Noble (Algorithms of Oppression: How Search Engines Reinforce Racism); Ruha Benjamin (Race After Technology: Abolitionist Tools for the New Jim Code); Axel Bruns (Are Filter Bubbles Real?); Frank Pasquale (The Black Box Society: The Secret Algorithms That Control Money and Information); Taina Bucher (If...Then: Algorithmic Power and Politics); Mike Ananny and Kate Crawford (Seeing Without Knowing: Limitations of the Transparency Ideal and its Application to Algorithmic Accountability).

DataOps Podcast
Algorithmic Accountability in the Age of Big Data, Bias and Racism

Jul 20, 2020 · 26:03


Join Banjo and Victoria as they talk with Ayodele, a data scientist, about how biases are embedded in big data and how they can lead to systemic racism. We discuss strategies to raise awareness and how to bring about change through algorithmic accountability.

The AI Experience
Episode 013: Algorithmic Bias

May 29, 2020 · 32:22


In this episode, Lloyd talks about ICED(AI)'s pledge to provide $100,000 worth of pro-bono consulting hours to small businesses and individuals who have been impacted by discrimination and social unrest. The conversation then focuses on the ways in which human biases seep into algorithmic decision making, as well as the impact of this phenomenon on society. You can read more about the pledge here: https://bit.ly/ICEDAI_Pledge

Episode Guide:
2:02 - The ICED(AI) Pledge
3:20 - Intro to Algorithmic Bias
6:40 - Racial Bias in Healthcare Algorithms
10:18 - Biased Training Data
12:10 - The Orwellian Future of Facial Recognition
14:51 - Algorithmic Accountability
16:04 - Disparate Algorithmic Impact
18:04 - The Curious Case of Car Insurance
19:56 - Effective Altruism & Challenging Assumptions
24:29 - A North Korean Anecdote
29:44 - The Need for Compassion & Empathy

More Info: Visit us at aiexperience.org. Brought to you by ICED(AI). Host - Lloyd Danzig

Internationalism
The World In You Panels, Season 1 | JECIL 2020 Edition | Episode 7

Mar 24, 2020 · 45:12


Presenting The World In You Panels, where you will find some of the most impeccable panels on various collateral and contemporary issues of international law and relations. In Episode 7, we discuss The Gaps in Algorithmic Accountability & Policing in Developing Asian Economies: The Politico-economic Perspective with Mr Kunal Mandal, Chief Knowledge Architect, GyaanSpace. Abhivardhan moderated the session. The talk is a part of The Juris En Conference on International Law, 2020. Internationalism, available at: internationalism.co.in. Also at: Facebook | Anchor.fm | Instagram | Twitter | LinkedIn | Apple Podcasts | Google Podcasts | Spotify | Breaker | Castbox | Pocket Casts | RadioPublic | Stitcher

Ethics of AI in Context
Frank Pasquale, Judicial Bias, Algorithmic Accountability, and Legal Innovation

Dec 26, 2019 · 13:36


Studies suggesting judicial bias have made machine learning and artificial intelligence alluring to many. There is a great deal of marketing and disruptive technology in the field of legal innovation; however, there is a parallel movement that seeks to achieve algorithmic accountability and actively critiques these innovations. How do we reconcile the two? Frank Pasquale, University of Maryland Law. Recorded at Legal Ethics in the Age of Law & Tech, Centre for Ethics, University of Toronto, March 24, 2017.

Inside PR
Algorithmic Accountability and Privacy – Inside PR 543

Jul 14, 2019 · 16:19


We consider the implications of an Algorithmic Accountability Act, rebalancing the freedom of companies to capture and use our data with our right to informed consent. Plus: protect your privacy against hidden cameras during your next business trip.

The Silicon Valley Insider Show with Keith Koo
Government I.T. - Keeping Up with the Pace: Sean O'Kelly and Randy Kowalski, Former State of Illinois I.T. Execs

Jul 5, 2019 · 38:13


On this week's Silicon Valley Insider, Keith's special guests are former State of Illinois I.T. executives: Sean T. O'Kelly, Chief Information Officer (CIO) of the State of Illinois for Financial and Professional Regulation, and Randy Kowalski, Deputy Director of Entrepreneurship, Technology and Innovation and now Principal and Co-Founder of the Illinois Smart State and Region Association. Sean and Randy discuss with Keith how they started their careers in the private sector and how they transitioned to the public sector, which led to the drafting of the proposed Illinois Smarter State Initiative. With their deep industry experience, Sean and Randy bring a unique perspective to the opportunities and challenges that municipalities the size and scale of Illinois and Chicago face in trying to keep up with the rapid pace of technology change and adoption. Topics discussed on the show are 5G, Unmanned Aerial Vehicles (UAVs/drones), Human (Citizen) Centered Design, and Algorithmic Accountability (using A.I.). In light of San Francisco's ban on facial recognition technology for law enforcement, Sean and Randy explain how governments prioritize keeping their citizens' information safe. On the Cyber-Tip of the Week, Keith discusses the latest rise in successful ransomware attacks targeting municipalities and cities - such as the recent attacks on Riviera Beach, FL, Lake City, FL, and the court system of Georgia - and the ransom payments by the cities in Florida. Keith gives advice on how any organization, including cities, should secure itself. On the Pivot, Keith is once again joined by Sean T. O'Kelly and Randy Kowalski to discuss how cities adapt to technology shifts. Tune in to hear more! www.svin.biz. Listen Fridays 1-2pm on 1220AM KDOW Silicon Valley | San Francisco. Listen and subscribe to the "Silicon Valley Insider" podcast ahead of time to make sure you don't miss this show. First airing is 1-2pm on 1220AM KDOW. Download the podcast at 2pm Fridays. For questions or comments, email info@svin.biz. You can also listen to past podcasts here: Castbox: https://castbox.fm/channel/The-Silicon-Valley-Insider-Show-with-Keith-Koo-id1100209?country=us iTunes: https://itunes.apple.com/us/podcast/the-silicon-valley-insider-show/id1282637717?mt=2 Android, Spotify (and iTunes): https://omny.fm/shows/the-silicon-valley-insider-show

Water & Music
Episode 8 (ft. Eugene Kan + Charis Poon): The case for greater creative and algorithmic accountability in the music industry

Jun 25, 2019 · 75:12


MAEKAN's Eugene Kan and Charis Poon join this episode to discuss the key ideas in their essay "The Modern Creator's Paradigm," which argues that continuing cultural innovation will require greater creative and critical accountability from artists, platforms and consumers combined. We discuss the undeniable gatekeeping of music streaming platforms, the positive implications of raising financial barriers to creation and consumption, how Lil Nas X's career would not exist without multiple forms of cultural critique and how the future of media can escape endless news cycles and the psychology of "never getting caught up." At the end, we discuss Taylor Swift's latest music video, the role of merch bundles in establishing artists' cultural influence (and chart placement) and the departure of several key execs from YG Entertainment.

Philosophical Disquisitions
Episode #41 - Binns on Fairness in Algorithmic Decision-Making

Jul 12, 2018


In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show notes:
0:00 - Introduction
1:46 - What is algorithmic decision-making?
4:20 - Isn't all decision-making algorithmic?
6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate
12:02 - Limitations of the COMPAS debate
15:22 - Other examples of unfairness in algorithmic decision-making
17:00 - What is discrimination in decision-making?
19:45 - The mental state theory of discrimination
25:20 - Statistical discrimination and the problem of generalisation
29:10 - Defending algorithmic decision-making from the charge of statistical discrimination
34:40 - Algorithmic typecasting: Could we all end up like William Shatner?
39:02 - Egalitarianism and algorithmic decision-making
43:07 - The role that luck and desert play in our understanding of fairness
49:38 - Deontic justice and historical discrimination in algorithmic decision-making
53:36 - Fair distribution vs Fair recognition
59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making?

Relevant Links:
Reuben's homepage
Reuben's institutional page
'Fairness in Machine Learning: Lessons from Political Philosophy' by Reuben Binns
'Algorithmic Accountability and Public Reason' by Reuben Binns
'It's Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making' by Binns et al
'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm
'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al - an impossibility proof showing that you cannot minimise false positive rates and equalise accuracy rates across two populations at the same time (except in the rare case that the base rate for both populations is the same)

re:publica 18 - Alle Sessions
OpenSCHUFA - Crowdsourcing algorithmic accountability

May 3, 2018 · 25:03


Walter Palmetshofer, Lorenz Matzat. There are few private companies whose decisions have as great an influence on the lives of as many citizens in Germany as SCHUFA Holding AG. Whether it's a mobile phone contract, a rental apartment or a construction loan - it is almost always the Schufa's score that decides yes or no. Although the Schufa is subject to public oversight, the results of these audits are secret - and not only that: the SCHUFA may also largely keep secret how it arrives at its scores. One affected person's legal battle for disclosure failed at every level, up to the Federal Court of Justice (BGH). Can crowdsourcing and algorithmic accountability reporting be used to bring more of the Schufa's secrets to light, and to reveal the basis on which it exercises its nearly unlimited power? At rp18 we will present the first results of our research campaign OpenSCHUFA and discuss them with the community. Background: the question of what influence automated systems have on our rights and freedoms has been intensively discussed in recent years. The trouble is, the evidence is missing. Worldwide, there are still too few examples of how such systems have actually been examined in practice. With the Datenspende project on the 2017 German federal election, AlgorithmWatch showed that even complex systems can be probed from the outside. It is laborious and requires a great deal of imagination, but it is possible. OpenSCHUFA is one example.

re:publica 18 - Politics & Society
OpenSCHUFA - Crowdsourcing algorithmic accountability

May 3, 2018 · 25:03


Walter Palmetshofer, Lorenz Matzat. There are few private companies whose decisions have as great an influence on the lives of as many citizens in Germany as SCHUFA Holding AG. Whether it's a mobile phone contract, a rental apartment or a construction loan - it is almost always the Schufa's score that decides yes or no. Although the Schufa is subject to public oversight, the results of these audits are secret - and not only that: the SCHUFA may also largely keep secret how it arrives at its scores. One affected person's legal battle for disclosure failed at every level, up to the Federal Court of Justice (BGH). Can crowdsourcing and algorithmic accountability reporting be used to bring more of the Schufa's secrets to light, and to reveal the basis on which it exercises its nearly unlimited power? At rp18 we will present the first results of our research campaign OpenSCHUFA and discuss them with the community. Background: the question of what influence automated systems have on our rights and freedoms has been intensively discussed in recent years. The trouble is, the evidence is missing. Worldwide, there are still too few examples of how such systems have actually been examined in practice. With the Datenspende project on the 2017 German federal election, AlgorithmWatch showed that even complex systems can be probed from the outside. It is laborious and requires a great deal of imagination, but it is possible. OpenSCHUFA is one example.

Matteo Flora
Quale "Algorithmic Accountability" per l'Intelligenza Artificiale?

May 17, 2017 · 4:45


In a world increasingly dominated by algorithms, you hear more and more talk of "algorithm transparency" or "Algorithmic Accountability": the idea that algorithms should be deposited and validated to safeguard competition. But what if it were physically impossible to analyze an algorithm? And no, this is not science fiction - it is exactly what happens with Artificial Intelligence and machine learning, where we are dealing with a "black box". How do we adapt to a model of this kind? I discuss it in one of my "complex" videos :)


Vulnerable By Design
1. The Papers: Algorithmic Accountability

20:05


How are public sector bodies held accountable for their use of algorithmic systems? We look at a recent report.