POPULARITY
When ChatGPT answers the Wahl-O-Mat questions for the European election, it votes for the Greens (or Volt, or the Tierschutzpartei). Why is that? The video examines machine bias, model decay, and AI alignment to find out. The version used was 4o. For the exact prompt, see the pinned comment. The Wahl-O-Mat: https://www.wahl-o-mat.de/europawahl2... The book mentioned, "Schummeln mit ChatGPT": https://www.amazon.de/exec/obidos/ASI... https://www.amazon.de/exec/obidos/ASI... Video: Parteien im Strategie-Check: • Parteien im Strategie-... Video: Zensur durch ChatGPT: • Zensur durch ChatGPT: ... Why is ChatGPT so left-leaning? • Warum ist ChatGPT so l... ►MORE INFORMATION FROM TEAM RIECK: This video discussed central concepts such as machine bias, model decay, and AI alignment. These concepts are crucial to the development and deployment of artificial intelligence (AI) in sensitive areas such as politics. Below I would like to introduce and expand on a few further relevant concepts to give a fuller picture of the challenges and considerations involved. -Data Drift: Data drift describes a change in the distribution of the data over time. Such changes can be caused by new trends, shifting user behavior, or external factors such as political events. A model trained on historical data can lose accuracy when current data deviates significantly from it. In political analysis, data drift can mean that a model based on data from earlier elections is no longer reliable once voter preferences or societal priorities change. -Feedback Loop: A feedback loop arises when the outputs of an AI system are fed back into the system and influence future inputs. This can happen on social media, where recommendations shape user behavior, which in turn changes the training data. The result can be amplified distortions and polarization, because the system favors ever more extreme content. In political contexts, such loops can make public opinion and the political climate unpredictable and potentially destabilize them. -Concept Drift: Concept drift refers to changes in the underlying relationships in the data that the model is trying to learn. In politics, these relationships can shift through new political movements, legislative changes, or societal developments. A model built on outdated concepts will not capture these new dynamics correctly and will therefore make inaccurate or irrelevant predictions. -Ethical AI: Ethical AI is concerned with developing and deploying AI systems that respect ethical principles and values. This is especially important for ensuring that AI decisions are fair, transparent, and accountable. In political applications, models must not systematically disadvantage particular groups, and their decisions must be traceable and just. This builds public trust in AI and minimizes the risk of misuse and discrimination. These concepts help in understanding the challenges and in developing strategies for building fair, robust, and trustworthy AI systems that meet the dynamic demands of our society.
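The data-drift idea described above can be made concrete with a small sketch. This is an illustrative example only, not something from the video: the data, variable names, and alert threshold are invented, and the two-sample KS test is just one of several ways to flag a shift in a feature's distribution.

```python
# Minimal sketch: flag data drift by comparing a reference (training-time)
# feature distribution with a current (production-time) sample.
# All data, names, and thresholds here are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference data the model was trained on, e.g. survey answers from an earlier year.
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Current data: the distribution has shifted (new mean), e.g. changed priorities.
current = rng.normal(loc=0.4, scale=1.0, size=5_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the distributions differ.
stat, p_value = ks_2samp(reference, current)

ALERT_LEVEL = 0.01  # illustrative threshold
if p_value < ALERT_LEVEL:
    print(f"Drift suspected (KS statistic={stat:.3f}, p={p_value:.2e}); consider retraining.")
else:
    print("No significant drift detected.")
```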
:) ►MORE FROM CHRISTIAN RIECK: *Die 36 Strategeme der Krise: ○Print: https://www.amazon.de/exec/obidos/ASI... ○Kindle: https://www.amazon.de/exec/obidos/ASI... *Schummeln mit ChatGPT: ○https://www.amazon.de/exec/obidos/ASI... ○https://www.amazon.de/exec/obidos/ASI... *Digni-Geld - Einkommen in den Zeiten der Roboter: ○Print: http://www.amazon.de/exec/obidos/ASIN... ○Ebook: http://www.amazon.de/exec/obidos/ASIN... ○YouTube: https://www.youtube.com/c/ProfRieck?s... ○Instagram: / profrieck ○Twitter: / profrieck ○LinkedIn: / profrieck #profrieck #chatgpt #künstlicheintelligenz
In many countries, police use software that supposedly helps prevent crimes before they're committed. Proponents say this makes cities safer. Critics say it leads to increased discrimination. How does it work?
In the second episode of this mini-series on the Future of Technology, we will hear from Vint Cerf, Vice President & Chief Internet Evangelist at GOOGLE, and widely known as one of the “Fathers of the Internet,” and Matt Hutson, Contributing Writer at THE NEW YORKER. They will walk us through the challenges and opportunities that Machine Learning presents, and what the future may hold for this technology. What is Machine Learning? How does it differ from AI? What are the limits of simulating human discourse? How can we detect machine-made mistakes and judge the confidence with which a computer reaches its conclusions? Vinton G. Cerf, Vice President & Chief Internet Evangelist, GOOGLE: In this role, he is responsible for identifying new enabling technologies to support the development of advanced, Internet-based products and services from Google. He is also an active public face for Google in the Internet world. Widely known as one of the “Fathers of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. In December 1997, President Clinton presented the U.S. National Medal of Technology to Cerf and his colleague, Robert E. Kahn, for founding and developing the Internet. Kahn and Cerf were named the recipients of the ACM Alan M. Turing award in 2004 for their work on the Internet protocols. In November 2005, President George Bush awarded Cerf and Kahn the Presidential Medal of Freedom for their work. The medal is the highest civilian award given by the United States to its citizens. In April 2008, Cerf and Kahn received the prestigious Japan Prize. Cerf is a recipient of numerous awards and commendations in connection with his work on the Internet. Cerf holds a Bachelor of Science degree in Mathematics from Stanford University and Master of Science and Ph.D. degrees in Computer Science from UCLA. Matthew Hutson, Contributing Writer, THE NEW YORKER: Matthew Hutson is a freelance science writer in New York City and a Contributing Writer at The New Yorker. He also writes for Science, Scientific American, The Wall Street Journal, and other publications, and he's the author of “The 7 Laws of Magical Thinking.” Thanks for listening! Please be sure to check us out at www.eaccny.com or email membership@eaccny.com to learn more!
A New York law is forcing a rethink on bias in AI. What are the practical outcomes of this type of legislation, and how do you remove bias in AI? Plus, the Supreme Court agrees to hear a case about the liability of platforms for hosting objectionable content. Starring Tom Merritt, Sarah Lane, Andrea Jones-Rooy, Roger Chang, Joe. Link to the Show Notes. Become a member at https://plus.acast.com/s/dtns. Hosted on Acast. See acast.com/privacy for more information.
A New York law is forcing a rethink on bias in AI. What are the practical outcomes of this type of legislation, and how do you remove bias in AI? Plus, the Supreme Court agrees to hear a case about the liability of platforms for hosting objectionable content. Starring Tom Merritt, Sarah Lane, Andrea Jones-Rooy, Roger Chang, Joe, and Amos. MP3 Download Using a Screen Reader? Click here Multiple versions (ogg, video etc.) from Archive.org Follow us on Twitter, Instagram, YouTube, and Twitch. Please SUBSCRIBE HERE. Subscribe through Apple Podcasts. A special thanks to all our supporters–without you, none of this would be possible. If you are willing to support the show or to give as little as 10 cents a day on Patreon, thank you! Become a Patron! Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme! Big thanks to Mustafa A. from thepolarcat.com for the logo! Thanks to our mods Jack_Shid and KAPT_Kipper on the subreddit. Send email to feedback@dailytechnewsshow.com Show Notes: To read the show notes on a separate page click here!
Itoro is an absolute star, and I loved our conversation and learning about Itoro and her journey to data and data science. Her journey to where she is now has been an amazing one. She started out as an engineer and, through her curiosity, embarked on changing her career, getting into data and specifically becoming a data scientist! She currently works at the Principality Building Society and leads a team of data scientists and data analysts. When she's not doing her day job she's also doing a PhD at King's and carries out research on such topics as Ethical AI, Machine Learning Fairness, Machine Bias and Explainable AI! Her background spans engineering, statistics and computer science. As a STEM Ambassador, Itoro is deeply passionate about empowering women in STEM. Please do like, share and comment, and I hope you enjoy listening to Itoro. Behind every data leader there is a person, and here is Itoro unplugged.
Episode Description As the conversation around AI continues, Professor Cynthia Rudin, Computer Scientist and Director at the Prediction Analysis Lab at Duke University, is here to discuss interpretable machine learning and her incredible work in this complex and evolving field. To begin, she is the most recent (2021) recipient of the $1M Squirrel AI Award for her work on making machine learning more interpretable to users and ultimately more beneficial to humanity. In this episode, we explore the distinction between explainable and interpretable machine learning and how black boxes aren't necessarily “better” than more interpretable models. Cynthia offers up real-world examples to illustrate her perspective on the role of humans and AI, and shares takeaways from her previous work, which ranges from predicting criminal recidivism to predicting manhole cover explosions in NYC (yes!). I loved this chat with her because, for one, Cynthia has strong, heavily informed opinions from her concentrated work in this area, and secondly, because Cynthia is thinking about both the end users of ML applications as well as the humans who are “out of the loop,” but nonetheless impacted by the decisions made by the users of these AI systems. In this episode, we cover: Background on the Squirrel AI Award – and Cynthia unpacks the differences between Explainable and Interpretable ML. (00:46) Using real-world examples, Cynthia demonstrates why black boxes should be replaced. (04:49) Cynthia's work on the New York City power grid project, exploding manhole covers, and why it was the messiest dataset she had ever seen. (08:20) A look at the future of machine learning and the value of human interaction as it moves into the next frontier. (15:52) Cynthia's thoughts on collecting end-user feedback and keeping humans in the loop. (21:46) The current problems Cynthia and her team are exploring—the Rashomon Set, optimal sparse decision trees, sparse linear models, causal inference, and more. (32:33) Quotes from Today's Episode “I've been trying to help humanity my whole life with AI, right? But it's not something I tried to earn because there was no award like this in the field while I was trying to do all of this work. But I was just totally amazed, and honored, and humbled that they chose me.”- Cynthia Rudin on receiving the AAAI Squirrel AI Award. (@cynthiarudin) (1:03) “Instead of trying to replace the black boxes with inherently interpretable models, they were just trying to explain the black box. And when you do this, there's a whole slew of problems with it. First of all, the explanations are not very accurate—they often mislead you. Then you also have problems where the explanation methods are giving more authority to the black box, rather than telling you to replace them.”- Cynthia Rudin (@cynthiarudin) (03:25) “Accuracy at all costs assumes that you have a static dataset and you're just trying to get as high accuracy as you can on that dataset. [...] But that is not the way we do data science. In data science, if you look at a standard knowledge discovery process, [...] after you run your machine learning technique, you're supposed to interpret the results and use that information to go back and edit your data and your evaluation metric. And you update your whole process and your whole pipeline based on what you learned. So when people say things like, ‘Accuracy at all costs,' I'm like, ‘Okay. 
Well, if you want accuracy for your whole pipeline, maybe you would actually be better off designing a model you can understand.'”- Cynthia Rudin (@cynthiarudin) (11:31) “When people talk about the accuracy-interpretability trade-off, it just makes no sense to me because it's like, no, it's actually reversed, right? If you can actually understand what this model is doing, you can troubleshoot it better, and you can get overall better accuracy.“- Cynthia Rudin (@cynthiarudin) (13:59) “Humans and machines obviously do very different things, right? Humans are really good at having a systems-level way of thinking about problems. They can look at a patient and see things that are not in the database and make decisions based on that information, but no human can calculate probabilities really accurately in their heads from large databases. That's why we use machine learning. So, the goal is to try to use machine learning for what it does best and use the human for what it does best. But if you have a black box, then you've effectively cut that off because the human has to basically just trust the black box. They can't question the reasoning process of it because they don't know it.”- Cynthia Rudin (@cynthiarudin) (17:42) “Interpretability is not always equated with sparsity. You really have to think about what interpretability means for each domain and design the model to that domain, for that particular user.”- Cynthia Rudin (@cynthiarudin) (19:33) “I think there's sometimes this perception that there's the truth from the data, and then there's everything else that people want to believe about whatever it says.”- Brian T. O'Neill (@rhythmspice) (23:51) “Surveys have their place, but there's a lot of issues with how we design surveys to get information back. And what you said is a great example, which is 7 out of 7 people said, ‘this is a serious event.' But then you find out that they all said serious for a different reason—and there's a qualitative aspect to that. […] The survey is not going to tell us if we should be capturing some of that information if we don't know to ask a question about that.”- Brian T. O'Neill (@rhythmspice) (28:56) Links Squirrel AI Award: https://aaai.org/Pressroom/Releases/release-21-1012.php “Machine Bias”: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Users.cs.duke.edu/~cynthia: https://users.cs.duke.edu/~cynthia Teaching: https://users.cs.duke.edu/~cynthia/teaching.html
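A rough way to sanity-check the claim above, that an interpretable model is not automatically less accurate than a black box, is to pit a small, auditable model against a black-box ensemble on the same task. The sketch below is an illustrative assumption (the dataset, models, and metric are chosen for convenience), not Rudin's own methodology or her sparse-model work.

```python
# Rough sketch: compare a small, interpretable model with a black-box ensemble
# on the same task. Dataset and models are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# A depth-3 tree can be printed and audited in full by a human reader.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)

# A 200-tree forest is, for practical purposes, a black box.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("shallow tree", interpretable), ("random forest", black_box)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```

On many tabular problems the gap between the two is small, which is the point of the argument; where the gap is large, the passage above suggests revisiting the data and pipeline rather than assuming the black box must win.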
On this episode, it's all about ethics in AI! We'll be sharing different stories about how AI is being used, what the pitfalls are, and who in the field is trying to make changes. We chat to the next generation of AI experts to understand how their institutions are preparing them (or not) to use AI ethically. Surya Mattu, a data scientist who was part of the Pulitzer-nominated ProPublica investigation "Machine Bias", talks to us about the report that jumpstarted a global conversation. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (Machine Bias) To understand the current landscape of ethics and AI, we spoke to one of the most prominent advocates for inclusion and diversity in the field, Dr. Timnit Gebru. Dr. Gebru is a Research Scientist on the Ethical AI team at Google and founder of Black in AI (@black_in_ai). How has our world come to associate the assistance of AI with women? Dr. Myriam Sweeney, assistant professor of Library and Information Studies at the University of Alabama, helps us navigate this. Hear a reenactment of the 1920 play Rossum's Universal Robots by Czech playwright Karel Capek, who coined the term robot (acted by Morgan Sweeney and Matt Goldberg). Lastly, Dr. Kirk Bansak highlights the possibilities of using AI for good, including helping to place refugees in the best possible host communities. AI guides: https://www.wired.com/story/guide-artificial-intelligence/ https://towardsdatascience.com/ai-machine-learning-deep-learning-explained-simply-7b553da5b960
Suzanne Willett is a comedian, producer and playwright with a master's degree in electrical engineering and an MFA in playwriting. Jacob Louchheim is an actor, singer and director with a degree in Theatrical Performance from SUNY Purchase College. They met at the fall intensive class at SITI Company, where they studied the Suzuki Method and Viewpoints. They took that training and are now working together at Silver Glass Productions creating "Life" along with fellow director Broderick Merritt Ballantyne. They're looking at the Prometheus Effect, Human Identity and Machine Bias through movement and physical theatre. For information about Silver Glass Productions, visit: http://www.silverglassprods.org For information about Suzanne Willett, visit: http://www.suzannewillett.com/ For information about Jacob Louchheim, visit: https://jacoblouchheim.com/ Attribution: Logo: Ritzy Remix font by Nick Curtis - www.nicksfonts.com Music and Sound: cello_tuning by flcellogrl / Licence: CC BY 3.0 freesound.org/people/flcellogrl/sounds/195138/ Flute Play C - 08 by cms4f / Licence: CC0 1.0 freesound.org/people/cms4f/sounds/159123/ "Danse Macabre - Violin Hook" Kevin MacLeod (incompetech.com) / Licence: CC BY 3.0 Licenses: CC BY 3.0 - creativecommons.org/licenses/by/3.0/ CC0 1.0 - http://creativecommons.org/publicdomain/zero/1.0/
Follow the Data: On this week's episode of Track Changes, tech journalist Adrianne Jeffries sits down with us to talk about The Markup, a new non-profit, data-driven newsroom. She talks about the importance of using data to combat bias and about how using data in journalism can bring about greater change. She also addresses the shake-up that happened at The Markup early on and tells us about her personal podcast Underunderstood. Links: - The Markup - Propublica - Machine Bias by Propublica - HUD Sues Facebook Over Housing Discrimination by Propublica - New York Times - Buzzfeed - Yelp is Screwing Over Restaurants By Quietly Replacing Their Phone Numbers - ReadWrite - TechCrunch - New York Observer - Motherboard - The Outline
What is machine bias? And how do you create data-driven services and products? In this episode of #LØRN, Silvija talks with Hanne-Torill Mevik, Senior Data Scientist at Making Waves, about what machine learning is and what is exciting and scary about AI. "Machine bias is the problem that we lack techniques for understanding what goes on inside the black box, and the unfortunate consequences of using technology you cannot explain in services that directly affect people's lives," she explains. What you will LØRN: AI, Machine Bias, Machine Learning. See acast.com/privacy for privacy and opt-out information.
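The black-box problem Mevik describes can be probed, if not solved, with generic inspection tools. Below is an illustrative sketch, not from the episode: permutation importance shows which inputs a trained model leans on, without explaining how it combines them; the dataset and model are placeholders.

```python
# Illustrative sketch: permutation importance is one generic way to probe a
# "black box" -- shuffle one feature at a time and measure how much the
# held-out score drops. Dataset and model are placeholder assumptions.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(result.importances_mean, data.feature_names), reverse=True)
for importance, name in ranked[:5]:
    print(f"{name}: mean score drop {importance:.3f}")
```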
On this episode of AI Australia, we’re excited to be speaking with Kendra Vant. Kendra is currently the principal data scientist at SEEK, and has had an extensive and diverse career in AI/ML and data science (among other things) across insurance, banking, telecommunications, government, gaming, the airline industry, and the job board market. With SEEK being one of the leading Australian companies in the field of AI, Kendra really is someone to pay attention to as the AI landscape unfolds. We’re honoured to have had the opportunity to speak with her about the past, present, and future of AI for the Australian technology community. We discuss a wide range of topics, including: What is involved in Kendra’s role as principal data scientist at SEEK Some of the results Kendra and her team have seen by applying AI to their job search functions How Kendra got into the world of data science and software engineering The role of ethics in AI, and how it plays into the work Kendra is doing with her team at SEEK Some of the issues with bias in the hiring process, and where Kendra sees the main opportunities for removing bias using AI and ML The importance of keeping humans in the loop when it comes to AI initiatives. This helps humans keep machine bias at bay, and vice versa How SEEK goes about finding tried-and-true machine learning algorithms, implementing them, and scaling them. Rather than being the research and development ground for new algorithms, they are more focused on making tested algorithms scale better Kendra’s rule of thumb for data scientists and engineers working together - how many engineers per data scientist, what kind of engineers, etc. How Kendra views the state of data science and machine learning adoption and usage. There’s a lot of hype, and Kendra helps us cut through a lot of that in explaining what’s really going on in the Australian business community Kendra’s thoughts and concerns on the National Health Record The best communities and conferences to be involved with as a data scientist or AI/ML enthusiast
We've transferred our biases to artificial intelligence, and now those machine minds are creating the futures they predict. But there's a way to stop it. In this episode we explore how machine learning is biased, sexist, racist, and prejudiced all around, and we meet the people who can explain why, and are going to try and fix it. --- • Show Notes: www.youarenotsosmart.com -- • The Great Courses: www.thegreatcoursesplus.com/smart -- • Squarespace: www.squarespace.com CODE: SOSMART -- • ZipRecruiter: www.ziprecruiter.com/NOTSOSMART See omnystudio.com/listener for privacy information.
In episode 22 of the audio guide at the intersection of tech, media, business and popular culture we speak with Assistant Professor of Digital Marketing Joanne Tombrakos of New York University about the need to re-educate marketers. What is big tech? Media or tech? And many think machines don't have bias. The issue? They're programmed by humans who do. Plus new music from Vi Mode Inc. Project and Seamus Haji & Mekkah. Connect with us on socials @DisruptiveFM. #DisruptiveFM #dfm #Microsoft #BrandingStrategyInsider #iOgrapher
Addison Snell and Michael Feldman discuss the pitfalls of machine bias in AI and a new study that predicts coming disruption to a substantial percentage of the workforce.
In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science at Oxford University. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here). Show notes: 0:00 - Introduction 1:46 - What is algorithmic decision-making? 4:20 - Isn't all decision-making algorithmic? 6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate 12:02 - Limitations of the COMPAS debate 15:22 - Other examples of unfairness in algorithmic decision-making 17:00 - What is discrimination in decision-making? 19:45 - The mental state theory of discrimination 25:20 - Statistical discrimination and the problem of generalisation 29:10 - Defending algorithmic decision-making from the charge of statistical discrimination 34:40 - Algorithmic typecasting: Could we all end up like William Shatner? 39:02 - Egalitarianism and algorithmic decision-making 43:07 - The role that luck and desert play in our understanding of fairness 49:38 - Deontic justice and historical discrimination in algorithmic decision-making 53:36 - Fair distribution vs Fair recognition 59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making? Relevant Links: Reuben's homepage Reuben's institutional page 'Fairness in Machine Learning: Lessons from Political Philosophy' by Reuben Binns 'Algorithmic Accountability and Public Reason' by Reuben Binns 'It's Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making' by Binns et al 'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm 'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al -- an impossibility result showing that a risk score cannot be calibrated within each group and also equalise false positive and false negative rates across two populations, except when the populations have the same base rate or prediction is perfect. Subscribe to the newsletter
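The Kleinberg et al. trade-off cited above can be demonstrated with a small synthetic sketch: a risk score that is calibrated by construction, applied with one common cutoff to two groups with different base rates, ends up with different false positive and false negative rates. All numbers, group names, and distributional choices below are invented for illustration; this is not the COMPAS data or the paper's own construction.

```python
# Toy sketch of the calibration-vs-error-rate tension: a calibrated score plus
# unequal base rates yields unequal group error rates at a shared cutoff.
# Every number here is a synthetic assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two groups with different base rates of the outcome (e.g. rearrest).
base_rate = {"group_a": 0.5, "group_b": 0.3}

for group, p in base_rate.items():
    # Draw each person's true risk (mean = p), then the outcome from that risk,
    # so the score is calibrated by construction.
    risk = rng.beta(8 * p, 8 * (1 - p), size=n)
    outcome = rng.random(n) < risk
    predicted_high = risk >= 0.5  # same cutoff for both groups

    fpr = np.mean(predicted_high & ~outcome) / np.mean(~outcome)
    fnr = np.mean(~predicted_high & outcome) / np.mean(outcome)
    print(f"{group}: base rate {p:.2f}, FPR {fpr:.2f}, FNR {fnr:.2f}")
```

Running this shows the lower-base-rate group getting a noticeably lower false positive rate, which is the mirror image of the disparity at the centre of the COMPAS debate discussed in the episode.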
In today's episode, host Jaye Pool reacts to the Charlottesville domestic terror attack, and the failure of presidential leadership in response. Jaye also seeks to encourage listeners that among the negativity, there are signs of light and hope for America's future. Citations: Angwin, Julia, Larson, Jeff, Mattu, Surya, and Lauren Kirchner. 2017. “Machine Bias.” Propublica. May 23. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (August 20, 2017) Barry-Jester, Anna Maria, Casselman, Ben, and Dana Goldstein. 2015. “The New Science of Sentencing.” The Marshall Project. August 4. https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing#.FOYRaElsl (August 20, 2017) Jenkins, Jack. 2017. “Meet the Clergy Who Stared Down White Supremacists in Charlottesville.” ThinkProgress. August 16. https://thinkprogress.org/clergy-in-charlottesville-e95752415c3e/ (August 20, 2017) Mahler, Jonathan, and Steve Eder. 2016. “‘No Vacancies' for Blacks: How Donald Trump Got His Start, and Was First Accused of Bias." The New York Times. August 27. https://www.nytimes.com/2016/08/28/us/politics/donald-trump-housing-race.html?mcubz=0 (August 20, 2017) Thompson, Chrissie. 2016. “The Lawsuit Over Donald Trump's Cincy Apartments You May Hear More About.” Cincinnati.com. August 25. http://www.cincinnati.com/story/news/politics/elections/2016/08/25/discrimination-lawsuit-over-donald-trump-cincinnati-apartments/89269132/ (August 20, 2017) Music: Raga Rage composed by Noisy Oyster provided by freesoundtrackmusic.com Opus Number 1 composed by Derrick Deel and Tim Carleton
In our inaugural episode, Max and Phill discuss MongoDB ransomware, and the opacity of algorithms. Produced by Katie Jensen. Show Notes Improve Your Security: Port Scan Yourself How the machine ‘thinks’: Understanding opacity in machine learning algorithms, by Jenna Burrell (Big Data & Society, Jan 6th 2016) Machine Bias - Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner (Pro Publica, May 23rd 2016) Benjamin Walker’s Theory of Everything Macedonian Teens & Fake News Geohot’s self-driving car
0:45 - Introducing Carina C. Zona Website Personal Twitter Callback Women We So Crafty 2:10 - Coding consequences RubyConf 2015 Keynote: “Consequences of an Insightful Algorithm” Slides Code Newbies discussion 6:00 - Examples of consequences Flickr Deep Learning Google Photo 10:50 - Data quality theories 14:05 - Preventable Mistakes and Algorithmic Transparency 17:30 - Predictive Policing and Biased Data “The Reality of Crime-Fighting Algorithms” “Machine Bias” 22:07 - Coder Responsibility Mechanical Turk Google Crowdsource App “Social Network Nextdoor Moves To Block Racial Profiling Online” “raceAhead: How Nextdoor Reduced Racist Postings Using Empathy” 31:35 - Algorithm triggers Eric Meyer: “Inadvertent Algorithmic Cruelty” 37:20 - Fixing a mistake 40:15 - Trusting humans versus trusting machines Facebook Trending Topics Article on leaked documents Former contractor’s experience Trending topic mistakes 44:30 - Considering social consequences 47:30 - Confronting the uncomfortable 50:30 - Fitbit Example “How Data From Wearable Tech Can Be Used Against You In A Court Of Law” “This chicken breast has a surprisingly healthy heart rate, considering it’s dead” OSFeels 2016 Talk by Emily Gorcenski with chicken example Picks: 99 Bottles by Sandi Metz (David) Vivaldi Browser (Saron) Magnetic Sticky Notes (Saron) Oregon Shakespeare Festival (Sam) Ruby Remote Conf Recordings (Charles) Rails Remote Conf (Charles) Webinars (Charles) Books by Howard Zinn (Corina) On Food and Cooking by Harold McGee
Inspired by a recent ProPublica report on racial bias in an algorithm used to predict future criminal behavior, David and Tamler talk about the use of analytic methods in criminal sentencing, sports, and love. Should we use algorithms to inform criminal sentencing or parole decisions? Should couples about to get married take a test that predicts their likelihood of getting divorced? Is there something inherently racist about analytic methods in sports? Plus, David asks Tamler some questions about the newly released second edition of his book A Very Bad Wizard: Morality Behind the Curtain. Links: Machine Bias by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner [propublica.org] Mission Impossible: African-Americans & Analytics by Michael Wilbon [theundefeated.com] A Very Bad Wizard: Morality Behind the Curtain [amazon.com affiliate link to the Kindle version of the 2nd edition. Eight new interviews. And an all-new foreword by Peez.] Paperback version of the 2nd edition (currently only available on the publisher's website) [routledge.com]
The data behind biased algorithms in criminal justice.