How does artificial intelligence actually work, and why is it fatal when we program human prejudices into self-learning software? Kenza Ait Si Abbou Lyadini explains this to us. The engineer is an expert in artificial intelligence and works at Telekom as Senior Manager Robotics and Artificial Intelligence. If you have questions or suggestions about this episode, feel free to write to us: shelikestech@ndr.de You can also tell us in our survey how you like the podcast. What are we doing well, and what should we do better? The survey takes only 10 minutes ;-) https://umfrage-ndr.limequery.com/767564?lang=de

And here are the show notes for episode #5 on AI:

We need more diversity in AI development now! | Kenza Ait Si Abbou Lyadini | TEDxHamburg
https://www.youtube.com/watch?v=5AgIduLQdqg

Documentary "Face it" (2019)
https://www.tagesspiegel.de/kultur/doku-face-it-im-kino-gesichtserkennung-fuer-alle/24695780.html

Racism through a Google algorithm
https://algorithmwatch.org/en/story/google-vision-racism/

Russian protesters at opposition demonstrations identified via facial recognition
https://netzpolitik.org/2017/russische-demonstranten-per-gesichtserkennungs-software-identifiziert/

Arrest in Detroit based on faulty facial recognition
https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig?t=1597237602629

Clearview.ai: When billionaires use facial recognition as a spy toy
https://www.sueddeutsche.de/digital/clearview-gesichtserkennung-spionage-datenschutz-1.4835272

Clearview AI refuses to cooperate with Germany's data protection authority
https://netzpolitik.org/2020/gesichtserkennung-clearview-ai-verweigert-zusammenarbeit-mit-deutscher-datenschutzaufsicht/

FindFace: The recognition machine from Russia
https://www.spiegel.de/netzwelt/web/findface-app-mit-gesichtserkennung-loest-hype-in-russland-aus-a-1092951.html

PimEyes: A Polish company is abolishing our anonymity
https://netzpolitik.org/2020/gesichter-suchmaschine-pimeyes-schafft-anonymitaet-ab/

Fooling image recognition with adversarial learning, two examples (a short code sketch follows after these show notes):
https://www.youtube.com/watch?v=qPxlhGSG0tc&feature=emb_logo
https://www.theregister.com/2017/11/06/mit_fooling_ai/

Interference patterns through pixel manipulation
https://www.heise.de/hintergrund/Pixelmuster-irritieren-die-KI-autonomer-Fahrzeuge-4852995.html

Fawkes: protecting your photos from facial recognition
http://sandlab.cs.uchicago.edu/fawkes/
https://www.nytimes.com/2020/08/03/technology/fawkes-tool-protects-photos-from-facial-recognition.html
https://www.youtube.com/watch?v=AWrI0EuYW6A

Apple Card: Female, wife, not creditworthy?
https://www.zeit.de/digital/datenschutz/2019-11/apple-card-kreditvergabe-diskriminierung-frauen-algorithmen-goldman-sachs/
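The adversarial learning links above all exploit the same weakness: image classifiers react strongly to tiny, carefully chosen pixel changes that humans barely notice. As a rough illustration, here is a minimal sketch of the fast gradient sign method (FGSM), one classic way to construct such perturbations. It is not claimed to be the technique behind any particular linked story; PyTorch and torchvision are assumed, and the commented usage tensors are hypothetical.

```python
# Minimal FGSM sketch: nudge an image so a classifier misreads it,
# while the change stays nearly invisible to humans.
# Assumes PyTorch/torchvision; model choice and usage values are placeholders.
import torch
import torchvision.models as models

# Any pretrained classifier works here (pretrained= may warn on new torchvision).
model = models.resnet18(pretrained=True).eval()

def fgsm(image, true_label, epsilon=0.01):
    """Return an adversarial copy of `image` (shape [1, 3, H, W], values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage:
# x = preprocess(some_photo).unsqueeze(0)   # [1, 3, 224, 224]
# y = torch.tensor([281])                   # e.g. ImageNet class "tabby cat"
# x_adv = fgsm(x, y)
# model(x_adv).argmax() often differs from y, though x_adv looks unchanged.
```

Tools like Fawkes push in the opposite direction: they add perturbations to your own photos so that face recognition systems trained on them learn a distorted representation of you.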
Adversarial Learning is back from hiatus! Our guest is famous data scientist Josh Wills. We discuss why Josh is a famous data scientist, what it's like working at Slack, data science conferences, NLP's "ImageNet moment", whether Joel should remove the MapReduce chapter from the 2nd edition of Data Science from Scratch, and which is the best Rush album. Please listen to it.
Andrew is Chief Analytics Officer for Analytics2Go, building apps in areas ranging from customer experience to industrial and operational use cases. He is Chair of the Apache Mahout machine learning library and co-host of the Adversarial Learning podcast. His work history spans consulting and data engineering, including the Apache Software Foundation, Lucidworks, and Accenture.
Women in AI is a biweekly podcast from RE•WORK, meeting with leading female minds in AI, Deep Learning, and Machine Learning. We speak to CEOs, CTOs, Data Scientists, Engineers, Researchers, and Industry Professionals to learn about their cutting-edge work and technological advancements, as well as their impact on AI for social good and diversity in the workplace.
Adversarial Learning is back! In this long-delayed episode (thanks, technical difficulties) we are joined by data scientist Schaun Wheeler to discuss our favorite topic, data ethics. Highlights include:
* Schaun's Medium post "An ethical code can't be about ethics"
* Do we need a "Hippocratic Oath" for data science?
* How to hire data scientists who won't steal people's kidneys
* Why Joel has a Values Mug
* The Manifesto for Data Practices
* Is this all secretly a competency problem?
* Skin in the Game
* Are data ethics issues really just business ethics issues?
Please listen to it! (More episodes coming soon!)
This week, we dive into machine learning bias and fairness from a social and technical perspective with machine learning research scientists Timnit Gebru from Microsoft and Margaret Mitchell (aka Meg, aka M.) from Google. They share with Melanie and Mark about ongoing efforts and resources to address bias and fairness, including diversifying datasets, applying algorithmic techniques, and expanding research team expertise and perspectives. There is no simple solution to the challenge, and they give insights on what work is in progress in the broader community and where it is going.

Timnit Gebru
Timnit Gebru works in the Fairness, Accountability, Transparency and Ethics (FATE) group at Microsoft Research's New York lab. Prior to joining Microsoft Research, she was a PhD student in the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and working on computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. The Economist and others have recently covered part of this work. She is currently studying how to take dataset bias into account while designing machine learning algorithms, and the ethical considerations underlying any data mining project. As a cofounder of the group Black in AI, she works to both increase diversity in the field and reduce the impact of racial bias in the data.

Margaret Mitchell
M. Mitchell is a Senior Research Scientist in Google's Research & Machine Intelligence group, working on artificial intelligence. Her research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence toward positive goals. Margaret's work combines machine learning, computer vision, natural language processing, social media, and insights from cognitive science. Before Google, Margaret was a founding member of Microsoft Research's "Cognition" group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

Cool things of the week
* GPS/Cellular Asset Tracking using Google Cloud IoT Core, Firestore and Mongoose OS blog
* GPUs in Kubernetes Engine now available in beta blog
* Announcing Spring Cloud GCP - integrating your favorite Java framework with Google Cloud blog

Interview
* PAIR | People+AI Research Initiative site
* FATE | Fairness, Accountability, Transparency and Ethics in AI site
* FAT* Conference site & resources
* Joy Buolamwini site
* Algorithmic Justice League site
* ProPublica Machine Bias article
* AI Ethics & Society Conference site
* Ethics in NLP Conference site
* FACETS site
* TensorFlow Lattice repo

Sample papers on bias and fairness:
* Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification paper
* Facial Recognition Is Accurate, if You're a White Guy article
* Mitigating Unwanted Biases with Adversarial Learning paper
* Improving Smiling Detection with Race and Gender Diversity paper
* Fairness Through Awareness paper
* Avoiding Discrimination through Causal Reasoning paper
* Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings paper (a toy sketch of the analogy arithmetic follows below)
* Satisfying Real-world Goals with Dataset Constraints paper
* Axiomatic Attribution for Deep Networks paper
* Monotonic Calibrated Interpolated Look-Up Tables paper
* Equality of Opportunity in Machine Learning blog

Additional links:
* Bill Nye Saves the World Episode 3: Machines Take Over the World (includes Margaret Mitchell) site
* "We're in a diversity crisis": Black in AI's founder on what's poisoning the algorithms in our lives article
* Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru TWiML & AI podcast
* Security and Safety in AI: Adversarial Examples, Bias and Trust with Moustapha Cisse TWiML & AI podcast
* How we can build AI to help humans, not hurt us TED
* PAIR Symposium conference

Question of the week
"Is there a GCP service that's Cloud Identity-Aware Proxy, except for a static site that you host via Cloud Storage?" Answer between Mark & KF:
* Cloud Identity-Aware Proxy site & docs
* Cloud Storage site & docs
* Hosting a Static Website on Cloud Storage site
* Google App Engine site & docs
* weasel repo

Where can you find us next?
Melanie will be at FAT* in New York in February. Mark will be at the Game Developers Conference (GDC) in March.
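The "Man is to Computer Programmer as Woman is to Homemaker?" paper listed above turns on a simple observation: word embeddings answer analogy questions via vector arithmetic, and embeddings trained on ordinary text answer them in stereotyped ways. Here is a toy sketch of that arithmetic; the 3-d vectors below are invented for illustration and deliberately mirror the bias the paper measured in real word2vec embeddings, so they are not real data.

```python
# Toy sketch of the analogy arithmetic behind "man : programmer :: woman : ?".
# Vectors are invented for illustration; real word2vec vectors are ~300-d.
import numpy as np

embeddings = {
    "man":        np.array([ 0.9, 0.1, 0.3]),
    "woman":      np.array([-0.9, 0.1, 0.3]),
    "programmer": np.array([ 0.7, 0.8, 0.1]),  # note the "male" lean on axis 0
    "homemaker":  np.array([-0.8, 0.7, 0.2]),  # and the "female" lean here
    "engineer":   np.array([ 0.6, 0.9, 0.1]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by the nearest cosine neighbor of b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Exclude the query words themselves, as analogy solvers conventionally do.
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "programmer", "woman"))  # -> "homemaker" with these toy vectors
```

The paper's debiasing approach, roughly, identifies a gender direction (the vector from "man" toward "woman") and projects it out of words that should be gender-neutral, such as occupation terms.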
It's Back to School time at Adversarial Learning! Topics of discussion include:
* OUR SPONSOR: the Metis Demystifying Data Science Conference (at which Joel is speaking, please listen to it)
* Sudbury education
* John Holt
* times tables
* whether textbook piracy is the new stealing from the library
* Neil Tyson's "In School" cycle of tweets
* how to teach curiosity
* why math is a "hard" skill and people skills are "soft" skills
* when factorizing matrices is easy and dealing with people is hard
* whether and how our schools should be producing more data scientists
Please listen to it.
This episode is from the Data Skeptic archives. I spoke to Anh Nguyen back in 2015 about his paper "Deep Neural Networks are Easily Fooled". This is another great example of adversarial learning, so we wanted to re-release this episode for anyone who missed it or wants a refresher.
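For a flavor of what Nguyen and his coauthors showed: you can start from pure noise and optimize the pixels until a trained network is nearly certain it sees a specific class, even though the image remains meaningless to a human. The paper used evolutionary search as well as gradient methods; the sketch below is a gradient-ascent variant, assuming PyTorch/torchvision and skipping the ImageNet input normalization a careful reproduction would apply.

```python
# Sketch of the "fooling images" idea from Nguyen et al. (2015): optimize random
# noise until a trained classifier reports high confidence in a chosen class.
# Assumes PyTorch/torchvision; class index and hyperparameters are illustrative.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
target_class = 1  # e.g. ImageNet "goldfish"; any class index works

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # pure noise to start
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(x)
    # Maximize the target class's logit (i.e., minimize its negative).
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        x.clamp_(0, 1)  # keep pixels in a valid image range

with torch.no_grad():
    confidence = torch.softmax(model(x), dim=1)[0, target_class]
print(f"classifier confidence in class {target_class}: {confidence.item():.2%}")
```

The unsettling part is not that the optimization succeeds, but that the resulting image typically looks like television static while the network reports near-total certainty.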