with Audrey Watters | Episode 903 | Tech Tool Tuesday
Are we racing toward an AI future without asking the right questions? Author and ed-tech critic Audrey Watters joins me to show teachers how to hit pause, get thoughtful, and keep classroom relationships at the center.
Sponsored by Rise Vision: Did you know the same solution that powers my AI classroom also drives campus-wide emergency alerts and digital signage? See how Rise Vision can save your school thousands: RiseVision.com/10MinuteTeacher
Highlights Include
Why “human first” still beats the newest AI tool: Audrey explains how relationships drive real learning.
Personalized learning myths busted: How algorithmic “solutions” can isolate students.
Practical guardrails for AI: Three reflection questions every teacher should ask before hitting “assign.”
In this powerful episode of For the Love of History, host TC is joined by scholar and author Dr. Anita Say Chan to explore the unsettling historical roots of modern data science and artificial intelligence. Drawing from her groundbreaking book Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future, Anita uncovers how today's predictive algorithms trace back to 19th-century eugenics. Yes, really. Statistical regression—the backbone of online recommendation engines—was developed by a eugenicist. And that's just the beginning. We unpack how algorithmic bias, data colonialism, and techno-eugenics operate in today's platforms—from Facebook's role in global violence to the AI industry's resistance to regulation. If you're curious about the intersections of technology, race, gender, and power, this is the episode you've been waiting for.
Today, we're stepping into one of the most urgent conversations in tech right now: bias in artificial intelligence. Tracey Spicer AM is a Walkley Award-winning journalist, author, and longtime activist for gender equity. In this episode, she unpacks the unseen biases coded into the technologies we use every day—and what happens when we leave them unchecked. Drawing on years of research for her latest book Man-Made, Tracey explores the ethical challenges and opportunities in AI development, and why we all have a role to play in shaping more equitable outcomes.
In this episode, Tracey shares:
How gender, race, age, and ability bias are embedded into AI systems
The real-world impacts of biased tech—from hiring software to image generators
Why ‘human in the loop' systems are critical for ethical AI
How organisations can audit their data, clean up algorithms, and lead responsibly
Host: Jenelle McMaster, Deputy CEO and People & Culture Leader at EY
Guest: Tracey Spicer AM, journalist, author, and AI ethics advocate
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Jessica Talisman, a senior information architect deeply immersed in the worlds of taxonomy, ontology, and knowledge management. The conversation spans the evolution of libraries, the shifting nature of public and private access to knowledge, and the role of institutions like the Internet Archive in preserving digital history. They also explore the fragility of information in the digital age, the ongoing battle over access to knowledge, and how AI is shaping—and being shaped by—structured data and knowledge graphs. To connect with Jessica Talisman, you can reach her via LinkedIn. Check out this GPT we trained on the conversation!
Timestamps
00:05 – Libraries, Democracy, Public vs. Private Knowledge: Jessica explains how libraries have historically shifted between public and private control, shaping access to knowledge and democracy.
00:10 – Internet Archive, Cyberattacks, Digital Preservation: Stewart describes visiting the Internet Archive post-cyberattack, sparking a discussion on threats to digital preservation and free information.
00:15 – AI, Structured Data, Ontologies, NIH, PubMed: Jessica breaks down how AI trains on structured data from sources like NIH and PubMed but often lacks alignment with authoritative knowledge.
00:20 – Linked Data, Knowledge Graphs, Semantic Web, Tim Berners-Lee: They explore how linked data enables machines to understand connections between knowledge, referencing the vision behind the semantic web.
00:25 – Entity Management, Cataloging, Provenance, Authority: Jessica explains how libraries are transitioning from cataloging books to managing entities, ensuring provenance and verifiable knowledge.
00:30 – Digital Dark Ages, Knowledge Loss, Corporate Control: Stewart compares today's deletion of digital content to historical knowledge loss, warning about the fragility of digital memory.
00:35 – War on Truth, Book Bans, Algorithmic Bias, Censorship: They discuss how knowledge suppression—from book bans to algorithmic censorship—threatens free access to information.
00:40 – AI, Search Engines, Metadata, Schema.org, RDF: Jessica highlights how AI and search engines depend on structured metadata but often fail to prioritize authoritative sources.
00:45 – Power Over Knowledge, Open vs. Closed Systems, AI Ethics: They debate the battle between corporations, governments, and open-source efforts to control how knowledge is structured and accessed.
00:50 – Librarians, AI Misinformation, Knowledge Organization: Jessica emphasizes that librarians and structured knowledge systems are essential in combating misinformation in AI.
00:55 – Future of Digital Memory, AI, Ethics, Information Access: They reflect on whether AI and linked data will expand knowledge access or accelerate digital decay and misinformation.
Key Insights
The Evolution of Libraries Reflects Power Struggles Over Knowledge: Libraries have historically oscillated between being public and private institutions, reflecting broader societal shifts in who controls access to knowledge. Jessica Talisman highlights how figures like Andrew Carnegie helped establish the modern public library system, reinforcing libraries as democratic spaces where information is accessible to all. However, she also notes that as knowledge becomes digitized, new battles emerge over who owns and controls digital information.
The Internet Archive Faces Systematic Attacks on Knowledge: Stewart Alsop shares his firsthand experience visiting the Internet Archive just after it had suffered a major cyberattack. This incident is part of a larger trend in which libraries and knowledge repositories worldwide, including those in Canada, have been targeted. The conversation raises concerns that these attacks are not random but part of a broader, well-funded effort to undermine access to information.
AI and Knowledge Graphs Are Deeply Intertwined: AI systems, particularly large language models (LLMs), rely on structured data sources such as knowledge graphs, ontologies, and linked data. Talisman explains how institutions like the NIH and PubMed provide openly available, structured knowledge that AI systems train on. Yet, she points out a critical gap—AI often lacks alignment with real-world, authoritative sources, which leads to inaccuracies in machine-generated knowledge.
Libraries Are Moving From Cataloging to Entity Management: Traditional library systems were built around cataloging books and documents, but modern libraries are transitioning toward entity management, which organizes knowledge in a way that allows for more dynamic connections. Linked data and knowledge graphs enable this shift, making it easier to navigate vast repositories of information while maintaining provenance and authority.
The War on Truth and Information Is Accelerating: The episode touches on the increasing threats to truth and reliable information, from book bans to algorithmic suppression of knowledge. Talisman underscores the crucial role librarians play in preserving access to primary sources and maintaining records of historical truth. As AI becomes more prominent in knowledge dissemination, the need for robust, verifiable sources becomes even more urgent.
Linked Data is the Foundation of Digital Knowledge: The conversation explores how linked data protocols, such as those championed by Tim Berners-Lee, allow machines and AI to interpret and connect information across the web. Talisman explains that institutions like NIH publish their taxonomies in RDF format, making them accessible as structured, authoritative sources. However, many organizations fail to leverage this interconnected data, leading to inefficiencies in knowledge management.
Preserving Digital Memory is a Civilization-Defining Challenge: In the digital age, the loss of information is more severe than ever. Alsop compares the current state of digital impermanence to the Dark Ages, where crucial knowledge risks disappearing due to corporate decisions, cyberattacks, and lack of preservation infrastructure. Talisman agrees, emphasizing that digital archives like the Internet Archive, WorldCat, and Wikimedia are foundational to maintaining a collective human memory.
Jameela Jamil joins us for a conversation about her rise to fame, the dangers of divisive echo chambers, and her decision to reject harmful beauty narratives. With her signature wit and vulnerability, Jameela dives into toxic beauty standards, the destructive anti-aging narrative that has plagued women for generations, and the deep-rooted values that inspire her to use her platform as a force for positive change.
We hope you're enjoying our past season rebroadcasts so far. It's been fun to take this trip down memory lane to revisit some of the episodes that you all enjoyed the most. We're on to season 3, and Qudsiya's conversation with Lydia X. Z. Brown, a disability justice advocate and activist, who has dedicated their life and resources to combating injustice, oppression, and violence in all its forms. This is another foundational episode Qudsiya often recommends to listeners who want to understand the concept of intersectionality. Lydia breaks it all down for us in this episode, and applies the concept of intersectionality to the context of algorithmic bias, which is a hot topic these days with the rise of artificial intelligence.
Visit our website for transcripts.
Subscribe to Qudsiya's Substack, Getting Down To It.
Support the team behind the podcast with a donation.
Let us know what you think with a comment or review on Apple Podcasts.
Algorithms influence our lives far more than we realize: from our music playlists to our purchasing decisions, from our Internet searches to the way we consume media. But what happens behind these recommendations? What are the ethical risks, and how can we develop our critical thinking so that we remain in control of our choices? This episode sheds light on the growing role of algorithms and the questions they raise in our everyday digital lives.
Sources cited in the episode:
Data & Society - Report on Algorithmic Bias (link: https://datasociety.net/research/algorithmic-bias/)
CNIL - Understanding algorithms (link: https://www.cnil.fr/fr/algorithmes)
Netflix - Derrière nos écrans de fumée (The Social Dilemma) (link: https://www.netflix.com/title/81254224)
UNESCO report on algorithms and artificial intelligence (link: https://en.unesco.org/artificial-intelligence)
----------------------------------
DSI et des Hommes is a podcast hosted by Nicolas BARD that explores how digital technology can be put at the service of people, and not the other way around. With the mission of making digital accessible to everyone, each episode dives into the experiences of leaders, entrepreneurs, and experts to understand how digital transformation impacts the ways we lead, collaborate, and grow. Subscribe to discover inspiring discussions and practical advice for navigating an ever more digital world.
Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
Comment on the Show by Sending Mark a Text Message.
This episode is part of my initiative to provide access to important court decisions impacting employees in an easy-to-understand conversational format using AI. The speakers in the episode are AI generated and frankly sound great to listen to. Enjoy!
Can technology uphold fairness, or is it silently perpetuating bias? Discover the complex world of AI in the hiring process as we unravel the case of Derek Mobley versus Workday Inc. Mobley, a Black man over 40 with mental health conditions, challenges the algorithms that he claims have unjustly barred him from over 100 job opportunities. Despite the court's decision not to categorize Workday as an employment agency, the episode prompts a pivotal discussion about the responsibilities HR tech companies might bear when their software influences employment outcomes. We grapple with the concept of disparate impact discrimination and what it means when unintentional practices result in a skewed playing field for protected groups.
From the courtrooms to the broader tech landscape, the implications of this case ripple across the HR industry and beyond. We weigh the necessity for transparency, accountability, and fairness in algorithmic decision-making while acknowledging the delicate balance with innovation. Listen as we delve into the potential for increased scrutiny and regulation of HR tech companies, and encourage job seekers to critically engage with the data that drives these systems. Join us in exploring how technology shapes our employment landscape and what needs to change to ensure it does so equitably.
If you enjoyed this episode of the Employee Survival Guide please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player such as Apple Podcasts. Leaving a review will inform other listeners that you found the content on this podcast important in the area of employment law in the United States. For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.
Disclaimer: For educational use only, not intended to be legal advice.
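The disparate impact concept discussed above is often screened for in practice with the "four-fifths rule," which compares selection rates across demographic groups. Below is a minimal, hypothetical sketch of that check in Python; the group labels and counts are illustrative only and are not drawn from the Mobley v. Workday case.

```python
# Minimal sketch of a four-fifths (80%) rule screen for disparate impact.
# Group labels and counts are hypothetical, purely for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    A ratio below 0.8 is commonly treated as preliminary evidence of
    adverse (disparate) impact and a signal to investigate further.
    """
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

if __name__ == "__main__":
    rates = {
        "group_a": selection_rate(selected=48, applicants=100),  # 0.48
        "group_b": selection_rate(selected=30, applicants=100),  # 0.30
    }
    for group, ratio in four_fifths_check(rates).items():
        flag = "possible disparate impact" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

In this made-up example, group_b's selection rate is about 0.63 of group_a's, well under the 0.8 threshold, which is the kind of gap that would prompt further statistical and legal scrutiny.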
Brad M. Thompson, Partner, Epstein Becker & Green PC, Chris Provan, Managing Director & Senior Principal Data Scientist, Mosaic Data Science, and Sam Tyner-Monroe, Ph.D., Managing Director of Responsible AI, DLA Piper LLP (US), discuss how to analyze and mitigate the risk of bias in artificial intelligence through the lens of data science. They cover HHS' Section 1557 Final Rule as it pertains to algorithmic bias, examples of biased algorithms, the role of proxies, stratification of algorithms by risk, how to test for biased algorithms, how compliance programs can be adapted to meet the unique needs of algorithmic bias, the NIST Risk Management Framework, whether it's possible to ever get rid of bias, and how explainability and transparency can mitigate bias. Brad, Chris, and Sam spoke about this topic at AHLA's 2024 Complexities of AI in Health Care in Chicago, IL.To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.
Leaders Of Transformation | Leadership Development | Conscious Business | Global Transformation
What do you need to know about data privacy and generative AI? In this informative episode we explore the dynamic world of data privacy and generative AI with Amruta Moktali, Chief Product Officer at Skyflow - the world's first and only data privacy vault delivered as an API. Amruta's impressive career trajectory spans leading roles at industry giants like Microsoft, Salesforce, and Cleo, culminating in her current position where she champions data privacy solutions. Listen as she shares invaluable insights on the importance of safeguarding original data, the impact of generative AI, and the growing presence of women in technology. Join us as Amruta unpacks the intricacies of data privacy vaults, offers strategies to manage data responsibly, and discusses the challenges unique to remote work environments. Whether you're a business leader, tech enthusiast, or advocate for diversity in the workplace, this episode offers a wealth of knowledge and actionable advice.
What We Discuss in This Episode
Amruta's journey from product design to tech leadership.
The importance of data privacy in the era of generative AI.
Major challenges organizations face in maintaining control over shared data.
The balance between using original data and ensuring consent and compliance with privacy regulations.
Overview of how Skyflow's data privacy vault secures sensitive information.
The impact of diversity and inclusion on tech innovation and decision-making.
Strategies for companies to ensure AI models are trained with unbiased data.
Risks associated with data breaches and proactive prevention measures.
The representation of women in tech and encouraging more diverse talent in STEM fields.
Podcast Highlights
0:00 – How Amruta found her passion in data privacy.
5:32 - Protecting your data in the context of generative AI.
8:14 - Privacy policies, consent, and the originality of data.
11:29 - Challenges of maintaining control over shared data.
14:55 - Skyflow's solution for securing sensitive data.
19:22 - Diversity and inclusion's impact on tech innovation.
23:17 - Ensuring equitable training data for AI models.
27:06 - Proactive measures to prevent data breaches.
30:45 - Encouraging diverse talent in tech.
Favorite Quotes
On Data Security: “Privacy policies and regulations are not just boxes to tick—they're about safeguarding the originality and integrity of data.”
On Diversity in Tech: “It's not enough to have a seat at the table; genuine inclusion means actively listening and valuing diverse perspectives.”
On Proactive Measures: “The repercussions of a data breach are far-reaching; investing in proactive security measures is crucial for protecting both reputation and trust.”
Episode Show Notes and Resources: https://leadersoftransformation.com/podcast/business/509-data-privacy-ai-and-women-in-tech-with-amruta-moktali/
Check out our complete library of episodes and other leadership resources here: https://leadersoftransformation.com
Algorithmic bias has long been recognized as a key problem affecting decision-making processes that integrate artificial intelligence (AI) technologies. The increased use of AI in making military decisions relevant to the use of force has sustained such questions about biases in these technologies and in how human users programme with and rely on data based on hierarchized socio-cultural norms, knowledges, and modes of attention. In this post, Dr Ingvild Bode, Professor at the Center for War Studies, University of Southern Denmark, and Ishmael Bhila, PhD researcher at the “Meaningful Human Control: Between Regulation and Reflexion” project, Paderborn University, unpack the problem of algorithmic bias with reference to AI-based decision support systems (AI DSS). They examine three categories of algorithmic bias – preexisting bias, technical bias, and emergent bias – across four lifecycle stages of an AI DSS, concluding that stakeholders in the ongoing discussion about AI in the military domain should consider the impact of algorithmic bias on AI DSS more seriously.
As AI continues to permeate various aspects of society, its impact on decision-making, bias, and future technological developments is complex. How can we navigate the challenges posed by AI, particularly when it comes to fairness and bias in algorithms? What insights can be drawn from the intersection of economics, computer science, and behavioral studies to guide the responsible development and use of AI?
In this episode, Sendhil Mullainathan, a prominent economist and professor, delves into these pressing issues. He shares his journey from computer science to behavioral economics and discusses the role of AI in shaping the future of decision-making and societal structures. Sendhil provides a nuanced view of algorithmic bias, its origins, and the challenges in mitigating it. He also explores the potential and pitfalls of AI in healthcare and policymaking, offering insights into how we can harness AI for the greater good while being mindful of its limitations.
0:00 - Start
1:51 - Introducing Sendhil
14:20 - Algorithmic bias
29:20 - Handling Bias
41:57 - AI and Decision Making
57:01 - AI in our Future
1:02:29 - Conclusion and the last question
Data Bytes listeners get an exclusive discount to join Women in Data. View discount here.
(00:00:00) Intro
(00:00:20) AI in Hiring
(00:00:37) Bias in Automation
(00:01:04) Welcome to Podcast
(00:01:09) Guest Introduction
(00:01:16) Journalism to AI
(00:01:26) First Encounter with AI
(00:02:10) Job Interview with Robot
(00:02:48) Research and Rabbit Hole
(00:05:24) Hiring Tools Bias
(00:05:51) Systemic Hiring Issues
(00:07:04) Human Bias in Hiring
(00:08:09) Bias in AI Tools
(00:13:26) Echo Chamber Effect
(00:13:58) Workplace Surveillance
(00:14:12) Amazon Hiring Example
(00:22:04) AI and Employee Surveillance
(00:24:01) Stress from Surveillance
(00:24:38) No Privacy on Work Computer
(00:25:07) Tools to Track Activity
(00:27:45) Productivity Theater
(00:28:19) Meaningful Productivity
(00:31:24) Tools for Flight Risk
(00:35:45) Need for Transparency
(00:40:11) Suggestions for Job Seekers
(00:41:07) Forced Consumerism
(00:43:02) Journalistic Role
(00:43:15) Outro
--- Support this podcast: https://podcasters.spotify.com/pod/show/women-in-data/support
This week, South Korea is hosting a significant gathering focused on the risks and regulation of artificial intelligence, building on the work started at last year's inaugural AI safety conference in Britain. The Seoul summit aims to mitigate the threats posed by cutting-edge artificial intelligence systems, including algorithmic bias and existential risks to humanity. Delegates from over two dozen countries will attend, including the US and China, to discuss how to address the risks of AI and set shared standards to manage it. The meeting will feature updates from leading AI companies and digital ministers sharing best practices to protect society from AI's negative impacts. --- Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message
Last week, states parties met for the first session of the Group of Governmental Experts (GGE) on lethal autonomous weapon systems (LAWS) in 2024. This debate featured the GGE's most substantive discussion to date about bias under the topic “risk mitigation and confidence building”, including around a working paper dedicated to bias by Canada, Costa Rica, Germany, Ireland, Mexico, and Panama. In this post, Dr. Ingvild Bode, Associate Professor at the Center for War Studies (University of Southern Denmark) argues that bias is as much a social as a technical problem and that addressing it therefore requires going beyond technical solutions. She holds that the risks of algorithmic bias need to receive more dedicated attention as the GGE's work turns towards thinking around operationalisation. These arguments are based on the author's presentation at the GGE side event “Fixing Gender Glitches in Military AI: Mitigating Unintended Biases and Tackling Risks” organised by UNIDIR on 6 March 2024.
Dive into the intricate world of trustworthy AI in this enlightening episode. Discover the multifaceted nature of trustworthiness, from accuracy and reliability to fairness and transparency. Explore the methodologies, technologies, and industry practices shaping trustworthy AI systems. Learn from real-world case studies and envision the promising future of AI that's not just intelligent but also trustworthy. Join us as we unravel the importance of trust in AI for its broader acceptance and effectiveness.
----------
Resources used in this episode:
In AI We Trust: Ethics, Artificial Intelligence, and Reliability [Link]
The relationship between trust in AI and trustworthy machine learning technologies [Link]
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [Link]
Trustworthy Artificial Intelligence: A Review [Link]
Blockchain for explainable and trustworthy artificial intelligence [Link]
Trustworthy AI in the Age of Pervasive Computing and Big Data [Link]
From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems [Link]
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [Link]
Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems [Link]
Trustworthy AI: From Principles to Practices [Link]
Support the Show.
Keep AI insights flowing – become a supporter of the show! Click the link for details.
Dr. Rob Eschmann, Associate Professor of Social Work at Columbia University, discusses his latest book When the Hood Comes Off: Racism and Resistance in the Digital Age (University of California Press). We cover how social media works to unmask everyday experiences of racism, and how this affects student life at American universities. Dr. Eschmann also shares his research on social media, racial microaggressions, and Black Twitter; thoughts on TikTok and algorithmic bias; and how resisting racism requires engaging in conversation.
As the pandemic made a doctor visit as easy as a Zoom call and computer vision proved able to distinguish a benign blemish from something more worrisome, guest Eleni Linos, MD, DrPH, grew fascinated with the many ways digital technologies will impact all of medicine, not just her specialty, dermatology. She now believes the future of digital health is the future of health, period. But much work remains to ensure those benefits extend to every sector of society. Linos previews the future of digital health for host and fellow physician Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.
Contact Links:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads or Twitter/X
Connect with School of Engineering >>> Twitter/X
Chapters:
(00:00) Introduction: Russ Altman introduces Professor Eleni Linos and they discuss a future where digital health encompasses all aspects of healthcare and how we have moved towards that.
(02:10) Defining Digital Health: The challenge of defining digital health and envisioning a future where it integrates seamlessly into healthcare without differentiation.
(03:33) Dermatology and Digital Health: Eleni explains her interest in digital tools for dermatology, how they have been applied in dermatology and why they are useful.
(06:41) Challenges in Examining Diverse Skin Types: Addressing challenges in dermatological exams for patients with diverse skin tones and backgrounds.
(09:05) Impact on Patients and Health Disparities: Assessing patient reactions and benefits related to remote interactions, and studying health disparities concerning age, ethnicity, and technology literacy.
(10:56) LLMs, Digital Health, & Bias: How large language models (LLMs) like ChatGPT are used in digital health, the biases they carry, and how Dr. Linos is working to reduce them.
(15:24) Empathy and AI: Dr. Linos tells a personal story about empathy demonstrated by ChatGPT, and reflects on the potential of AI to enhance patient interactions and care.
(18:47) Social Media in Public Health: Insights into leveraging social media for public health campaigns, the strategies used to influence behavioral changes in specific demographics, and how it was employed during COVID.
(24:33) Challenges in Digital Medicine Education: Exploring the challenges and opportunities in preparing future clinicians for a digital medicine-infused future.
(28:20) Stanford Center for Digital Health: The vision and purpose of the Center for Digital Health at Stanford, emphasizing the collaboration between academia, tech companies, and a global perspective to tackle future health challenges.
Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads or Twitter/X
Connect with School of Engineering >>> Twitter/X
EPISODE 1913: In this KEEN ON show, Andrew talks to new media artist Eryk Salvaggio who sifts through the debris of an AI age in which we can no longer trust anything we see.
Eryk Salvaggio is an interdisciplinary design researcher and new media artist. His work explores emerging technologies through a critically engaged lens, testing their mythologies and narratives against their impacts on social and cultural ecosystems. His work, which focuses on generativity and artificial intelligence, often exposes the ideologies embedded into technologies. His work has been curated into film and music festivals, gallery installations, and conferences (such as DEFCON 31 and SXSW). The work interrogates generative AI through a blend of cybernetics, visual culture & media theory, with a critique grounded in resistance and creative misuse, highlighting the gaps that emerge between the analog and digital, such as datasets and the world they claim to represent. Eryk has since worked with partners including AIxDesign's Story & Code program, the AI Village at DEFCON 31, Space10, the Australian National University, the Swiss National Science Foundation, the Wikimedia Foundation, the Internet Archive, and the National Gallery of Australia. His work has been published in academic journals such as Leonardo, Communications of the ACM, IMAGE, Patterns, and by art publishers including DAHJ Gallery, the Furtherfield gallery (London), Turbulence (Boston), Rhizome (New York) and 10th Floor Design Studios (San Francisco). His artwork has been included in pieces with the BBC4, The New York Times, ArtForum, NBC News, Neural, Dirty, and Mute Magazine. His work has been exhibited at Michigan State University Science Museum, the UN Internet Governance Forum, Eyebeam, CalArts, Brown University, Turbulence, The Internet Archive, and in books including Jon Ippolito & Joline Blais' At the Edge of Art, Alex Galloway's Protocol: How Control Exists After Decentralization, and Peter Langford's Image & Imagination. He has presented talks, keynotes and works at SXSW, DEFCON 31, the Systems Research & Design Conference (RSD10&11), the Advances in Systems Sciences and Systems Practice Conference (2022), Melbourne Design Week (2021), MIT Press (2021), the University of St. Gallen (2018), California College of the Arts (2018, 2019, 2020), the University of Maine, RightsCon (2020), and Gensler San Francisco (2017). As a Wikipedia Visiting Scholar at Brown University, he created the article on Algorithmic Bias in 2016. Eryk has taught at the Elisava Barcelona School of Design and Engineering, RIT, and Bradley Universities, and has given talks or lectures at NYU, the University of Cambridge, Aarhus, the University of Copenhagen, and Northeastern. He holds a Masters in Media and Communication from the London School of Economics and a Masters in Applied Cybernetics from the Australian National University. He earned two concurrent undergraduate degrees, in New Media and Journalism, from the University of Maine, where he was listed as visiting faculty as an undergraduate based on his early interactive, online net.art work.
Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE.
Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.
Guest: Dr. Bruce Y Lee, Executive Director of PHICOR (Public Health Informatics, Computational, and Operations Research) [@PHICORteam]
On LinkedIn | https://www.linkedin.com/in/bruce-y-lee-68a6834/
On Twitter | https://twitter.com/bruce_y_lee
Website | https://www.bruceylee.com/
On Forbes | https://www.forbes.com/sites/brucelee/
On Psychology Today | https://www.psychologytoday.com/us/contributors/bruce-y-lee-md-mba
_____________________________
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________
This Episode's Sponsors: BlackCloak
From high school students given the wrong marks just as they're applying to university to Black defendants misclassified as higher risk for reoffending, AI is driving unfair and damaging outcomes. Technology firms promise what they call Responsible AI. But can they really deliver if they can't keep up with the speed of change? Can governments impose ethical standards and safe use of AI-based systems and products? In this episode, Anton Ovchinnikov, Distinguished Professor of Management Analytics at Smith School of Business, discusses his groundbreaking research into the government response to algorithmic bias and what happens when large language models are fed AI-generated synthetic content rather than human-generated content. He is joined in conversation by host Meredith Dault. Special Guest: Anton Ovchinnikov.
In this interview, Dr. Joy Buolamwini argues that algorithmic bias in AI systems poses an existential risk to marginalized people. She challenges the assumptions of tech leaders who advocate for AI “alignment” and explains why tech companies are hypocritical when it comes to addressing bias. Dr. Joy Buolamwini is the founder of the Algorithmic Justice League and the author of “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.”
Correction: Aza says that Sam Altman, the CEO of OpenAI, predicts superintelligence in four years. Altman predicts superintelligence in ten years.
RECOMMENDED MEDIA
Unmasking AI by Joy Buolamwini: “The conscience of the AI revolution” explains how we've arrived at an era of AI harms and oppression, and what we can do to avoid its pitfalls.
Coded Bias: Shalini Kantayya's film explores the fallout of Dr. Joy's discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
How I'm fighting bias in algorithms: Dr. Joy's 2016 TED Talk about her mission to fight bias in machine learning, a phenomenon she calls the "coded gaze."
RECOMMENDED YUA EPISODES
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
Protecting Our Freedom of Thought with Nita Farahany
The AI Dilemma
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
This season, IRL host Bridget Todd meets people who are balancing the upsides of artificial intelligence with the downsides that are coming into view worldwide. Stay tuned for the first of five biweekly episodes on October 10! IRL is an original podcast from the non-profit Mozilla.
The popular social media app Instagram and its parent company, Meta, use artificial intelligence to moderate content. But there are growing concerns that the “training data” for AI is biased against women and people of color. A Los Angeles photographer thinks this “algorithmic bias” is part of the reason Instagram disabled his account. Reporter: Beth Tribolet, KQED The California legislature has passed a bill that would ban the hand-counting of ballots in most elections. The legislation was targeted specifically at Northern California's Shasta County, where supervisors did away with Dominion voting machines earlier this year. Reporter: Roman Battaglia, Jefferson Public Radio Much of the world's highest quality cotton is grown in the San Joaquin Valley. But the return of Tulare Lake could have a devastating impact on the Central Valley's cotton industry. Reporter: Kerry Klein, KVPR
Guest: Eric Olson, Co-Founder & CEO at Consensus.app [@ConsensusNLP]
On LinkedIn | https://www.linkedin.com/in/eric-olson-1822a7a6/
On Twitter | https://twitter.com/IPlayedD1
_____________________________
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________
This Episode's Sponsors: BlackCloak
Guillermo Corea is the Managing Director of the SHRM Workplace Innovation Lab and Venture Capital initiatives. He joined SHRM in 2015. He and his team are focused on finding and cultivating technologies that will impact the future of work. Guillermo's team organizes the SHRM Better Workplaces Challenge Cup and Workplace Tech Accelerator, plus they lead the organization's impact investing program. Guillermo is a vocal leader in the HRTech community. This was a fun one because we got to record in person at SHRMTech 2023 in San Francisco. Only our fifth live recording in more than 190 episodes!
Listen and learn...
How HR teams should drive workplace innovation
Which Shark Tank shark is judging the Better Workplaces Challenge Cup
How SHRM Labs connects tech entrepreneurs with HR leaders
Why the CHRO is the most strategic exec in the C-suite
How the pandemic and an aging employee population are creating opportunities for HRTech
The technology Guillermo says will change work most in the next decade
How to confront the problem of biased algorithms making HR decisions
Why the HR blockchain will replace background check vendors
The HRTech company Guillermo is ready to fund!
References in this episode...
Reza Nazeman, former CIO of SAP Concur, on AI and the Future of Work
Kamal Ahluwalia, Eightfold President, on AI and the Future of Work
Jason Corsello, VA at Acadia Ventures, on AI and the Future of Work
SHRM Labs
There's an extra dose of dysfunction in this episode but we think it fits since we're talking about how the computers are all trying to take our jobs and the best way to fight back! Okay not really - but we are talking about Artificial Intelligence (AI) and Algorithmic Bias in Healthcare today. Piggybacking on our episodes about Medical Bias (#29 and #30, in case you want to re-listen real quick), this episode was recommended to us by a listener and we take a look at what AI in Healthcare looks like, what it is used for, and why it can be a problem. Hospitals and doctors use AI every day to help diagnose and treat patients - but what happens when the mathematical equation (a/k/a ALGORITHM) used to get those diagnoses was written by someone who is biased? Or what happens when the data we used to create that algorithm is all skewed in the first place? We also talk about ways to combat those problems, so don't worry - it's not the worst thing to ever happen, unless you ask Garet, in which case, the world is ending. Follow us on Instagram at www.instagram.com/thebarisanklehigh or look us up @thebarisanklehigh and consider joining our Patreon at www.patreon.com/thebarisanklehigh. You can also check out our merch at bit.ly/anklehighmerch if you want to put us on your body!
Sources:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6875681/
https://nihcm.org/publications/artificial-intelligences-racial-bias-in-health-care
https://www.science.org/doi/10.1126/science.aax2342
https://en.wikipedia.org/wiki/Artificial_intelligence
https://www.researchgate.net/profile/Huiwen-Loh/publication/360001378_Automated_detection_of_ADHD_Current_trends_and_future_perspective/links/625b57eaa279ec5dd7fd6e8c/Automated-detection-of-ADHD-Current-trends-and-future-perspective.pdf
https://www.nature.com/articles/s41746-020-0288-5
https://www.healthnavigator.org.nz/health-a-z/h/heart-attack-symptoms-womens/
Algorithms and artificial intelligence (AI) are increasingly being deployed in the financial services industry, with massive potential to automate and enhance processes, increase efficiency, improve customer service, and augment investment and lending analyses. However, with those potential benefits come challenges, such as the risk that AI applications may result in unintended bias or “unfair” discrimination against certain sub-groups. Please join Mayer Brown partners Niketa Patel, Tori Shinohara, and Jenn Rosa as they discuss potential risks and the current federal regulation landscape with respect to AI.
Camille Castelyn interviews Marisa Dallas about her diverse career interests including sex-based bias in healthcare artificial intelligence algorithms, patient non-compliance, and the intersection of the pharmaceutical industry and bioethics. Dallas is currently a Doctor of Pharmacy Candidate at the University of Michigan. She completed a Bachelor of Science in Human Biology at Michigan State University and a Master of Science in Bioethics at Columbia University.
digital kompakt | Business & digitalization from startup to corporate
EXPERT TALK | An artificial intelligence can only be as sensitive as the person who programmed it. Because people themselves still have many blind spots when it comes to diversity, these naturally show up in the machine as well. In conversation with author Kenza Ait Si Abbou, Joel and Lunia explore how to recognize and correct these algorithmic biases, and how to make both humans and, ultimately, artificial intelligence more sensitive to diversity.
You will learn...
...why algorithmic biases arise
...which algorithmic biases in AI are currently a problem
...whether these biases can simply be "repaired"
...how the method of reinforcement learning deals with discrimination
...whether too much post-processing of AI outputs can also pose a danger
...which methods can be used to unlearn discrimination
...what role empathy plays in leadership
...how to dismantle hierarchy barriers within a company
This episode focuses on diversity: let's rethink organizations in an open and tolerant way! After initially having organizational developer Marina Löwe and Ratepay founder Miriam Wohlfarth on the mic repeatedly to raise awareness of diversity, Joel now regularly discusses topics around leadership and diversity with Lunia Hara (Diconium). For the two of them, it is explicitly not about men versus women, but about creating empathy for one another as well as a holistic picture and understanding of different roles and perspectives.
Welcome back to the Tech Policy Grind! Today, Reema chats with Caitlin Chin, a Class 4 Fellow at the Foundry and a fellow at the Center for Strategic and International Studies (CSIS), on her work at the intersection of privacy, antitrust, and algorithmic bias. At CSIS, Caitlin researches technology regulation in the United States and abroad. She previously worked as a research analyst at the Brookings Institution, where her projects centered around U.S. federal and state legislation related to information privacy, antitrust, and algorithmic bias. At Brookings, Chin coauthored "Bridging the gaps: A path forward to federal privacy legislation" (with Cameron Kerry, John Morris Jr., and Nicol Turner Lee), which put forward a comprehensive framework for national commercial privacy standards in the United States. In addition, she has published over two dozen other reports or commentaries on public policy issues including "Addressing Big Tech's power over speech" (with Bill Baer) and "Why Democrats and Republicans would benefit from hate crime protections for Asian Americans." She's also spoken on C-SPAN, WOSU/NPR, and France 24, and her work has been cited by the Washington Post, the Wall Street Journal, and the Future of Privacy Forum, among other organizations. She has a BA in government and Spanish from the University of Maryland and an MPP from Georgetown University's McCourt School of Public Policy. Her master's thesis, "Examining national privacy laws in the context of international trade," won a student paper award at the 48th Research Conference on Communications, Information, and Internet Policy (TPRC48) in 2020. She was also a recipient of Public Knowledge's 20/20 Visionaries award in 2021.
Coming soon from the Foundry: keep an eye out for the next round of applications to become a Foundry Fellow! If you'd like to sponsor an episode or propose a guest for the show, get in touch with us: foundrypodcasts@ilpfoundry.us If you'd like to support the show, consider donating to the Foundry; you can do so here.
Maia Hightower, M.D., M.B.A., MPH, Chief Digital Technology Officer of the University of Chicago Medicine
Ivor Horn, M.D., MPH, Director, Health Equity & Social Determinants of Health, Google
Artificial Intelligence is full of technological and economic promise, but just like its creators, AI isn't free from subconscious discrimination. As AI becomes more commonplace in the medical field, questions of whether racial bias will be mitigated or expanded in the future are omnipresent. The solution will depend on how much effort is put into making AI more equitable. Join Lee Hawkins and Drs. Maia Hightower and Ivor Horn as they delve into this new frontier.
“Algorithmic bias is part of our history. It is part of the history of medicine, part of the history of the United States, and part of the history of our world, for many reasons.” --Dr. Maia Hightower
“The real-world bias is in the real-world data.” --Dr. Maia Hightower
“I literally went into medicine to transform the way people behave in, the way physicians behave in health care.” — Ivor Horn, M.D., MPH
“And when I think about technology, it's all about, ‘how am I giving people more information, more access,' so that when they walk through the doors of a health care system, like, they have the tools to say, ‘I know this, I understand this, this is my question for you, and this is what I expect of this health care system for me and for my family.'” — Ivor Horn, M.D., MPH
“We know that data shows that more diverse teams have better outcomes. They're more, businesses are more profitable when they have more diverse teams sitting around the table. And it's really important not just to have the team but also give the team space to speak and confidence to have their voice and bring and be their full selves when they come to the room.” — Ivor Horn, M.D., MPH
Kareem Saleh, CEO & Co-Founder, FairPlayFairness as a Service is a new concept being championed by FairPlay, with the aim of reducing algorithmic bias for people of colour, women, and other historically disadvantaged groups. Using artificial intelligence (AI) to identify bias in lending models, the idea is to identify more good borrowers – typically people who would otherwise be turned down for credit as a result of bias. This summer, the company announced a $10 million Series A funding round. Robin Amlôt of IBS Intelligence speaks to Kareem Saleh, CEO and Co-Founder of FairPlay.
This week, Karen is joined by Itoro Liney, Data Science Manager at Principality Building Society, and AI Ethics Researcher, to discuss one of the most important issues in our field: algorithmic bias. In this episode, Itoro uses a simple, yet powerful example to highlight the impact of biased data, especially when amplified by Machine Learning algorithms. You will also hear practical advice on how data scientists can gain clarity on possible bias in their data, and how to account for it. Enjoy! ********************* Show notes: ********************* Itoro's LinkedIn: https://www.linkedin.com/in/chrisliney/ She recommends reading: Radical Candor: https://www.radicalcandor.com/ The five dysfunctions of a team: https://www.amazon.co.uk/Five-Dysfunctions-Team-Leadership-Lencioni/dp/0787960756
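For listeners who want a concrete starting point for the kind of data check Itoro describes, one simple first step is to compare positive-outcome rates across groups in the training data before any model is fit. The sketch below assumes pandas and uses hypothetical column names ("group", "label") and a hypothetical CSV path; it is an illustration, not the specific approach discussed in the episode.

```python
# Minimal sketch: surface possible label imbalance in training data by
# comparing how often the positive outcome appears for each group.
# Column names ("group", "label") and the CSV path are hypothetical.
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Share of rows with a positive (1) label, per group."""
    return df.groupby(group_col)[label_col].mean().sort_values()

if __name__ == "__main__":
    df = pd.read_csv("training_data.csv")  # hypothetical file
    rates = positive_rate_by_group(df, group_col="group", label_col="label")
    print(rates)
    # Large gaps between groups do not prove bias on their own, but they are
    # exactly the kind of imbalance a model can learn and amplify, so they
    # deserve investigation before training.
```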
Everyone who interacts with technology in any way is either affected by or directly affecting algorithmic bias on a daily basis, and most of it is invisible to us. That's what I'm talking about today: algorithmic bias, where it comes from, how it affects you and your business, and how we can use strategic big-picture thinking to mitigate and erase the harm it causes.
Welcome back to the Tech Policy Grind Podcast by the Internet Law and Policy Foundry! In this episode, Class 4 Fellow Lama Mohammed interviews Sophia Baik, a Postdoctoral Researcher at the Center for Long-Term Cybersecurity and incoming Assistant Professor at the Department of Communication Studies at the University of San Diego. Episode 15 follows Sophia and Lama in a detailed discussion on the future of the American Data Privacy and Protection Act (ADPPA) — the United States' most comprehensive federal privacy legislation to date — by breaking the bill down, highlighting its significance and the future of the bill when Congress comes back from the August recess. While the rest of the world responds to the growing ubiquitous nature of technology through its enactment of comprehensive privacy bills, the United States continues to fall behind by failing to pass federal privacy legislation. Our experts use this opportunity to dive into why the ADPPA is such a monumental bill, especially as it relates to protecting civil rights and liberties in the digital era. Although the bill is at risk of failing to pass the House, Sophia provides listeners with recommendations on how to get involved with the privacy movement and how to protect our online data and digital identities. You can connect with Sophia on Twitter (@jeeyunbaik) and read all her amazing published research on her Google Scholar profile. Thanks for listening, and stay tuned for our next episode!
On 27 September 2021, the ODI in partnership with the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge and the Center for Responsible AI at NYU (NYU R/AI) convened an online roundtable to explore experimentation in data policy and practice around how structurally under-represented communities in North America and the EU can be transnational emergent forces that renegotiate or reimagine the social contract under the Fourth Industrial Revolution.
Artificial intelligence has a variety of useful applications in healthcare. But algorithmic bias, along with data privacy and security concerns, have prompted significant ethical and legal concerns. Linda Malek, partner at Moses & Singer and chair of the firm's healthcare, privacy, and cybersecurity practice group, discusses the risks associated with AI in healthcare.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reading the ethicists: A review of articles on AI in the journal Science and Engineering Ethics, published by Charlie Steiner on May 18, 2022 on LessWrong. Epistemic status: Stream of consciousness reactions to papers read in chronological order. Caveat lector. I have a dirty family secret. My uncle is a professional ethicist. In a not-too roundabout way, this is why I ended up looking at the October 2020 issue of the journal Science and Engineering Ethics, their special issue on the ethics of AI. I am now going to read that issue, plus every article this journal has published about AI since then [I wussed out and am just going to skim the latter for ones of special interest] and give you the deets. October 2020 Hildt et al., Editorial: Shaping Ethical Futures in Brain-Based and Artificial Intelligence Research This is the introduction to the issue. They give each paper a sentence or two of summary and try to tie them all together. The authors helpfully give a list of topics they think are important: Data Concerns: Data management, data security, protection of personal data, surveillance, privacy, and informed consent. Algorithmic Bias and Discrimination: How to avoid bias and bias related problems? This points to questions of justice, equitable access to resources, and digital divide. Autonomy: When and how is AI autonomous, what are the characteristics of autonomous AI? How to develop rules for autonomous vehicles? Responsibility: Who is in control? Who is responsible or accountable for decisions made by AI? Questions relating to AI capabilities: Can AI ever be conscious or sentient? What would conscious or sentient AI imply? Values and morality: How to build in values and moral decision-making to AI? Are moral machines possible? Should robots be granted moral status or rights? Based on this list, I anticipate that I'm about to run into four-sixths ethics papers about present-day topics that I will skim to point out particularly insightful or anti-insightful ones, one-sixth philosophers of mind that I will make fun of a little, and one-sixth papers on "How to build values into general AI" that I'm really curious as to the quality of. Onward! Nallur, Landscape of Machine Implemented Ethics Primarily this paper is a review of a bunch of papers that have implemented or proposed ethics modules in AI systems (present-day things like expert systems to give medical advice, or lethal autonomous weapons [which he has surprisingly few qualms about]). These were mostly different varieties of rule-following or constraint-satisfaction, with a few handwritten utility functions thrown in. And then one of these is Stuart Armstrong (2015) for some reason - potentially that reason is that the author wanted to at least mention "value-loading," and nobody else was talking about it (I checked - there's a big table of properties of different proposals). It also proposes evaluating different proposals by having a benchmark of trolley-problem-esque ethical dilemmas. The main reason this idea won't work is that making modern-day systems behave ethically involves a bunch of bespoke solutions only suitable to the domain of operation of that system, not allowing for cross-comparison in any useful way. 
If we were to salvage this idea, we might wish to have a big list of ethical questions the AI system should get the right answer to, and then when building a sufficiently important AI (still talking about present-day applications), the designers should go through this list and find all the questions that can be translated into their system's ontology and check that their decision-making procedure gets acceptable answers. E.g. "Is it better to kill one person or two people?" can become self-driving car scenarios where it's going to hit either one or two people, and it should get the right answer, bu...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reading the ethicists: A review of articles on AI in the journal Science and Engineering Ethics, published by Charlie Steiner on May 18, 2022 on LessWrong. Epistemic status: Stream of consciousness reactions to papers read in chronological order. Caveat lector. I have a dirty family secret. My uncle is a professional ethicist. In a not-too roundabout way, this is why I ended up looking at the October 2020 issue of the journal Science and Engineering Ethics, their special issue on the ethics of AI. I am now going to read that issue, plus every article this journal has published about AI since then [I wussed out and am just going to skim the latter for ones of special interest] and give you the deets. October 2020 Hildt et al., Editorial: Shaping Ethical Futures in Brain-Based and Artificial Intelligence Research This is the introduction to the issue. They give each paper a sentence or two of summary and try to tie them all together. The authors helpfully give a list of topics they think are important: Data Concerns: Data management, data security, protection of personal data, surveillance, privacy, and informed consent. Algorithmic Bias and Discrimination: How to avoid bias and bias related problems? This points to questions of justice, equitable access to resources, and digital divide. Autonomy: When and how is AI autonomous, what are the characteristics of autonomous AI? How to develop rules for autonomous vehicles? Responsibility: Who is in control? Who is responsible or accountable for decisions made by AI? Questions relating to AI capabilities: Can AI ever be conscious or sentient? What would conscious or sentient AI imply? Values and morality: How to build in values and moral decision-making to AI? Are moral machines possible? Should robots be granted moral status or rights? Based on this list, I anticipate that I'm about to run into four-sixths ethics papers about present-day topics that I will skim to point out particularly insightful or anti-insightful ones, one-sixth philosophers of mind that I will make fun of a little, and one-sixth papers on "How to build values into general AI" that I'm really curious as to the quality of. Onward! Nallur, Landscape of Machine Implemented Ethics Primarily this paper is a review of a bunch of papers that have implemented or proposed ethics modules in AI systems (present-day things like expert systems to give medical advice, or lethal autonomous weapons [which he has surprisingly few qualms about]). These were mostly different varieties of rule-following or constraint-satisfaction, with a few handwritten utility functions thrown in. And then one of these is Stuart Armstrong (2015) for some reason - potentially that reason is that the author wanted to at least mention "value-loading," and nobody else was talking about it (I checked - there's a big table of properties of different proposals). It also proposes evaluating different proposals by having a benchmark of trolley-problem-esque ethical dilemmas. The main reason this idea won't work is that making modern-day systems behave ethically involves a bunch of bespoke solutions only suitable to the domain of operation of that system, not allowing for cross-comparison in any useful way. 
If we were to salvage this idea, we might wish to have a big list of ethical questions the AI system should get the right answer to, and then when building a sufficiently important AI (still talking about present-day applications), the designers should go through this list and find all the questions that can be translated into their system's ontology and check that their decision-making procedure gets acceptable answers. E.g. "Is it better to kill one person or two people?" can become self-driving car scenarios where it's going to hit either one or two people, and it should get the right answer, bu...
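To make that salvage proposal concrete, here is a minimal sketch of what such a checklist could look like in code. It is my own illustration, not anything from the Nallur paper or the post: each dilemma is translated into the system's ontology as a scenario paired with the answer the benchmark treats as acceptable, and a candidate decision procedure is scored against the list. All of the names (`Scenario`, `run_benchmark`, `toy_policy`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    description: str
    options: List[str]   # actions available to the system, in its own ontology
    acceptable: str      # the answer the benchmark treats as correct

def run_benchmark(policy: Callable[[Scenario], str], scenarios: List[Scenario]) -> float:
    """Return the fraction of dilemmas where the policy's choice is acceptable."""
    passed = sum(1 for s in scenarios if policy(s) == s.acceptable)
    return passed / len(scenarios)

# "Is it better to kill one person or two?" translated into a self-driving-car ontology.
scenarios = [
    Scenario(
        description="Brake failure: swerve left (hit 1 pedestrian) or stay on course (hit 2)",
        options=["swerve_left", "stay_course"],
        acceptable="swerve_left",
    ),
]

def toy_policy(s: Scenario) -> str:
    # Placeholder decision procedure; a real system would have its own bespoke logic.
    return s.options[0]

if __name__ == "__main__":
    print(f"pass rate: {run_benchmark(toy_policy, scenarios):.0%}")
```

The scoring loop is the easy part; the objection in the paper lands on the step before it, since writing scenarios that even exist in a given system's ontology is exactly where cross-domain comparison breaks down.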
Too many young people – particularly young people of color – lack enough familiarity or experience with emerging technologies to recognize how artificial intelligence can impact their lives, in either a harmful or an empowering way. Educator Ora Tanner saw this and rededicated her career toward promoting tech literacy and changing how we understand data sharing and surveillance, as well as teaching how AI can be both a dangerous tool and a powerful one for innovation and activism. By now her curricula have touched more than 30,000 students, many of them in her home state of Florida. Tanner also went to bat against the Florida Schools Safety Portal, a project to amass enormous amounts of data about students in an effort to predict and avert school shootings – and a proposal rife with potential biases and abuses. Tanner speaks with EFF's Cindy Cohn and Jason Kelley on teaching young people about the algorithms that surround them, and how they can make themselves heard to build a fairer, brighter tech future. In this episode you'll learn about: Convincing policymakers that AI and other potentially invasive tech isn't always the answer to solving public safety problems. Bringing diverse new voices into the dialogue about how AI is designed and used. Creating a culture of searching for truth rather than just accepting whatever information is put on your plate. Empowering disadvantaged communities not only through tech literacy but by teaching informed activism as well. This podcast is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: Meet Me at Phountain by gaetanh (c) copyright 2022 http://ccmixter.org/files/gaetanh/64711; Hoedown at the Roundabout by gaetanh (c) copyright 2022 http://ccmixter.org/files/gaetanh/64711; JPEG of a Hotdog by gaetanh (c) copyright 2022 http://ccmixter.org/files/gaetanh/64711; reCreation by airtone (c) copyright 2019 http://dig.ccmixter.org/files/airtone/59721
In this episode of the Faegre Drinker on Law and Technology Podcast, Jason G. Weiss sits down with Bennett Borden, Faegre Drinker's chief data scientist, to discuss algorithmic bias and what companies should know about the latest regulatory developments.
For today's episode, Tyrell and Daphne explore how artificial intelligence contributes to racial inequity by discussing the Netflix documentary, Coded Bias. The hosts begin the episode by catching up (00:30) and discussing “Oh Lawd” news (6:40). They then turn their attention to the topic of algorithmic bias and the future of technology (43:38). They start the conversation by discussing the history of artificial intelligence (44:40), how algorithmic determinism shapes decision making across various industries (47:25), and how algorithms are used in the United States and abroad (54:00). They close the episode by discussing the social consequences of algorithmic bias as well as key lessons and takeaways from the documentary (1:06:55). Resources: BhD Patreon - https://www.patreon.com/bhdpodcast | The Dark Side of Reform - https://rowman.com/ISBN/9781793643759/The-Dark-Side-of-Reform-Exploring-the-Impact-of-Public-Policy-on-Racial-Equity | Discount Code (30% Off): LXFANDF30
As machine learning is used to market and sell, we must consider how biases in models and data can impact society. Arizona State University Professor Katina Michael joins Frederic Van Haren and Stephen Foskett to discuss the many ways in which algorithms are skewed. Even a perfect model will produce biased answers when fed input data with inherent biases. How can we test and correct this? Awareness is important, but companies and governments should take an active interest in detecting bias in models and data. Links: "Algorithmic bias in machine learning-based marketing models" Three Questions: Frederic: When will AI be able to reliably detect when a person is lying? Stephen: Is it possible to create a truly unbiased AI? Tom Hollingsworth of Gestalt IT: Can AI ever recognize that it is biased and learn how to overcome it? Guests and Hosts: Katina Michael, Professor in the School for the Future of Innovation in Society and School of Computing and Augmented Intelligence at Arizona State University. Read her paper in the Journal of Business Research. You can find more about her at KatinaMichael.com. Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren. Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett. Date: 2/21/2022 Tags: @SFoskett, @FredericVHaren
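The episode's central claim, that even a well-fit model reproduces whatever skew is baked into its training data, is easy to demonstrate, and per-group auditing is one way to test for it. Below is a minimal sketch; it is my own toy example rather than anything from the episode or from Professor Michael's paper, and it assumes numpy and scikit-learn are installed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)      # "true ability", identically distributed in both groups

# Historical labels are skewed: group B was approved less often at the same skill level.
label = (skill + rng.normal(0.0, 0.5, n) - 0.8 * group > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)   # the model fits the biased labels very well
pred = model.predict(X)

# Audit: compare outcomes per group, overall and among equally skilled people.
for g in (0, 1):
    mask = group == g
    print(
        f"group {g}: approval rate {pred[mask].mean():.2f}, "
        f"approval among skill > 0: {pred[mask & (skill > 0)].mean():.2f}"
    )
```

Both groups have the same skill distribution, yet the fitted model keeps applying the historical penalty to group B; a more powerful optimizer would only reproduce the labels more faithfully, which is why the emphasis falls on auditing and correcting the data rather than on the model alone.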
Radically Pragmatic, a podcast from the Progressive Policy Institute
On this episode of the Radically Pragmatic podcast, Dr. Kalinda Ukanwa sits down with Jordan Shapiro, PPI's Data and Economic Policy Analyst, to discuss her new paper entitled, “Breaking Up Big Tech Will Not Prevent Algorithmic Harm to Society.” Dr. Ukanwa and Jordan dive into what algorithms really are and how they're simultaneously helping and harming consumers. They unpack why breaking up Big Tech is not the solution to the algorithm bias problem, and they explore what actions can be taken to remedy algorithmic harm on a societal level and in your everyday life. Dr. Kalinda Ukanwa is an Assistant Professor at the University of Southern California's Marshall School of Business who researches how algorithmic bias, algorithmic decision-making, and consumer reputations impact firms. Follow Dr. Ukanwa on Twitter at @KalindaUkanwa. Follow Jordan Shapiro, PPI's Data and Economic Policy Analyst on Twitter at @jd_shapiro. Read the report here. Learn more about the Mosaic Economic Project here. Learn more about the Progressive Policy Institute here.
Current trends and hypes are sprouting up like dandelions between cracks in the pavement. A quick glance at Twitter or at the latest headlines is enough to get the impression that you should have started dealing with Web 3.0, AI, the Metaverse, Algorithmic Bias, and so on, yesterday. The topics are often technically complex, polarizing, and loaded with conflicts of interest on the part of individuals or companies. That raises the wish for the topic to be explained by a neutral authority, one that can at least back up complicated subjects empirically, even if it cannot make them easily graspable: science. Paradoxically, however, research, and the social sciences in particular, seems to lag years behind the blogs, YouTube videos, and tweets. Why that is so is what the current Desk Reject episode is about. Among other things, the hosts discuss: which hype topics are currently circulating in their fields; whether there is a nuanced definition of hype topics; what the role of science should be with regard to hype topics; and the risks for PhDs who want to work on a hype topic. And plenty more half-truths.
Sara Gerke, assistant professor of law at the Penn State University Dickinson School of Law, talks about the ethical and legal challenges of artificial intelligence and use of big data in health care.
What you'll get out of this episode: Listen in as co-host Rebecca Gwilt chats with Circulo Health Founding Chief Medical Officer and Head of Mosaic Circle, Vik Bakhru, MD, about how diversity, equity, and inclusion (DEI) helps Circulo avoid algorithmic bias and worsening health inequities as they focus on their mission to be the future of Medicaid. In this episode you'll discover: What algorithmic bias is—and its impact on health inequity; What the economic case is for investing in DEI/prevention of algorithmic bias; How technology can address so many of these societal inequalities; What the Medicaid of the future looks like; How he defines success for Circulo and for himself as CEO. This lively conversation is one you won't want to miss if you're an artificial intelligence or machine learning innovator, a provider, a startup founder, or a healthcare tech investor. Quotables: “We have had structural issues in our society that have driven the wrong pattern of behaviors in serving those who need it.” @vikbakhru #CirculoHealth on Ep16 @DecodingHealth1 w @NixonGwiltLaw https://sliceofhealthcare.com/category/decoding-healthcare-innovation/ “I'm seeing so much innovation around how to get providers oriented around gaps in understanding between themselves and the people they serve.” @vikbakhru Ep16 @DecodingHealth1 w @NixonGwiltLaw https://sliceofhealthcare.com/category/decoding-healthcare-innovation/ “We don't need prior authorizations…if a clinician on the front lines identifies a need, they should be able to address that need.” @vikbakhru @DecodingHealth1 w @NixonGwiltLaw https://sliceofhealthcare.com/category/decoding-healthcare-innovation/ “It's a heavy lift to pull off, but if we get it right, we can remove 16 cents on the dollar of administrative overhead” @vikbakhru #CirculoHealth on Ep16 @DecodingHealth1 w @NixonGwiltLaw https://sliceofhealthcare.com/category/decoding-healthcare-innovation/ “The normal person assumes that it couldn't possibly be as inefficient as it actually is” @rebeccagwilt talking w @vikbakhru re #Medicaid inefficiencies on Ep16 @DecodingHealth1 w @NixonGwiltLaw https://sliceofhealthcare.com/category/decoding-healthcare-innovation/ Recommended Resources: Find out the mission and direction at Circulo in this overview video. Read about the $50M Series A funding for Circulo from earlier this year. Connect with Vik Bakhru on LinkedIn. From the NGL blog: We published our 2022 trends and predictions post just before this interview was taped, and we're in alignment with Vik in thinking there will be greater focus on digital health equity. Read the rest of the opportunities and warnings for the year ahead here. Join the Conversation: Are you a digital health or health system innovator? Tell us what topics and people you'd like us to cover in future episodes: – Website – LinkedIn – Twitter – Instagram – YouTube. Follow our daily updates on LinkedIn: – Carrie – Rebecca. About Your Hosts: Carrie Nixon and Rebecca Gwilt are partners at Nixon Gwilt Law, a healthcare innovation law firm exclusively serving Providers, Digital Health Companies, and Life Science Businesses seeking to transform the way we receive and experience healthcare. Find out more at NixonGwiltLaw.com. This podcast is produced by Slice of Healthcare LLC.
Catherine Tucker is the Sloan Distinguished Professor of Management Science, Professor of Marketing, Chair of the MIT Sloan Ph.D. Program, a co-founder of the MIT Cryptoeconomics Lab, which studies the applications of blockchain, and also a co-organizer of the Economics of Artificial Intelligence Initiatives sponsored by the Alfred P. Sloan Foundation. Her research interests lie in how technology allows firms to use digital data and machine learning to improve performance, and in the challenges this poses for regulation. Professor Tucker has particular expertise in online advertising, digital health, social media, and electronic privacy. Her research studies the interface between marketing and the economics of technology and law. She holds a BA from the University of Oxford and a PhD in economics from Stanford University.
DAILYS NUGS: Algorithmic bias, misused private equity funds, and Amazon's big decision-- human life vs. profit from stupid items no one needs
In Ep. 9, artificial intelligence expert Kristine Gloria joins us. Kristine holds a PhD in cognitive science and a master's in media studies, and we talk everything tech and well-being when it comes to our relationship and behaviors with tech, and those of our kids. We also cover how to talk with your kids about tech, language to use when it's time to step away, and resources that can help parents navigate what's appropriate for their kids, including emerging platforms. Resources from this episode: Common Sense Media; Together: The Healing Power of Human Connection in a Sometimes Lonely World by Vivek Murthy; Playful by Design; Digital Wellness Lab; Social Media & Adolescent Research Team (UW-Madison); Welcome to the Human Experience; Digital for Good: Raising Kids to Thrive in an Online World (by Richard Culatta); Power and Progress in Algorithmic Bias (2020). Find Kristine Gloria: Email, LinkedIn, Slow Talk. We are building out more spaces to connect! For now, find us: Newsletter (more joy in motherhood!), Instagram, Facebook, YouTube, Twitter, LinkedIn
In the fifth episode of our season on simplemindedness, Ryan, with the help of "The Social Dilemma" Netflix documentary, discusses the issue of algorithmic bias.
On Track - Trending Topics in Business and Law - by Haynes and Boone, LLP
Welcome to a new episode of AI Chats, a podcast series produced by the law firm of Haynes Boone and lawyers from its AI and Deep Learning Practice Group to explore the exciting, ever-evolving, and occasionally controversial world of artificial intelligence. In today's episode, we are going to discuss algorithmic bias in AI with Clarita Uchendu, in-house counsel for the R&D arm of a Fortune 50 company. Moderator: Dina Blikshteyn. Speaker: Clarita Uchendu
Have you ever considered whether you are truly in control of the social media and internet searches you consume? This is our most thought-provoking interview yet. We are joined by Corey Patrick White, a technologist and futurist and a Partner and Senior Vice President of Future Point of View. Corey helps leaders to look out into the future and anticipate how technology will impact their organizations and themselves. He is focused in the areas of machine learning, artificial intelligence, cybersecurity, and data privacy. This episode is Part 2 of a special two-part series where Corey White and The Leadership Project founder, Mick Spiers, discuss issues about technology that all modern leaders need to know. They explore the power (and dangers) of machine learning algorithms, using social media and internet searches as key examples. It is the responsibility of all leaders to ensure that technology is embraced for virtuous purposes, that it is not used with malicious intent, AND that it does not have unintended negative consequences on society. Corey and Mick discuss how algorithmic bias impacts our daily lives and amplifies societal prejudices and biases, making it increasingly difficult to address unconscious bias, equity and equality in the world. They discuss the considerations a leader needs when approaching how to ethically use technology for good and avoid unintended outcomes. YOU DO NOT WANT TO MISS THIS!!! For more information on Corey Patrick White you can see his groundbreaking TEDx talk at - https://www.ted.com/talks/corey_patrick_white_how_algorithms_spread_human_bias His personal website - https://coreypatrickwhite.com and Future Point of View - https://fpov.com For more information on The Leadership Project you can find us at: LinkedIn - https://www.linkedin.com/company/the-leadership-proj Facebook - https://www.facebook.com/TheLeadershipProj Instagram - https://www.instagram.com/theleadershipproj Twitter - https://twitter.com/leadershipproj1 Website - www.mickspiers.com YouTube - https://www.youtube.com/channel/UCPG9X7weoI4Xs3SreZab1rQ The full video of this interview will soon be posted on our YouTube Channel.
Our 66th episode with a summary and discussion of last week's big AI news! NOTE: audio is fixed on this upload. This week: Meet BeachBot, a beach rover that uses AI to remove cigarette butts from beaches Here Is A Fully Autonomous AI-Powered Beehive That Could Save Bee Colonies Researchers demonstrate that malware can be hidden inside AI models Facebook AI Releases ‘VoxPopuli', A Large-Scale Open Multilingual Speech Corpus For AI Translations in NLP Systems China built the world's largest facial recognition system. Now, it's getting camera-shy. Twitter announces first algorithmic bias bounty challenge Say Hello To The Tokyo Olympic Robots Subscribe: RSS | iTunes | Spotify | YouTube Find this and more in our text version of this news roundup: https://lastweekin.ai/p/127 Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)
Algorithmic machine learning is everywhere, from resume-scanning software to advertisements on social media. But the discrimination inherently programmed into these algorithms can have real consequences for marginalized groups, including people of color and disabled people. In this episode, advocate, educator and attorney Lydia X.Z. Brown tells Qudsiya about their work to help more people understand and combat these threats. Lydia and Qudsiya also explore the definition of intersectionality, and its relevance when combating algorithmic discrimination. — Mentioned in the episode: Center for Democracy and Technology - Plain language resources on algorithmic discrimination — PATREON: This project is a labor of love, but it also requires resources. Learn more about how to support Down to the Struts by clicking here. — Episode transcript here.
Have you ever considered whether you are truly in control of the social media and internet searches you consume? This is our most thought-provoking interview yet. We are joined by Corey Patrick White, a technologist and futurist and a Partner and Senior Vice President of Future Point of View. Corey helps leaders to look out into the future and anticipate how technology will impact their organizations and themselves. He is focused in the areas of machine learning, artificial intelligence, cybersecurity, and data privacy. This episode is Part 1 of a special two-part series where Corey White and The Leadership Project founder, Mick Spiers, discuss issues about technology that all modern leaders need to know. They explore the power (and dangers) of machine learning algorithms, using social media and internet searches as key examples. It is the responsibility of all leaders to ensure that technology is embraced for virtuous purposes, that it is not used with malicious intent, AND that it does not have unintended negative consequences on society. Corey and Mick discuss how algorithmic bias impacts our daily lives and amplifies societal prejudices and biases, making it increasingly difficult to address unconscious bias, equity and equality in the world. They discuss the considerations a leader needs when approaching how to ethically use technology for good and avoid unintended outcomes. YOU DO NOT WANT TO MISS THIS!!! For more information on Corey Patrick White you can see his groundbreaking TEDx talk at - https://www.ted.com/talks/corey_patrick_white_how_algorithms_spread_human_bias His personal website - https://coreypatrickwhite.com and Future Point of View - https://fpov.com For more information on The Leadership Project you can find us at: LinkedIn - https://www.linkedin.com/company/the-leadership-proj Facebook - https://www.facebook.com/TheLeadershipProj Instagram - https://www.instagram.com/theleadershipproj Twitter - https://twitter.com/leadershipproj1 Website - www.mickspiers.com YouTube - https://www.youtube.com/channel/UCPG9X7weoI4Xs3SreZab1rQ The full video of this interview will soon be posted on our YouTube Channel.
Environmental justice activist and intersectional engineer Kiana Kazemi talks about her role as EIC of the Leaflet at UC Berkeley, organizing within an institutional space, the importance of checking in on friends, and so much more. Follow Kiana on Instagram @kiana.kaz. Music by Alden Hellmuth --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
What if a technology company becomes so rich, so powerful, so exploitative, and so oblivious that the harm it's doing begins to outweigh the quality and utility of its products? What if that company happens to run the world's dominant search, advertising, email, web, and mobile platforms? This month's episode of Soonish argues that it's time to rein in Google—and that individual internet users can play a meaningful part by switching to other tools and providers. It's half stem-winder, half how-to, featuring special guest Mark Hurst of the WFMU radio show and podcast Techtonic. * * * Back in 2019, in the episode A Future Without Facebook, I explained why I had decided that it was time to delete my Facebook account. In short, I was tired of being part of a system that amplified hateful and polarizing messages in order to keep users engaged and drive more advertising revenue for Zuckerberg & Co. I knew at the time that Google also engages in such practices at YouTube, and that the search giant's whole surveillance capitalism business model rests on tracking users' behavior and serving them targeted ads. But I continued as a customer of Google nonetheless, while keeping one eye on the company to see whether its tactics were growing more toxic, or less. The moment when Google finally exhausted my patience came in December 2020, when the company fired a prominent Black computer scientist and AI ethicist named Timnit Gebru in a dispute over a scholarly paper she'd co-written. Gebru and her co-authors argued in the paper that without better protections, racial and gender bias might seep into Google's artificial intelligence systems in areas like natural language processing and face recognition. Google executives thought the paper was too harsh and forbade Gebru from publishing it; she objected; and things went downhill from there. It was a complicated story, but it convinced me that at the upper echelons of Google, any remnant of a commitment to the company's sweeping motto—"Don't Be Evil"—had given way to bland and meaningless statements about "protecting users" and "expanding opportunity" and "including all voices." In fact, the company was doing the opposite of all of those things. It was time for me to opt out. How I went about doing that—and how other consumers can too—is what this episode is all about. I explain the Gebru case and other problems at Google, and I also speak at length with guest Mark Hurst, a technology critic who runs the product design consultancy Creative Good and hosts the radio show and podcast Techtonic at WFMU. Mark publishes an important site called Good Reports, where consumers can find the best alternatives to the services offered by today's tech giants in areas like search, social media, and mobile technology. Hurst emphasizes—and I agree—that leaving Google isn't an all-or-nothing proposition. The company is so deeply embedded in our lives that it's almost impossible to cut it out entirely. Instead, users can uncouple from Google step by step—first switching to a different search engine, then trying a browser other than Chrome, then switching from Gmail to some other email platform, and so forth. "Setting a goal of getting ourselves 100 percent off of Google is unrealistic," Mark says. "And it's, I think, a little bit of a harmful goal, because it's so hard that people are going to give up early on. 
But instead, let's have a goal of learning what's happening in the world and then making some choices for ourselves, some small choices at first, of how we want to do things differently. If enough of us make the decision to extricate ourselves from Google, we'll form a movement and other companies will see an opportunity to build less exploitative tools for us. You've got to start somewhere!" Notes: The Soonish opening theme is by Graham Gordon Ramsay. All additional music by Titlecard Music and Sound. If you enjoy Soonish, please rate and review the show on Apple Podcasts. Every additional rating makes it easier for other listeners to find the show. Listener support is the rocket fuel that keeps our little ship going! You can pitch in with a per-episode donation at patreon.com/soonish. Follow us on Twitter and get the latest updates about the show in our email newsletter, Signals from Soonish. Chapter Guide: 0:08 Soonish theme; 00:21 Time to Find a New Favorite Restaurant; 02:46 What I'm Not Saying; 04:01 Re-introducing Mark Hurst; 07:08 The Ubiquity of Google; 11:04 Surveillance Capitalism and YouTube Extremism; 12:29 The Timnit Gebru Case; 18:01 Hurst: "Let's shut down the entire Google enterprise"; 19:48 Midroll announcement: Support Soonish on Patreon; 20:54 10 Steps toward Reducing Your Reliance on Google; 29:04 Using Google Takeout; 30:20 The Inevitability of YouTube; 31:44 Be a Google Reducetarian; 32:20 Enmeshed in Big Tech; 37:04 The Value of Sacrifice; 40:17 End Credits and Hub & Spoke Promo for Open Source
Joy Buolamwini is a Ghanaian-American computer scientist at MIT who researches algorithmic bias in computer vision systems and is focused on identifying bias in artificial intelligence and practices for accountability. She's also the founder of the Algorithmic Justice League. Wildly accomplished, Joy has had her research on this subject covered in over 40 countries. In this episode, we discuss how the tech industry has a tendency to center the potential of technology, while often overlooking existing gaps and outcomes that perpetuate harm. In Joy's case, she identified racially biased facial recognition software that couldn't detect her skin color due to outdated code. By re-writing the practices that perpetuate harmful bias, Joy calls herself a “poet of code.” Listen to learn about how Joy's discoveries are evidence for why developing ethical software matters. Have a guest in mind for a future episode? We'd love to hear from you. Email Podcast@pdxwit.org.
On today's episode, we sat down with Ryan Baker, an Associate Professor in the Graduate School of Education at the University of Pennsylvania who directs the Penn Center for Learning Analytics. In Ryan's work he's done much research and analysis on the impact of Artificial Intelligence (AI) in education and how it affects a range of learners. We asked him a few questions specific to students with disabilities and how we can analyze AI's ability to help all students. A few of the questions discussed were 'what impact does bias truly have on AI' and 'what opportunities are there for individuals with disabilities with current and future AI.' Mentioned People/Resources: Info on Ryan Baker: https://www.upenn.edu/learninganalytics/ryanbaker/ Ryan Baker's work on Algorithmic Bias in Education: https://edarxiv.org/pbmvz/download?format=pdf René F. Kizilcec and Hansol Lee's work on Algorithmic Fairness in Education: https://arxiv.org/ftp/arxiv/papers/2007/2007.05443.pdf More information on the Educating All Learners Alliance: Website: www.educatingalllearners.org Twitter: @educateall_org YouTube: Educating All Learners
On Sippin' the Political Tea, Farai Chideya goes through the most important news of the week with contributors Errin Haines of the 19th and Jess Morales Rocketto of the National Domestic Workers Alliance. Farai welcomes Trymaine Lee of MSNBC's podcast “Into America” to talk about his coverage of the centennial of the Tulsa Massacre. Social justice advocate Tamika Mallory shares her family's legacy of activism in her new book, “State of Emergency.” OBP tech contributor Mutale Nkonde of AI for the People explains why we're more aware of disinformation and misinformation after January 6th, and what her organization is doing to combat algorithmic discrimination. EPISODE RUNDOWN: 00:39 Sippin' the Political Tea with Errin Haines of the 19th and Jess Morales Rocketto of the National Domestic Workers Alliance goes over the week's news; 18:22 Trymaine Lee of MSNBC on the Tulsa Race Massacre's impact on Black business owners today; 30:39 Tamika Mallory on her new book and generations of her family being activists; 39:45 Mutale Nkonde of AI for the People explains our growing understanding of disinformation and misinformation and their role in Covid vaccinations
Algorithms play a significant role in our everyday lives. From the searches you perform on Google, the content you see in your Facebook feed or the GPS navigation in your car, their influence on our day-to-day is everywhere. Only problem is, most of the time we're oblivious to how much sway they have and the data they use to make these decisions. Plus, by influencing future decisions based on past data, they are prone to repeating our mistakes. The implications of this range from the perpetuation of fake news to minorities unfairly receiving longer prison sentences. So can anything be done to curb this bad influence in our lives? This week on The Good, The Bad & The Advertising, hosts Amy and Dino are joined by special guest Lou Nylander. Lou is a digital marketing guru with over 15 years of experience. She's also a trusted advisor to the Good-Loop team and a FIERCE advocate for supporting women in business. Together, they discuss the impact of algorithms on our everyday lives and whether accountability from creators could lead to better transparency and less discrimination. And again, our three hosts will chat about how advertising can help solve these issues or, indeed, hinder them. Get in touch! https://www.linkedin.com/in/amy-williams-/ https://www.linkedin.com/in/dinosaw/ https://www.linkedin.com/in/lounylander/ The show was produced by Charley Tunnicliffe. The sound engineer is Alex Rayment. Find out more here: www.alexrayment.com Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Show Notes: (02:06) Azin described her childhood growing up in Iran and going to a girls-only high school in Tehran designed specifically for extraordinary talents. (05:08) Azin went over her undergraduate experience studying Computer Science at the University of Tehran. (10:41) Azin shared her academic experience getting a Computer Science MS degree at the University of Toronto, supervised by Babak Taati and David Fleet. (14:07) Azin talked about her teaching assistant experience for a variety of CS courses at Toronto. (15:54) Azin briefly discussed her 2017 report titled “Barriers to Adoption of Information Technology in Healthcare,” which takes a system thinking perspective to identify barriers to the application of IT in healthcare and outline the solutions. (19:35) Azin unpacked her MS thesis called “Subspace Selection to Suppress Confounding Source Domain Information in AAM Transfer Learning,” which explores transfer learning in the context of facial analysis. (28:48) Azin discussed her work as a research assistant at the Toronto Rehabilitation Institute, working on a research project that addressed algorithmic biases in facial detection technology for older adults with dementia. (33:02) Azin has been an Applied Research Scientist since 2018 at Georgian, a venture capital firm in Canada that focuses on investing in companies operating in the IT sectors. (38:20) Azin shared the details of her initial Georgian project to develop a robust and accurate injury prediction model using a hybrid instance-based transfer learning method. (42:12) Azin unpacked her Medium blog post discussing transfer learning in depth (problems, approaches, and applications). (48:18) Azin explained how transfer learning could address the widespread “cold-start” problem in the industry. (49:50) Azin shared the challenges of working on a fintech platform with a team of engineers at Georgian on various areas such as supervised learning, explainability, and representation learning. (51:46) Azin went over her project with Tractable AI, a UK-based company that develops AI applications for accident and disaster recovery. (55:26) Azin shared her excitement for ML applications using data-efficient methods to enhance life quality. (57:46) Closing segment. Azin's Contact Info: Website, Twitter, LinkedIn, Google Scholar, GitHub. Mentioned Content - Publications: “Barriers to Adoption of Information Technology in Healthcare” (2017); “Subspace Selection to Suppress Confounding Source Domain Information in AAM Transfer Learning” (2017); “A Hybrid Instance-based Transfer Learning Method” (2018); “Prediction of Workplace Injuries” (2019); “Algorithmic Bias in Clinical Populations — Evaluating and Improving Facial Analysis Technology in Older Adults with Dementia” (2019); “Limitations and Biases in Facial Landmark Detection” (2019). Blog Posts: “An Introduction to Transfer Learning” (Dec 2018); “Overcoming The Cold-Start Problem: How We Make Intractable Tasks Tractable” (April 2021). People: Yoshua Bengio (Professor of Computer Science and Operations Research at University of Montreal); Geoffrey Hinton (Professor of Computer Science at University of Toronto); Louis-Philippe Morency (Associate Professor of Computer Science at Carnegie Mellon University). Book: “Machine Learning: A Probabilistic Perspective” (by Kevin Murphy). Note: Azin and her collaborator are going to give a talk at ODSC Europe 2021 in June about a Georgian project with a portfolio company, Tractable. They have written a short blog post about it too, which you can find HERE.
Michelle Martin speaks to Amber Sappington, Head of Data Analytics at Finastra, to find out how algorithmic bias impacts consumers in terms of financial provision, costs of financial solutions and even interest rates. See omnystudio.com/listener for privacy information.
Algorithms, Dr. Safiya Noble and an upcoming movie, Coded Bias. --- Support this podcast: https://podcasters.spotify.com/pod/show/diane-t-sands/support
Fabio Pereira is leading the Open Innovation Labs initiative in Latin America at Red Hat. Open Innovation Labs is an intensive, highly focused residency in an environment designed to experiment, immerse and catalyze innovation. He is the author of the book "Digital Nudge: The hidden forces behind the 35,000 decisions we make every day". He was a Principal Consultant and Digital Transformation Advisor at ThoughtWorks, a large software consultancy firm, for over 10 years. Fabio Pereira 00:00 intro 00:30 35,000 Decisions, Cognitive Overload and Delegation to Technology 02:30 Pre-Modern Man's Cognitive Load and Number of Choices 13:52 Antidote to Decision Fatigue, Pomodoro Technique and CrossFit 09:40 Digital Quotient (DQ), Emotional Quotient (EQ) and IQ 14:10 Digital Nudge, Behavioral Sciences and Netflix 20:41 ThoughtWorks, Business Process and Insurance Company Use Case 26:20 UX and RX in Book Writing, Summarization and Legibility 30:20 Nir Eyal, Indistractable and Divided Attention 31:28 Growing up in Brazil, Movies with Subtitles and Hobbies 35:45 IRC, Global Citizenship, Pharmacy and World Travel 44:53 Snapstreak, YouTube Videos, Notifications, Dopamine and Loss Aversion 48:26 John Suler's 6 Factors of Inhibition, Dual Identity and Twitter vs LinkedIn 55:00 Clubhouse, Leaked Recordings and Real Self 56:00 Big Brother Brazil, Reality Shows and Evocation of Real Emotions 57:55 Cyborgs, Biohacking, Neuralink Teaches Monkeys to Play Games and Earthquake in Chile 01:01:00 Induced Emotions through Neurotransmitters and Hunger Aggression 01:05:00 Human Behaviour Biology, Robert Sapolsky and Diet/Memory Relationship 01:07:00 Thinking Fast and Slow and the Ivy League-baffling Bat and Ball Riddle 01:12:30 Intuition, ESP, Meditation and Eckhart Tolle 01:18:12 Playing Prank on TEDx Audience, GDPR Cookies and Privacy Policy Agreement 01:22:00 Default Biases for Visitors, Checkboxes, Radio Buttons and Automatic Suggestions 01:29:52 Algorithmic Bias, New Zealand's AI-based Passport Issuance and the Movie Coded Bias 01:38:15 Intentional Bias, Diversified Training Set and Double-Edged Sword 01:41:40 Moral Decisions for Self-Driving Cars, MIT Review Article on Flawed IMAGENET Data 01:49:25 Time Well Spent Movement, AR/VR Tools for Patients in Hospitals and Digital Nudging Tools 01:56:20 From CBT to Self-Assessing Behavioral Patterns 01:57:30 Innovation and Work at Red Hat and Infobizity 02:01:00 Steve Wozniak, CS101 and Goals for 'Digital Nudge'
My honored guest for the launch episode of Indivisible AI is Edward Santow, the Human Rights Commissioner representing Australia at the Australian Human Rights Commission. Under Ed's pioneering leadership the Commission launched the Human Rights & Technology Project back in 2018, which reflects its status as a first mover in the global digital policy domain. We begin our conversation by discussing the importance of using human rights as a system of law to frame our approach to AI governance. We then discuss a recent paper published by the Commission that specifically provides guidance for companies on addressing algorithmic bias, where we frame the relationship between this distinct concept of bias and the legal principle of non-discrimination. Our conversation also touches on the geopolitical. In particular we address the stakes of promoting liberal values not only at home but also in the global deployment of AI, and how to approach those challenges given our respective countries' roles on a shared stage. Ed also happens to be one of my favorite people working in this field. He is what you would call a true leader: not only incisive and approachable in his ability to articulate complex issues but also kind and committed in a way that inspires others to be the same. I am always delighted to learn from him, and I hope after listening to this conversation you come away feeling the same.
What happens when a society's gender biases are embedded in the new technologies that shape our daily lives? What is algorithmic bias, and how can we ensure that technology and artificial intelligence will work with us to bridge inequalities rather than deepen them even further?
Sian is a lead data scientist and analytics manager at Booz Allen Hamilton where she helps her stakeholders cut through the clutter to make better decisions, and leads a team that transforms complex problems into simple solutions. For her contributions to data science and social good, she's been awarded the 2020 Women of Color in STEM All Star award and the 2019 DCFemTech award, and was named a 2017 Prince George's County, MD 40 Under 40 honoree. FIND SIAN ONLINE LinkedIn: https://www.linkedin.com/in/allsian/ Website: https://www.sianlewis.org/ Corporate Profile: https://www.boozallen.com/e/insight/people-profiles/sian-lewis.html QUOTES [00:04:17] "When you're an immigrant, you find any form of these social enclaves. Wherever you are with people who are similar to you, who are from similar countries as you, you form tight knit communities." [00:08:29] "You know, I went to grad school. I was terrible at it. I didn't want to be there. And I actually learned that I had no interest in actually anything health care related. So I quit after great anguish, great terror. And I was like, oh, my God, what am I going to do with my life?" [00:14:08] "I love that people think that we are magical wizards that control the world. And then I get to burst people's bubble..." [00:14:31] "I also love data science because I blink and something new has come out that has fundamentally changed the way I did things. Literally every single day there's something new, there's a new package, there's a new technique, there's a new finding. There's a new paper that comes out. And I get to rethink what I learned in school. I get to rethink what I've done practically over the years. And I love that." [00:18:42] "You're hired to solve a very specific problem. And the problems are usually in three categories: How are you going to increase usage of something? How are you going to increase revenue? Or how are you going to increase engagement on something?" HIGHLIGHTS FROM THE SHOW [00:03:14] Where Sian grew up and what it was like there [00:05:30] The immigrant experience [00:06:18] What Sian was like in high school [00:08:13] The journey into data science [00:10:50] How data science is used in political science [00:14:03] What do you love most about being a data scientist? [00:15:21] Do you consider data science and machine learning to be an art or purely a hard science? [00:17:58] What role do you think being creative and curious plays in being successful as a data scientist? [00:20:57] What is a model and why is it that we even build them in the first place? [00:22:39] How can we use algorithms to build models with equity and equality? [00:28:25] How to make sure you're building a fair model [00:29:20] Some tips for feature engineering [00:35:53] Project idea for survey data [00:36:19] The importance of MLOps [00:37:51] Communicating model results with business stakeholders [00:40:17] The non-obvious skills you need for success [00:46:22] Communicating with executives [00:48:39] Don't be afraid to apply for a job just because the description looks crazy [00:52:10] Advice for women in STEM [00:55:23] How to foster diversity in data science [00:58:59] It's 100 years in the future, what do you want to be remembered for? [00:59:46] The random round Special Guest: Sian Lewis.
Books mentioned: The Period Book by Karen Gravelle and Jennifer Gravelle Song of the Lioness series by Tamora Pierce Princess Nevermore by Dian Curtis Regan Blood and Chocolate by Annette Curtis Klause Angela's Ashes by Frank McCourt Wind in the Willows by Kenneth Grahame Uncle Wiggily by Howard R. Garis Winnie the Pooh by A. A. Milne Tarzan series by Edgar Rice Burroughs Conan series by Robert E. Howard Gor series by John Norman Throne of Glass series by Sarah J. Maas A Court of Thorns and Roses series by Sarah J. Maas The Awakening by Kate Chopin The Scarlet Letter by Nathaniel Hawthorne The Grapes of Wrath by John Steinbeck Little Birds and Delta of Venus by Anaïs Nin The Sleeping Beauty Trilogy by A. N. Roquelaure (aka Anne Rice!) Crimes of Love and Philosophy in the Boudoir by the Marquis de Sade The Sadeian Woman by Angela Carter Essays Mentioned: "The Best Sex I Ever Read Was By the Famously 'Dick-Centric' Henry Miller" by Mary Gaitskill, as told to Lila Shapiro "Porn: An Effective Vehicle for Sexual Role Modeling and Education" by Nina Hartley "Exclusive: An Investigation into Algorithmic Bias in Content Policing on Instagram" Podcasts Mentioned: American Sex Podcast, Ep. 151 Fangasm/Potterotica Legal Cases, Etc. Mentioned: Attorney General v. The Book Called Tropic of Cancer (1962) Jacobellis v. Ohio (1964) A Book Named John Cleland's Memoirs of a Woman of Pleasure v. Attorney General of Massachusetts (1966) Ginzburg v. United States (1965) Attorney General v. A Book Called Naked Lunch (1966) Stanley v. Georgia (1969) Miller v. California (1973) The case against Tom Arthur and the Mr. Double website The Report of the Commission on Obscenity and Pornography (1976) The Meese Commission on Obscenity (1985) Reno v. American Civil Liberties Union (1997) SESTA (Stop Enabling Sex Traffickers Act)/FOSTA (Allow States and Victims to Fight Online Sex Trafficking Act) (2018) EARN IT (Eliminating Abusive and Rampant Neglect of Interactive Technologies) Act (2020) Instagram Community Guidelines J. Davis: The Whole Book Experience Leaves of Cha Donation-based Meditation Instagram: @leavesofcha Of Prurient Interest social media: Insta: @ofprurientinterest Twitter: @highlyprurient FB: /ofprurientinterest Litsy: @prurientinterest Email: ofprurientinterest@gmail.com Patreon: /ofprurientinterest Website: ofprurientinterest.com Kaelyn's Instagram: @lalatiburona Score by Rose Droll: @myhandsarepaws Logo by @irizofen If you like this podcast, consider becoming a patron either here on Anchor or on Patreon. You can also make a one-time donation through the website. Lastly, subscribe, rate, and review! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/ofprurientinterest/support
Episode Topic: Technology Ethics Conference 2020 - Panel 1. The Technology Ethics Center at Notre Dame convenes global thought leaders in academia, industry, non-profit organizations, and governmental agencies to develop ethical and legal frameworks that keep pace with technological development. This lecture discusses algorithmic bias in technology and the ethics and implications behind it. Featured Speakers: Solon Barocas, Principal Researcher in the New York City lab of Microsoft Research and an Assistant Professor in the Department of Information Science, Cornell University; Shaun Barry, Global Leader in the Fraud & Security Intelligence practice, SAS; Kevin Bowyer, Shubmehl-Prein Professor of Computer Science and Engineering, University of Notre Dame; Genevieve Fried, Researcher, Office of Senator Chris Coons (D-DE); Sara R. Jordan, Policy Counsel, Artificial Intelligence and Ethics at the Future of Privacy Forum; Ronald Metoyer, Associate Professor of Computer Science and Engineering, University of Notre Dame. Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: go.nd.edu/96ee34. This podcast is a part of the TEC Talks ThinkND Series titled “Algorithmic Bias: Sources and Responses”.
Episode Topic: Technology Ethics Conference 2020 - Panel 2. The Technology Ethics Center at Notre Dame convenes global thought leaders in academia, industry, non-profit organizations, and governmental agencies to develop ethical and legal frameworks that keep pace with technological development. This lecture discusses algorithmic bias in technology and the ethics and implications behind it. Featured Speakers: Kirsten Martin, William P. and Hazel B. White Professor of Technology Ethics, University of Notre Dame Mendoza School of Business; Scott Nestler, Associate Teaching Professor in the IT, Analytics, and Operations (ITAO) Department and Academic Director of the MS in Business Analytics Program, University of Notre Dame; Mutale Nkonde, 2020-2021 Fellow at the Notre Dame Institute for Advanced Study, University of Notre Dame; Francesca Rossi, IBM Fellow and the IBM AI Ethics Global Leader, IBM Corporation; Kate Vredenburgh, Assistant Professor in the Department of Philosophy, Logic and Scientific Method, London School of Economics; Michael Zimmer, Ph.D., Associate Professor in the Department of Computer Science, Director of Undergraduate Studies, Co-Director of the Interdisciplinary Data Science Major, and Director of the Graduate Data Science Certificate, Marquette University. Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: go.nd.edu/7bfb2f. This podcast is a part of the TEC Talks ThinkND Series titled “Algorithmic Bias: Sources and Responses”.
Episode Topic: Technology Ethics Conference 2020 - Keynote. The Technology Ethics Center at Notre Dame convenes global thought leaders in academia, industry, non-profit organizations, and governmental agencies to develop ethical and legal frameworks that keep pace with technological development. This lecture discusses algorithmic bias in technology and the ethics and implications behind it. Featured Speakers: Mark P. McKenna, John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center, University of Notre Dame; Cathy O'Neil, Author of the New York Times bestselling book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: go.nd.edu/d028b4. This podcast is a part of the TEC Talks ThinkND Series titled “Algorithmic Bias: Sources and Responses”.
https://www.engati.com/ Engati is the world's leading no-code, multi-lingual chatbot platform. https://open.spotify.com/show/3G0uQwPnQib22emRi9VhUg Blog link: https://engati.com/blog | Subscribe now. Tulsee Doshi, Google Product Lead - ML Fairness and Responsible AI, talks about the importance of ethics and security when it comes to AI, and how we can bring about a shift if we work on it collectively. Follow us on Facebook: http://s.engati.com/157 LinkedIn: http://s.engati.com/158 Twitter: http://s.engati.com/156 Instagram: https://www.instagram.com/getengati/ https://www.engati.com/glossary/machine-learning #EngatiCX #responsibleAI #AlgorithmicBias
The ladies discuss if rising ticket prices, streaming services and COVID-19 have killed the cinema as we know it, whether celebrities and public figures give up their right to privacy when choosing a life in the public eye, why artificial intelligence (AI) has developed a racial bias and what we can do about it, and the worst excuses they've given to avoid work and unwanted advances from men. Subscribe to our newsletter for your weekly dose of Extra Ethnic: https://mailchi.mp/c2749d9238ee/extraethnic --------------------------------------- FOLLOW THE CONVERSATION #EthnicallySpeaking Instagram: https://www.instagram.com/unitedmelaninco Facebook: https://facebook.com/unitedmelaninco Twitter: https://instagram.com/unitedmelaninco LinkedIn: https://www.linkedin.com/company/unitedmelaninco/ YouTube: https://www.youtube.com/c/UnitedMelaninGroup Get in touch with us: ethnicallyspeaking@unitedmelaningroup.com ---------------------------------------- LINKS MENTIONED IN THIS EPISODE https://unitedmelaningroup.com/es018 (Website – Show notes) https://www.bbc.co.uk/news/av/business-47336214 (Video – Netflix: Is streaming killing the cinema industry? – BBC) https://www.statista.com/statistics/285783/cinema-ticket-prices-average-annual-price-in-the-united-kingdom-uk/ (Website – Average annual cinema ticket price in the United Kingdom (UK) from 2000 to 2019 (in GBP) – Joseph Johnson) https://www.theguardian.com/film/2020/jul/12/the-future-of-film-can-cinema-survive-covid-19 (Article – The future of film: can cinema survive Covid-19? – Wendy Ide for The Guardian) https://www.bbc.co.uk/news/technology-54349538 (Article – UK passport photo checker shows bias against dark-skinned women – Maryam Ahmed for BBC News) https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey (Article – AI expert calls for end to UK use of ‘racially biased' algorithms – Henry McDonald for The Guardian) https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency (Article – Why algorithms can be racist and sexist – Rebecca Heilweil for Vox) https://metro.co.uk/2020/04/01/race-problem-artificial-intelligence-machines-learning-racist-12478025/ (Article – The race problem with Artificial Intelligence: ‘Machines are learning to be racist' – Natalie Morris for Metro) https://www.netflix.com/title/81254224 (Documentary – The Social Dilemma – Netflix) https://www.businessinsider.com/funny-excuses-people-have-used-to-call-in-sick-2015-10?r=US&IR=T (Article – The 19 most ridiculous excuses people have used to call in sick – Jacquelyn Smith and Erin McDowell for Business Insider) ---------------------------------------- Each week join Anissa, Luanda Yasmin and Sophie Hannah, three smart, curious and opinionated highly-melanated women, as they discuss everything from current affairs to pop culture, and everything in between. No subject is off limits for these ladies, especially when it comes to issues affecting British communities of colour. Get ready to laugh, learn and liberate your mind, because if there's one thing you can guarantee, it's that the Ethnically Speaking ladies will be giving it to you straight! ---------------------------------------- Music by GC
Do algorithms run the world? Nowadays we measure the amount of data we generate in zettabytes - that's 1 followed by 21 zeroes. This data, in turn, powers algorithms that are getting more and more sophisticated at predicting our behaviour, and are making ever more decisions for us. What does this mean for our society, privacy, and even our inner selves? Plus, in the news, the science - or lack of it - behind the latest COVID "rule of six" guidelines; the Arctic ice shelf that's lost a Manchester-sized chunk of itself; and the whales that ended up tens of kilometres up an inland river... Like this podcast? Please help us by supporting the Naked Scientists
How can we reduce data discrimination & algorithmic bias that perpetuate gender and racial inequalities? In partnership with All Tech is Human we present this Livestreamed conversation featuring Safiya Noble (Associate Professor at the University of California, Los Angeles (UCLA) in the Department of Information Studies and author of Algorithms of Oppression: How Search Engines Reinforce Racism) and Meredith Broussard (Associate Professor at the Arthur L. Carter Journalism Institute of New York University and the author of Artificial Unintelligence: How Computers Misunderstand the World). This conversation is moderated by All Tech Is Human's David Ryan Polgar. The organizational partner for the event is TheBridge. The conversation does not stop here! For each of the episodes in our series with All Tech is Human, you can find a detailed “continue the conversation” page on our website radicalai.org. For each episode we will include all of the action items we just debriefed as well as annotated resources that were mentioned by the guest speakers during the livestream, ways to get involved, relevant podcast episodes, books, and other publications.
In this episode, Lloyd talks about ICED(AI)'s pledge to provide $100,000 worth of pro bono consulting hours to small businesses and individuals who have been impacted by discrimination and social unrest. The conversation then focuses on the ways in which human biases seep into algorithmic decision-making, as well as the impact of this phenomenon on society. You can read more about the pledge here: https://bit.ly/ICEDAI_Pledge

Episode Guide:
2:02 - The ICED(AI) Pledge
3:20 - Intro to Algorithmic Bias
6:40 - Racial Bias in Healthcare Algorithms
10:18 - Biased Training Data
12:10 - The Orwellian Future of Facial Recognition
14:51 - Algorithmic Accountability
16:04 - Disparate Algorithmic Impact
18:04 - The Curious Case of Car Insurance
19:56 - Effective Altruism & Challenging Assumptions
24:29 - A North Korean Anecdote
29:44 - The Need for Compassion & Empathy

More Info: Visit us at aiexperience.org
Brought to you by ICED(AI)
Host - Lloyd Danzig
ALGORITHMIC BIAS: AI Traps and Possible Escapes
Caroline Sinders (Machine Learning Designer/User Researcher, Artist, Digital Anthropologist, USA/DE) and Sarah Grant (Media Artist and Educator, Radical Networks, USA/DE) in conversation with Ruth Catlow (Co-Founder and Co-Artistic Director, Furtherfield, UK). Algorithms are not neutral and unbiased; instead, they often reflect, reinforce and automate the current and historical biases and inequalities of society, such as social, racial and gender prejudices. This panel frames the issue and aims to discuss some possible escapes. Caroline Sinders discusses what an intersectional feminist AI could look like, and how we could get it. Sarah Grant organises Radical Networks, a community event and arts festival for critical investigations in telecommunications. She will explore how the biases and behaviours repeated across the Internet could become patterned into, and spread through, AI systems.
ACTIVATION: Collective Strategies to Expose Injustice. The Art of Exposing Injustice, Part 4. The 18th conference of the Disruption Network Lab: www.disruptionlab.org/activation
Photo credit: Maria Silvano
In this episode, Bob & Kevin kick things off by discussing all the various streaming services that are popping up as fast as JavaScript libraries were a couple of years ago. After the break, they get into the meat of the episode: algorithmic bias and the recent accusations against the Apple credit card and Goldman Sachs. Feel free to reach out to us on Twitter at https://twitter.com/bobandkevinshow and let us know what you think. If you listen on Apple Podcasts, please consider giving us a star rating and/or leaving us a review - it is greatly appreciated. Thanks for listening! - Bob & Kevin
We all love our pets and think of them as part of our families. Even though we might love them as much as (or more than) we love people, the legal system does not grant them the same protections. That doesn't mean animals have no rights; in fact, there are many specific laws designed to protect them. In this episode of The Geek in Review, we bring on four experts in researching animal law, not just in the US, but worldwide. We talk with the following members of the American Association of Law Libraries' Animal Law Caucus:
Alex Zhang - Law Library Director and Professor of Practice at Washington and Lee University School of Law
Stacey Gordon Sterling - Law Library Director and Professor of Law - Alexander Blewett III School of Law at the University of Montana
Katie Ott - Reference Librarian - Robert Crown Law Library at Stanford University
Sarah Slinger - Reference Librarian and Lecturer at Law - University of Miami Law Library
From which "parent" gets the pet in a divorce, to how an orangutan was granted "personhood" in Argentina, to animal testing on cute beagles (yes... that's still going on in the US), our experts from the Animal Law Caucus cover these issues and more.
Information Inspirations
There are plenty of tech opportunities in America, but one place where they remain largely unrealized is on Native American reservations. The Make Me Smart podcast talks with a Native Financial Officer about the upcoming Wiring the Rez conference.
One Texas federal judge asks that, if you submit a brief more than 10 pages long, you also provide a recording so he can listen to it rather than read it. Sounds like an opportunity for some talented legal podcasters with better voices than Greg.
Despite some of the best efforts to make things better, even allies can come up short when it comes to bias in the workplace. One leader in the social justice non-profit space shares her story, and it mirrors some of the same situations we see in the legal industry.
To understand algorithmic bias, you must first understand the different types of discrimination and how they apply to that bias. It's very complicated, but here's a primer to get you started.
Listen, Subscribe, Comment
Contact us anytime by tweeting us at @gebauerm or @glambert. Or, you can call The Geek in Review hotline at 713-487-7270 and leave us a message. You can email us at geekinreviewpodcast@gmail.com. As always, the great music you hear on the podcast is from Jerry David DeCicca.
Are machines racist? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated. But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no. About the author: Yaël Eisenstat is a former CIA officer, National Security Advisor to Vice President Biden, and CSR leader at ExxonMobil. (WIRED Opinion)