American computer scientist and digital activist
The grave consequences artificial intelligence poses aren't 'potential' — they are happening now, warns MIT researcher Joy Buolamwini. She argues that the encoded discrimination embedded in AI systems — racial bias, sex and gender bias, and ableism — poses unprecedented threats to humankind. Buolamwini has been at the forefront of artificial intelligence research and encourages everyone to join in the fight for "algorithmic justice." Her book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, uncovers the existential danger produced by Big Tech. "AI should be for the people and by the people, not just the privileged few."
In this special episode, Jackye Clayton joins Bill Banham to preview UNLEASH America 2025, an event she describes as offering key solutions to the most pressing workplace challenges. UNLEASH America Conference & Exhibition, one of the fastest-growing HR events in the world, is a place where global HR leaders come to do business and discover inspirational stories that change the way organizations think about HR and innovation. Jackye's passion for improving work experiences shines through as she shares her excitement for this Las Vegas conference happening May 6-8 at Caesars Palace. "What keeps me up at night really is the state of the world of work," she explains. "We all need jobs, we're all trying to make a living, we all want to take care of our family, and so I really want it to not suck." This refreshingly honest perspective frames her anticipation for the event's three major focus areas. First, the conference tackles AI in HR, examining both its promise and pitfalls. Jackye highlights Dr. Joy Buolamwini, founder of the Algorithmic Justice League, who will address algorithmic bias and preventing inequality in AI strategies. Second, human-centered leadership takes center stage, ensuring "that the technology serves humanity, not that we are serving the technology." Finally, inclusive talent strategies return to prominence with panels like "DEI at a Crossroads" featuring representatives from Alcon and the Canadian Olympic Committee. Additional highlights include a panel on redefining talent acquisition with remote work, featuring Active, Atlassian, and Toshiba. The massive UNLEASH expo hall will showcase cutting-edge HR technologies and innovative startups that represent "the future of HR and talent acquisition." If you're attending UNLEASH America, Jackye invites you to connect with her there—she's eager to meet fellow HR professionals passionate about transforming the world of work.
Support the show. Feature Your Brand on the HRchat Podcast: The HRchat show has had hundreds of thousands of downloads and is frequently listed as one of the most popular global podcasts for HR pros, talent execs, and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority, and freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score. Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here. Follow us on LinkedIn. Subscribe to our newsletter. Check out our in-person events.
In this special, Piek speaks with Michel van Leeuwen, Director of AI at the Ministry of Justice and Security, about what it means to him to be a technorealist, how to demystify AI, and what inspires him in his work. Isis joins the conversation. The following names come up in this episode: Joy Buolamwini (computer scientist, activist); Edmund Burke (philosopher, politician); Immanuel Kant (philosopher); Barack Obama (former president of the United States); Che Guevara (revolutionary, guerrilla leader). These publications are mentioned: Black-out: morgen is het te laat – Marc Elsberg (2012); Hyperion – Dan Simmons (1989). And this documentary: Black-out (2024). This special was recorded on November 12 during the Conferentie Digitale Rechtsstaat (Digital Rule of Law Conference). The conference was organized by the Ministry of Justice and Security and the Ministry of Asylum and Migration, at DeFabrique in Utrecht. Host: Piek Knijff. Editorial: Team Filosofie in actie. Studio and editing: De Podcasters. Tune: Uma van Wingerden. Artwork: Hans Bastmeijer – Servion Studio. Want to talk further, or would you like a special of our podcast for your own organization? You can! Get in touch via info@filosofieinactie.nl. Want to know more about Filosofie in actie and our work? Visit our website, www.filosofieinactie.nl, or follow our LinkedIn page.
What happens when technology isn't held accountable? Dr. Joy Buolamwini, founder of the Algorithmic Justice League, is here to guide us through AI's power, pitfalls, and potential. From exposing bias in facial recognition to championing ethical AI, Dr. Joy is leading the charge to protect what makes us human in a world dominated by machines. As a Rhodes Scholar, MIT researcher, and author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Dr. Joy's groundbreaking work has reshaped the conversation on AI ethics. Her viral TED Talk and the Emmy-nominated documentary Coded Bias highlight the real-world consequences of unchecked technology and why ethical AI is essential for everyone. AI isn't inherently good or evil—it's a tool. How we use it defines its impact, and being human isn't just a feature—it's the whole point. Connect with Dr. Joy: Website: www.Unmasking.ai Algorithmic Justice League: https://www.ajl.org/ Book: https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-dr-joy-buolamwini/ Poet of Code: https://www.poetofcode.com/ TSA Facial Scan Opt Out: https://www.ajl.org/campaigns/fly Related Podcast Episodes: 202 / Building Your Email Lists & Websites with Brittni Schroeder 172 / Boomers to Gen Z - Understanding Generational Differences with Kim Lear Share the Love: If you found this episode insightful, please share it with a friend, tag us on social media, and leave a review on your favorite podcast platform!
The future is here, and it looks like deepfakes of real people saying fake things, chatbots claiming to have human-level consciousness, and evil robots ready to take everyone's jobs. Artificial intelligence, while only just recently becoming widespread and accessible, is transforming our world in ways that make understanding it more crucial than ever. Joining me for today's important conversation on the ethical implications of AI is Dr. Joy Buolamwini. She is the founder of the Algorithmic Justice League, an award-winning researcher, and poet of code. Dr. Joy is the author of the national best-selling book, Unmasking AI: My Mission to Protect What is Human in a World of Machines. During our conversation, we examined some of the basic definitions, players, and concerns associated with AI; how biases are transferred in the creation of AI and then reflected in its application; and lastly, the specific challenges AI poses particularly for communities of color. About the Podcast: The Therapy for Black Girls Podcast is a weekly conversation with Dr. Joy Harden Bradford, a licensed psychologist in Atlanta, Georgia, about all things mental health, personal development, and all the small decisions we can make to become the best possible versions of ourselves. Resources & Announcements: Grab your copy of Sisterhood Heals. Where to Find Dr. Buolamwini: Support the Algorithmic Justice League. Read 'Unmasking AI: My Mission To Protect What Is Human In A World Of Machines'. Instagram. Website. Stay Connected: Is there a topic you'd like covered on the podcast? Submit it at therapyforblackgirls.com/mailbox. If you're looking for a therapist in your area, check out the directory at https://www.therapyforblackgirls.com/directory. Take the info from the podcast to the next level by joining us in the Therapy for Black Girls Sister Circle: community.therapyforblackgirls.com. Grab your copy of our guided affirmation and other TBG Merch at therapyforblackgirls.com/shop.
The hashtag for the podcast is #TBGinSession. Make sure to follow us on social media: Twitter: @therapy4bgirls; Instagram: @therapyforblackgirls; Facebook: @therapyforblackgirls. Our Production Team: Executive Producers: Dennison Bradford & Maya Cole Howard; Senior Producer: Ellice Ellis; Producer: Tyree Rush; Associate Producer: Zariah Taylor. See omnystudio.com/listener for privacy information.
Enter our giveaway with Breathless Riviera Cancun Resort & Spa® for a chance to win a five-day, four-night Unlimited-Luxury® stay for two adults, plus airfare credit for two, capped at $500 each. The giveaway closes on October 4, 2024! https://girlboss.com/pages/girlboss-breathless-riviera-giveaway Never miss an episode, subscribe to the Girlboss Radio podcast. AI is everywhere. So what does that mean for jobs? Are they all going to be obsolete in the next 10 years? And how can we prepare ourselves—and our careers—for the future? In this episode, Dr. Joy Buolamwini answers all of your most pressing AI questions. She is the founder of the Algorithmic Justice League, an award-winning researcher, and a “poet of code.” As the author of the national best-selling book Unmasking AI, Dr. Joy has dedicated her career to uncovering and addressing racial and gender biases in artificial intelligence. With degrees from both Oxford and MIT, and recognition as one of Forbes' Top 50 Women in Tech, she shares invaluable insights and advice for navigating the complexities of this ever-changing technology. In the age of AI, protecting ourselves starts with knowing our rights and options—like the right to refuse facial recognition technology at airports and identifying which AI tools are ethical (and which ones aren't—looking at you, ChatGPT). In our conversation, Avery and Dr. Joy dive into the positive sides of AI and the future of work, exploring how it can enhance feedback, boost efficiency, and streamline summarizing information. But she also emphasizes the importance of being cautious about how we adopt AI, both in our workplaces and beyond. Dr. Joy shines a light on the real consequences of flawed systems, particularly for marginalized communities, leading to wrongful arrests, misidentifications, and discriminatory hiring. She highlights how crucial representation and storytelling are in raising awareness about the limitations and biases of AI. 
So, while AI isn't going anywhere anytime soon, we hope this conversation eases your anxiety, and offers you the knowledge and perspective needed to engage thoughtfully with the technology shaping our world. It's better to prepare for the future than try to avoid it.
AI is changing our lives every single day. To help you keep up, AI scientist, entrepreneur, and investor Dr. Rana el Kaliouby offers a definitive guide to the forefront of this transformative technology with her new podcast, Pioneers of AI. To mark the show's launch, Bob Safian welcomes Rana back to Rapid Response. We introduce the first episode of Pioneers of AI, featuring Dr. Joy Buolamwini, an expert in AI and algorithmic bias, who shares the story behind founding the Algorithmic Justice League and donning the moniker "the poet of code." Subscribe to the Pioneers of AI podcast feed: https://pioneersof.ai/subscribe See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
During Westbound Equity Partners' Annual Summit, Sixth Street Co-Founder and Co-President David Stiepleman sat down with award-winning AI researcher and best-selling author Dr. Joy Buolamwini for a conversation about, in the words of her Algorithmic Justice League, getting the world to "remember that who codes matters, how we code matters, and that we can code a better future." On this episode of It's Not Magic, you'll hear how, in her office at MIT, Dr. Buolamwini stumbled on the realization that nascent AI systems weren't neutral and could prefer, and exclude, people based on how we look. We discuss Dr. Buolamwini's journey from academia, to discovering ways to combine hard research and art, to becoming a Sundance documentary star, to walking the halls of power, to leading the movement for equitable and accountable AI. We also discuss how, if AI eliminates entry-level drudgery, we may be living in the "age of the last masters." We are proud to be a founding strategic partner of Westbound Equity Partners, an early-stage investment firm deploying financial and social capital to build great companies and close gaps for underrepresented talent. The conversation took place this summer at the Westbound Equity Partners Summit. Thank you to the Westbound Equity team for having us and to Dr. Buolamwini for the important and timely discussion. Note: Westbound Equity Partners was formerly known as Concrete Rose Capital. Hosted on Acast. See acast.com/privacy for more information.
Humans hallucinate. Algorithms lie. At least, that's one difference that Joy Buolamwini and Kyle Chayka want to make clear. When ChatGPT tells you that a book exists when it doesn't – or professes its undying love – that's often called a "hallucination." Buolamwini, a computer scientist, prefers to call it "spicy autocomplete." But not all algorithmic errors are as innocuous. So on today's show, we get into: How do algorithms work? What are their impacts? And how can we speak up about changing them? This is a shortened version of Joy and Kyle's live interview, moderated by Regina G. Barber, at this year's Library of Congress National Book Festival. If you liked this episode, check out our other episodes on facial recognition in Gaza, why AI is not a silver bullet, and tech companies limiting police use of facial recognition. Interested in hearing more technology stories? Email us at shortwave@npr.org — we'd love to consider your idea for a future episode! Learn more about sponsor message choices: podcastchoices.com/adchoices NPR Privacy Policy
The pandemic, flavor mixing, librarians, and artificial intelligence. Christiann Gibeau, head of adult services at Troy Public Library, is back with her monthly recommendations of new nonfiction books to read. First we hear about a scavenger hunt to find images around Troy. And then come the books: "2020: One City, Seven People, and the Year Everything Changed" (Eric Klinenberg, 2024); "The Flavor Thesaurus: More Flavors: Plant-Led Pairings, Recipes, and Ideas for Cooks" (Niki Segnit, 2023); "The Secret Lives of Booksellers and Librarians" (James Patterson & Matt Eversmann, 2024); and "Unmasking AI: My Mission to Protect What is Human in a World of Machines" (Joy Buolamwini, 2023). For more details on books and activities, visit www.thetroylibrary.org. To find other libraries in New York State, see https://www.nysl.nysed.gov/libdev/libs/#Find. Produced by Brea Barthel for Hudson Mohawk Magazine.
We have plenty to catch up on! Agenda: Sussex Updates; Meghan Launches American Riviera Orchard (Fourth Anniversary of the Freedom Flight); Vogue Defends Meghan; Prince Harry Honours The Diana Awards' Legacy Award Recipient; Dr. Joy Buolamwini is Awarded the 2024 NAACP-Archewell Foundation Digital Civil Rights Award!; The Rusty Royals Are Still Struggling; Major News Agency Says Kensington Palace is ABSOLUTELY NOT a Trusted Source; The World is Now Calling Another Photo of Kate's Photoshopped; News Sources are Shining a Bright Light on Rose Hanbury.
Dr. Joy Buolamwini visits Town Hall Seattle (in conjunction with Third Place Books) to discuss her 2023 publication, Unmasking AI: My Mission To Protect What Is Human in a World of Machines. Dr. Joy is a Rhodes Scholar, as in African Race Soldier Cecil Rhodes, and was the focal point of the 2020 documentary, Coded Bias, which examines how the System of White Supremacy is manifest in the rapidly evolving field of artificial intelligence. She discussed some of the more recent cases, including the arrest of privileged black male, 42-year-old Robert Julian-Borchak Williams. The attempted black father of two was snatched from his front lawn and arrested in front of his offspring - just like the character Maverick in The Hate U Give. Detroit, Michigan enforcement officers used facial recognition technology to "identify" Williams as a watch thief. Turns out the technology makes a lot of "false positives" when it comes to identifying dark faces. Dr. Joy also mentioned the case of Porcha Woodruff, who also lives in Detroit. Just like Mr. Williams, Detroit enforcement officials used facial recognition technology to pin a carjacking caper on Woodruff. Officers didn't think Woodruff being 8 months pregnant would hinder her ability to loot vehicles. #TechnologyOfWhitePower #TheCOWS15Years INVEST in The COWS – http://paypal.me/TheCOWS Cash App: https://cash.app/$TheCOWS CALL IN NUMBER: 605.313.5164 CODE: 564943#
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My cover story in Jacobin on AI capitalism and the x-risk debates, published by Garrison on February 13, 2024 on The Effective Altruism Forum. Google cofounder Larry Page thinks superintelligent AI is "just the next step in evolution." In fact, Page, who's worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are "speciesist" and "sentimental nonsense." In July, former Google DeepMind senior scientist Richard Sutton - one of the pioneers of reinforcement learning, a major subfield of AI - said that the technology "could displace us from existence," and that "we should not resist succession." In a 2015 talk, Sutton said, suppose "everything fails" and AI "kill[s] us all"; he asked, "Is it so bad that humans are not the final form of intelligent life in the universe?" This is how I begin the cover story for Jacobin's winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren't part of it. Whether you're new to the topic or work in the field, I think you'll get something out of it. I spent five months digging into the AI existential risk debates and the economic forces driving AI development. This was the most ambitious story of my career - it was informed by interviews and written conversations with three dozen people - and I'm thrilled to see it out in the world.
Some of the people include: deep learning pioneer and Turing Award winner Yoshua Bengio; pathbreaking AI ethics researchers Joy Buolamwini and Inioluwa Deborah Raji; reinforcement learning pioneer Richard Sutton; cofounder of the AI safety field Eliezer Yudkowsky; renowned philosopher of mind David Chalmers; Santa Fe Institute complexity professor Melanie Mitchell; and researchers from leading AI labs. Some of the most powerful industrialists and companies are plowing enormous amounts of money and effort into increasing the capabilities and autonomy of AI systems, all while acknowledging that superhuman AI could literally wipe out humanity: Bizarrely, many of the people actively advancing AI capabilities think there's a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to "human extinction or [a] similarly permanent and severe disempowerment" of humanity. Just months before he cofounded OpenAI, Sam Altman said, "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies." This is a pretty crazy situation! But not everyone agrees that AI could cause human extinction. Some think that the idea itself causes more harm than good: Some fear not the "sci-fi" scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrust biased, brittle, and confabulating systems with too much responsibility, opening a more pedestrian Pandora's box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates - often labeled "AI ethics" - tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.
Others buy the idea of transformative AI, but think it's going to be great: A third camp worries that when it comes to AI, we're not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessen agree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far mo...
Those horrifying deep fake AI images of Taylor Swift that circulated on social media last week are a threat to all of us. It's time for real cultural and legislative change. Check out Dr. Joy Buolamwini's Algorithmic Justice League: https://www.ajl.org/ Listen to Dr. Joy Buolamwini's episode: https://omny.fm/shows/there-are-no-girls-on-the-internet/biden-s-executive-order-on-ai-protects-privacy-and 404 Media's reporting on Microsoft: https://www.404media.co/ai-generated-taylor-swift-porn-twitter/ Teen Marvel star speaks out about sexually explicit deepfakes: ‘Why is this allowed?' https://www.nbcnews.com/tech/misinformation/teen-marvel-star-xochitl-gomez-speaks-deepfake-rcna134753 See omnystudio.com/listener for privacy information.
(This conversation was originally broadcast on November 27, 2023.) Tom's guest is Dr. Joy Buolamwini, whose ground-breaking work in the field of artificial intelligence (AI) led her to form an organization called the Algorithmic Justice League, with which she leads the crusade against the harms of AI. Dr. Buolamwini recently published a book that tells her remarkable story and how she came to understand the shortcomings and dangers of AI. The book is a clarion call to world leaders, tech entrepreneurs, and scholars to address the deficits in AI and regulate this powerful technology so it cannot be deployed unfairly and illegally. Email us at midday@wypr.org, tweet us: @MiddayWYPR, or call us at 410-662-8780.
It's time for our annual predictions episode! Kara and Scott share their 2024 predictions on politics, stocks, China, Google and more. Plus, some Friend of Pivot predictions from Jen Psaki, Mike Birbiglia, Fei-Fei Li, Bill Cohan, Dr. Joy Buolamwini, and Matt Belloni. Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices
This is an excerpt from the full episode, "Protecting What Is Human in Artificial Intelligence: With Dr. Joy Buolamwini." Michael speaks with MIT researcher Dr. Joy Buolamwini about her book, "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." The pair discuss who holds the power within AI and tech, how AI creates algorithmic biases, the impact of AI on elections, the consequences of facial identification, and the ousting of OpenAI CEO Sam Altman. Check out the book here: https://www.amazon.com/Unmasking-AI-Mission-Protect-Machines/dp/0593241835 If you enjoyed this podcast, be sure to leave a review or share it with a friend! Follow Dr. Joy Buolamwini @jovialjoy. Follow Michael @MichaelSteele. Follow the podcast @steele_podcast. This show is part of the Spreaker Prime Network; if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/3668522/advertisement
Welcome to another episode of the Events Demystified Podcast with your host, Anca Platon Trifan! Today, we're thrilled to have Lori Mazor, the visionary CEO & Founder of SYNTHETIVITY, as our special guest. In this episode, we're diving deep into the fascinating intersection of Generative AI and the future of creativity, exploring its profound impact on the event industry and beyond.
To many of us, it might seem like recent developments in artificial intelligence emerged out of nowhere to pose unprecedented threats to humanity. But to Dr. Joy Buolamwini, a trailblazer in AI research, this moment has been a long time in the making. Dr. Buolamwini has spent decades pondering the many implications of an AI-powered world—all the potential benefits, detriments, and injustices. But Dr. Buolamwini hasn't simply explored the potential for harm by AI; she has researched and identified real-world AI harm that has already been done by some of the world's largest tech companies. In graduate school, she led groundbreaking research at MIT's Future Factory that exposed widespread racial and gender bias in AI services from tech giants like Microsoft, IBM, and Apple. In her upcoming book, Unmasking AI, Dr. Buolamwini takes readers through the remarkable journey of how she uncovered what she calls "the coded gaze"—the evidence of encoded discrimination and exclusion in tech products—and how she galvanized the movement to prevent AI harms by founding the Algorithmic Justice League. Dr. Buolamwini has educated President Biden's administration and international leaders at the World Economic Forum and the United Nations on the importance of rectifying algorithmic harms. Her work has been featured in Time, The New York Times, and the Netflix documentary Coded Bias. Now, she shares her story with us. Join us to hear from a pioneer of algorithmic justice as she talks with OpenAI CEO Sam Altman and Wall Street Journal technology journalist Deepa Seetharaman, explaining Buolamwini's belief that computers are reflections of both the aspirations and the limitations of the people who create them. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of "B The Way Forward," host Brenda Darden Wilkerson is joined live, onstage, by artificial intelligence expert Dr. Joy Buolamwini. There's a reason why she was named one of TIME's 100 Most Influential People in AI in 2023. Founder and Artist-in-Chief of the Algorithmic Justice League, Dr. Buolamwini questions the responsibility that comes with new technology and sounds the alarm about the implications for our future. Think about all the moments you are confronted with artificial intelligence at places like the airport or on your cell phones. Dr. Buolamwini, who recently authored "Unmasking AI: My Mission to Protect what is Human in a World of Machines," is an essential voice uncovering the existential risks that come with AI technology. She'll show you how real-world data meant for efficiency and convenience can end up being used against you and how her work at the Algorithmic Justice League encourages everyone to speak up if something feels wrong. "You think about training new models on synthetic data because maybe we started resisting and we said, no more scraping our data. Then you get even more skewed data sets going into the next generation. And I'm using images as one example, but you can think of this with text, you can think of it with a voice and so forth. So I think it's really important that one, we don't set our current status quo as the target, higher aspirations, right. But also, knowing that the systems as they exist are making things worse." For more of Dr. Buolamwini... Newsletter - poetofcode.substack.com X - @jovialjoy LinkedIn - /buolamwini --- At AnitaB.org, we envision a future where the people who imagine and build technology mirror the people and societies for whom they build it. Find out more about how we support women, non-binary individuals, and other underrepresented groups in computing, as well as the organizations that employ them and the academic institutions training the next generations.
--- Connect with AnitaB.org Instagram - @anitab_org Facebook - /anitab.0rg LinkedIn - /anitab-org On the web - anitab.org --- Our guests contribute to this podcast in their personal capacity. The views expressed in this interview are their own and do not necessarily represent the views of Anita Borg Institute for Women and Technology or its employees (“AnitaB.org”). AnitaB.org is not responsible for and does not verify the accuracy of the information provided in the podcast series. The primary purpose of this podcast is to educate and inform. This podcast series does not constitute legal or other professional advice or services. --- B The Way Forward Is… Produced by Dominique Ferrari and Paige Hymson Sound design and editing by Neil Innes and Ryan Hammond Mixing and mastering by Julian Kwasneski Associate Producer is Faith Krogulecki Executive Produced by Dominique Ferrari, Stacey Book, and Avi Glijansky for Riveter Studios and Frequency Machine Executive Produced by Arlan Hamilton for Arlan Was Here Executive Produced by Brenda Darden Wilkerson for AnitaB.org Podcast Marketing from Lauren Passell and Arielle Nissenblatt with Riveter Studios and Tink Media in partnership with Carolyn Schneller and Coley Bouschet at AnitaB.org Photo of Brenda Darden Wilkerson by Mandisa Media Productions Photo of Dr. Joy Buolamwini by Naima Green For more ways to be the way forward, visit AnitaB.org
This week on Notes from America, host Kai Wright talks with Dr. Joy Buolamwini, a computer scientist who uses art and research to illuminate the social implications of artificial intelligence. The self-described “poet of code” warns that A.I. could write the biases of today's world into algorithms and even regress the progress of U.S. civil rights in everything from medicine to loan applications and police surveillance. Kai and Dr. Buolamwini take calls about listener fears around A.I. and address which concerns we should focus on. Plus, she shares her latest poem on the implications of A.I. in war as the crisis in the Middle East continues. Tell us what you think. Instagram and X (Twitter): @noteswithkai. Email us at notes@wnyc.org. Send us a voice message by recording yourself on your phone and emailing us, or record one here. Notes from America airs live on Sundays at 6 p.m. ET. The podcast episodes are lightly edited from our live broadcasts.
Computer scientist Joy Buolamwini coined the term the "coded gaze" while in grad school at MIT. As a brown-skinned woman, the facial recognition software program she was working on couldn't detect her face until she put on a white mask. She's written a book about the potential harms of AI — which include the social implications of bias and how it affects everyone. Also, we'll talk about UFO conspiracy theories with journalist Garrett Graff. He talks with us about how they've led to other conspiracy theories about the government.And Justin Chang will review the latest film by Japanese animator Hayao Miyazaki, The Boy and the Heron.
Michael speaks with MIT researcher Dr. Joy Buolamwini about her book, "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." The pair discuss who holds the power within AI and tech, how AI creates algorithmic biases, the impact of AI on elections, the consequences of facial identification, and the ousting of OpenAI CEO Sam Altman. Check out the book here: https://www.amazon.com/Unmasking-AI-Mission-Protect-Machines/dp/0593241835 If you enjoyed this podcast, be sure to leave a review or share it with a friend! Follow Dr. Joy Buolamwini @jovialjoy. Follow Michael @MichaelSteele. Follow the podcast @steele_podcast. This show is part of the Spreaker Prime Network; if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/3668522/advertisement
Computer scientist and AI expert Joy Buolamwini warns that facial recognition technology is riddled with the biases of its creators. She is the author of Unmasking AI and founder of the Algorithmic Justice League. She coined the term "coded gaze," a cousin to the "white gaze" or "male gaze." She says, "This is ... about who has the power to shape technology and whose preferences and priorities are baked in — as well as also, sometimes, whose prejudices are baked in."Also, we remember former First Lady Rosalynn Carter, who died at age 96 last week. She spoke with Terry Gross in 1984.
Tom's guest is Dr. Joy Buolamwini, whose ground-breaking work in the field of artificial intelligence led her to found an organization called the Algorithmic Justice League, through which she leads the crusade against the harms of AI. Dr. Joy has just published a book that tells her remarkable story and elucidates how she came to understand the shortcomings and dangers of AI. The book is a clarion call to world leaders, tech entrepreneurs and scholars to address the deficits in AI and regulate this powerful technology so it cannot be deployed unfairly and illegally. Email us at midday@wypr.org, tweet us @MiddayWYPR, or call us at 410-662-8780.
Kara shares her latest reporting on Sam Altman and his decision to go to Microsoft, then she and Scott discuss what's next for OpenAI. Plus, Elon Musk threatens a "thermonuclear lawsuit," and X CEO Linda Yaccarino resists calls to resign. Our Friend of Pivot is Dr. Joy Buolamwini, founder of the Algorithmic Justice League, and author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." Dr. Joy gives her take on OpenAI and the Altman ouster, and also discusses her mission to root out bias in AI.Follow Joy at @jovialjoy Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Many of us are familiar with the biases baked into modern technology, but these are heightened with AI, warns Dr. Joy Buolamwini, who has been called the "conscience of the AI revolution." A computer scientist and digital activist who holds a PhD from MIT, Dr. Buolamwini exposed how facial recognition technology failed to recognize darker skin tones across a range of commonly used apps. We talk to Dr. Buolamwini about her new book "Unmasking AI," which chronicles her efforts to bring humanity back to technology and her fight for "algorithmic justice."
Silicon Valley markets itself as the place where futures are born, and yet tech corporations have no real understanding of where our civilizations are headed. We are wrapping up our Silicon Valley vs. Science Fiction series with some final thoughts on why this might be. Then we talk to AI developer, ethicist, and poet Dr. Joy Buolamwini, founder of the Algorithmic Justice League and author of a new book called Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Show notes: www.ouropinionsarecorrect.com/shownotes
Dr. Joy Buolamwini has been raising the alarm about the harms of AI for years through her research and advocacy. People are finally starting to listen.
Check out Dr. Buolamwini's new book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines: https://www.amazon.com/Unmasking-AI-Mission-Protect-Machines/dp/0593241835
Dr. Buolamwini runs the Algorithmic Justice League. Find out more about her work: https://www.ajl.org/
See omnystudio.com/listener for privacy information.
The Globe's Brian Bergstein will be joining Say More about once a month to host conversations about artificial intelligence, with the aim of asking big questions and getting past the hype. This week, Brian speaks to computer scientist Joy Buolamwini about her new book, "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." Buolamwini says technology that has a harder time recognizing Black faces should not be used by our government, and that the solution to AI bias is not "more AI." She also talks about the organization she founded, the Algorithmic Justice League, and what she calls the "poetry of code." Email us at saymore@globe.com.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In 2017, then-MIT graduate student Joy Buolamwini shared the challenge of getting facial analysis software to notice her. “Hi camera, can you see my face? You can see her face. What about my face?” she asks the program as she stares at her webcam. It couldn’t “see” her until she wore a white mask. The reason, argued Buolamwini, who is Black, is because of algorithmic bias. Fighting it is one goal of the executive order on AI unveiled Monday by the Biden administration. Buolamwini, author of the new book “Unmasking AI,” told Marketplace’s Lily Jamali the executive order is a step in the right direction.
Coded bias, intersectionality in AI, and computer vision: Founder of the Algorithmic Justice League Joy Buolamwini talks to host Jon Krohn about the impact of exclusion and inclusion in datasets, the need to address intersectionality when identifying racial, age, or gender-based prejudice in machine learning tools, protections for artists and creative practitioners against AI, and the role that AI may have in combating systemic racism. This episode is brought to you by Gurobi (https://gurobi.com/sds), the Decision Intelligence Leader, and by CloudWolf (https://www.cloudwolf.com/sds), the Cloud Skills platform. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information. In this episode you will learn: • What coded bias is [06:49] • The problem with bias in machine learning datasets [18:41] • The Incoding Movement [42:08] • About the Pilot Parliaments Benchmark [52:07] • Ethics and the future of AI [1:20:10] • The potential for AI to end systemic racism [1:32:59] Additional materials: www.superdatascience.com/727
Facial recognition was made for convenience, but some have discovered that it may be encoded with deeply ingrained biases. Joy Buolamwini is the author of "UNMASKING AI: My Mission to Protect What Is Human in a World of Machines." She tells Tavis why there must be a call to action for ethical artificial intelligence.
Cruise's robotaxi permit was immediately suspended in San Francisco after the company failed to report the whole story about a pedestrian-involved accident, and two days later the company suspended service everywhere.
Facebook and Instagram launched ad-free premium subscriptions in Europe. And X, formerly Twitter, launched Basic and Premium+ subscriptions here in the U.S.
Multiple states are suing Meta because of beauty filters' effects on children's development.
And Dr. Joy Buolamwini, the Biden Administration, and the United Nations all feel some type of way about AI, its safeguards, and its proliferation.
Link to Show Notes
Hosted on Acast. See acast.com/privacy for more information.
Today, we have a truly remarkable guest. Joining us today is the brilliant Dr. Joy Buolamwini, a computer scientist, digital activist and self-described "Poet of Code" whose journey began at that Temple of Technology, the Massachusetts Institute of Technology, or MIT for short. She's the founder of the Algorithmic Justice League, a place where art and activism intersect to illuminate the social implications of AI. She also has a book dropping on Halloween called, wait for it, Unmasking AI. How fitting is that for Halloween?
But her story isn't just about her prestigious academic credentials; it's about the extraordinary transformation her creative journey has taken. In today's conversation, she reveals how her quest to create a digital filter, one that could change the reflection of herself in a mirror, led to a profound exploration of technology's hidden biases. Be sure to share some of your thoughts on today's episode with us on Instagram at @blackimagination. If you want to stay updated on all our latest news and exclusive content, click on this newsletter link. If you love what we do and would like to support the show, click this support link.
Key Links
Kimberlé W. Crenshaw - American activist, intersectionality
Single axis analysis
Niles Luther - Cellist & Composer
Robert Williams - man arrested through skewed AI detection
Coded Bias - a film on Netflix
Massachusetts Institute of Technology (MIT) - a private land-grant research university in Cambridge, Massachusetts
What to Read
Unmasking AI - Dr. Joy Buolamwini
Breaking the Code: Thriving as Black Individuals in the Era of Artificial Intelligence - Rayshaun "Chu" Smith
Black in White Space: The Enduring Impact of Color in Everyday...
In this interview, Dr. Joy Buolamwini argues that algorithmic bias in AI systems poses an existential risk to marginalized people. She challenges the assumptions of tech leaders who advocate for AI "alignment" and explains why tech companies are hypocritical when it comes to addressing bias. Dr. Joy Buolamwini is the founder of the Algorithmic Justice League and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines."
Correction: Aza says that Sam Altman, the CEO of OpenAI, predicts superintelligence in four years. Altman predicts superintelligence in ten years.
RECOMMENDED MEDIA
Unmasking AI by Joy Buolamwini - "The conscience of the AI revolution" explains how we've arrived at an era of AI harms and oppression, and what we can do to avoid its pitfalls
Coded Bias - Shalini Kantayya's film explores the fallout of Dr. Joy's discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all
How I'm Fighting Bias in Algorithms - Dr. Joy's 2016 TED Talk about her mission to fight bias in machine learning, a phenomenon she calls the "coded gaze."
RECOMMENDED YUA EPISODES
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
Protecting Our Freedom of Thought with Nita Farahany
The AI Dilemma
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
SHOW NOTES:
0:00 Alan Robertshaw
1:00 Emily Gould - overview of AI historical development
2:30 first phase - 1950s, Alan Turing - machines do what they are told
3:10 second phase - machine learning: creating models using data and developing methods to make decisions/predictions based on that data
3:50 third phase - deep learning, usually using neural networks to mimic the human brain
4:50 GANs - part of the third phase, involving generator and discriminator algorithms
5:55 Obvious' Portrait of Edmond de Belamy
6:40 Robbie Barrat's code used by Obvious
8:40 unpredictability in the deep learning phase
9:25 different tests applied to determine if a machine is intelligent
9:55 Turing test - machine is intelligent if you can't tell the difference between responses by a human and a machine
10:10 Lovelace test - machine is intelligent if you can't explain the machine's answer
11:20 'AlphaGo' algorithm
13:30 uses of AI
14:20 huge training data sets
15:50 major risks with AI include copyright
17:10 privacy and data protection
17:20 transparency - deepfakes
17:40 bias amplification
18:15 MIT researcher Joy Buolamwini's work with facial analysis software
19:45 UK's pro-innovation approach to AI
21:45 text and data mining (TDM) exception only for non-commercial use - proposal to expand to commercial use
24:25 Nov 2022: government decided not to expand TDM exception to commercial use
24:55 UK Pro-innovation Regulation of Technologies Review
26:45 A pro-innovation approach to AI regulation policy paper - no legislation in the short term, no move to central regulatory body for AI
29:30 AI described in UK white paper as including autonomy and adaptivity
32:25 Global Summit on AI Safety
32:45 EU AI Act with risk-based approach - June 2023: signed off by Parliament; final conclusions expected late 2023; operational circa 2026
36:35 US - AI suits pending
37:00 Robbie Barrat
38:00 opt-in versus opt-out policy
39:20 Senate testimony regarding UK's AI advances
40:15 US Task Force on AI Policy proposed; Privacy Consumer Protection Framework
40:45 Getty v. Stability AI suits in US and UK
41:25 2024 elections and AI
44:00 Alan Robertshaw's case with Getty
47:05 Gould: AI voice scam
48:00 Robertshaw: AI uses
50:20 AI medical screening
53:00 consciousness
56:00 artist Sofia Crespo's work with natural history
56:30 Lines and Bones by artist Iskra Velitchkova
56:50 Dawn Chorus by Alexandra Daisy Ginsberg
57:30 projection for how artists in the UK will address AI issues
Please share your comments and/or questions at stephanie@warfareofartandlaw.com
To hear more episodes, please visit Warfare of Art and Law podcast's website.
To view rewards for supporting the podcast, please visit Warfare's Patreon page.
To leave questions or comments about this or other episodes of the podcast and/or for information about joining the 2nd Saturday discussion on art, culture and justice, please message me at stephanie@warfareofartandlaw.com. Thanks so much for listening!
© Stephanie Drawdy [2023]
Charles Coleman is in for Ali Velshi and is joined by NBC News' Ryan Reilly, Senior Executive Editor of Bloomberg Opinion Tim O'Brien, President and CEO of Citizens for Responsibility and Ethics Noah Bookbinder, NBC News' Monica Alba, Opinion Writer with The Washington Post Jennifer Rubin, Columnist with The New York Times Michelle Goldberg, NBC News' Steve Patterson, Co-Founder of Black Lives Matter Charlottesville Don Gathers, Author and Poet Caroline Randall Williams, Professor at Georgetown School of Law Paul Butler, Criminal Defense Attorney Danny Cevallos, Fmr. Florida 9th Judicial Circuit State Attorney Monique Worrell, NBC News Senior Reporter Ben Collins, and Founder of the Algorithmic Justice League Dr. Joy Buolamwini.
This podcast is a commentary and does not contain any copyrighted material from the reference source. We strongly recommend accessing or buying the reference source as well. ■Reference Source https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms ■Post on this topic (You can get FREE learning materials!) https://englist.me/107-academic-words-reference-from-joy-buolamwini-how-im-fighting-bias-in-algorithms-ted-talk/ ■YouTube Video https://youtu.be/4a1C4ASLmXY (All Words) https://youtu.be/5KznQjhRnVA (Advanced Words) https://youtu.be/pXJ4nx-rAxc (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)
Bloomberg's Ed Ludlow breaks down the latest after Lordstown files for bankruptcy and the EV maker's deal with Foxconn unravels. Plus, his conversation with MIT researcher Joy Buolamwini on her meeting with President Biden on AI, and an exclusive interview with Y Combinator CEO Garry Tan.See omnystudio.com/listener for privacy information.
In this episode Johanna speaks with author and journalist Tracey Spicer about her new book, Man-made: how the bias of the past is being built into the future. The book explores the history of discrimination in technology and the importance of diversity and inclusion in today's tech ecosystem. Spicer makes a case for a new social contract, one that would see people holding the power over machines. Relevant Links: Tracey Spicer website: https://traceyspicer.com.au/ Tracey Spicer new book, Man-made: https://www.simonandschuster.com.au/books/Man-Made/Tracey-Spicer/9781761106378 Dr. Joy Buolamwini: https://www.poetofcode.com/ Dr. Joy Buolamwini TED Talk: https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms Algorithmic Justice League: https://www.ajl.org/ Coded Bias, Netflix documentary: https://www.netflix.com/au/title/81328723 Professor Yolande Strengers: https://research.monash.edu/en/persons/yolande-strengers Follow: Tracey Spicer on Twitter: @TraceySpicer Tracey Spicer on LinkedIn: Tracey Spicer AM GAICD Dr. Joy Buolamwini on Twitter: @jovialjoy Yolande Strengers on Twitter: @YolandeStreng
On Today's Show: Modi's State Visit: Biden Embraces Indian Leader Despite Rights Crackdown Biden Calls Xi Jinping a “Dictator”: China-U.S. Relations and a Growing Multipolar World How AI Is Enabling Racism & Sexism: Algorithmic Justice League's Joy Buolamwini on Meeting with Biden The post Democracy Now 6am – June 22, 2023 appeared first on KPFA.
Artificial intelligence (AI) and machine learning are huge buzzwords these days, and for good reason. They are increasingly being used within research and by companies to make important decisions about our lives. Dr Mavis Machirori, a senior researcher in AI and healthcare, talks about the dangers of poorly designed AI systems and how the lack of diversity in the tech space means algorithms and machine-learning systems discriminate against Black (and other minority) communities. Host: Tulela Pea, from Black Women Science Network Talk to Dr Mavis Machirori on: Twitter: @thinkspeakmavis Website: https://www.adalovelaceinstitute.org/person/mavis-machirori/ Further resources: Watch 'Coded Bias' the documentary on Netflix Support the Algorithmic Justice League (AJL) by Dr Joy Buolamwini here https://www.ajl.org/ What is 'Facial Recognition Technology' (by AJL) - https://www.ajl.org/facial-recognition-technology What is Intersectionality? - https://www.bwisnetwork.co.uk/post/interpreting-intersectionality More information: Check us out on this list for Top Women in Science Podcasts on Feedspot - https://blog.feedspot.com/women_in_science_podcasts/
Episode Summary:
In this captivating episode, we journey with Tega Brain from her roots as an environmental engineer to her evolution into an art-tech visionary. Exploring the digital art landscape reshaped by AI and machine learning, she draws parallels with influential figures like Ian Cheng, Refik Anadol, and Elon Musk. Her works mirror the transformative power these technologies wield in creating unique artistic experiences, akin to what Trevor Paglen and Agnes Denes are known for. Amidst our tech-driven world, Tega challenges the status quo, intertwining creativity with environmental sustainability and navigating ethical concerns raised by scholars like Kate Crawford, Timnit Gebru, and Joy Buolamwini. This episode is a must for anyone keen on the intersection of technology, art, and environmental sustainability.
In what ways are artificial intelligence and machine learning transforming the digital art landscape, and what opportunities do these technologies present for artists?
How do you address ethical concerns when incorporating AI and other emerging technologies into your art practice?
The Speaker:
Tega Brain is an Australian-born artist, environmental engineer, and educator whose work intersects art, technology, and ecology. Her projects often address environmental issues and involve creating experimental systems, installations, and software. She has exhibited her work at various venues, including the Victoria and Albert Museum, the Whitney Museum of American Art, and the Haus der Kulturen der Welt. In addition to her art practice, Tega Brain is an Assistant Professor of Integrated Digital Media at New York University's Tandon School of Engineering. Her research and teaching focus on the creative and critical applications of technology, with an emphasis on sustainability and environmental concerns.
Follow Tega Brain's journey.
Hosts: Farah Piriye & Elizabeth Zhivkova, ZEITGEIST19 Foundation
For sponsorship enquiries, comments, ideas and collaborations, email us at info@zeitgeist19.com
Follow us on Instagram
Help us to continue our mission and to develop our podcast: Donate
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Future Perfect 50, published by Catherine Low on October 23, 2022 on The Effective Altruism Forum. I just saw that Future Perfect has a new feature. I found it really inspiring, so I thought I'd share it here. It is the Future Perfect 50: The scientists, thinkers, scholars, writers, and activists building a more perfect future. There are some wonderful profiles of people that will be familiar to many Forum readers, like Leah Garcés' work with farmers, Lucia Coulter and Jack Rafferty's work on Lead Elimination, and Kevin Esvelt's Gene Drive research. But there are a host of inspiring people and stories I've never heard before, like Setsuko Thurlow's anti-nuclear weapon work, Joy Buolamwini's algorithmic justice campaign, and Olga Kikou's fight for a ban on all caged farming in the EU. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Original broadcast date: October 30, 2020. False information on the internet makes it harder and harder to know what's true, and the consequences have been devastating. This hour, TED speakers explore ideas around technology and deception. Guests include law professor Danielle Citron, journalist Andrew Marantz, and computer scientist Joy Buolamwini.
AI needs to benefit everyone, not just those who build it. But fulfilling this promise requires careful thought before new technologies are built and released into the world. In this episode, Hannah delves into some of the most pressing and difficult ethical and social questions surrounding AI today. She explores complex issues like racial and gender bias and the misuse of AI technologies, and hears why diversity and representation are vital for building technology that works for all. For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.
Interviewees: DeepMind's Sasha Brown, William Isaac, Shakir Mohamed, Kevin Mckee & Obum Ekeke
Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind
Thank you to everyone who made this season possible!
Further reading:
What a machine learning tool that turns Obama white can (and can't) tell us about AI bias, The Verge: https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias
Tuskegee Syphilis Study, Wikipedia: https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study
Ethics & Society, DeepMind: https://deepmind.com/about/ethics-and-society
Row over AI that 'identifies gay faces', BBC: https://www.bbc.co.uk/news/technology-41188560
The Trevor Project: https://www.thetrevorproject.org/
AI takes root, helping farmers identify diseased plants, Google: https://www.blog.google/technology/ai/ai-takes-root-helping-farmers-identity-diseased-plants/
How Can You Use Technology to Support a Culture of Inclusion and Diversity?, myHRfuture: https://www.myhrfuture.com/blog/2019/7/16/how-can-you-use-technology-to-support-a-culture-of-inclusion-and-diversity
Scholarships at DeepMind: https://www.deepmind.com/scholarships
AI, Ain't I a Woman? Joy Buolamwini, YouTube: https://www.youtube.com/watch?v=QxuyfWoVV98
How to be Human in the Age of the Machine, Hannah Fry: https://royalsociety.org/grants-schemes-awards/book-prizes/science-book-prize/2018/hello-world/