Podcasts about Gebru

  • 43 podcasts
  • 63 episodes
  • 41m average episode duration
  • Infrequent episodes
  • Latest episode: Apr 18, 2025



Best podcasts about Gebru

Latest podcast episodes about Gebru

Data & Society
Resisting Predatory Data | Book Talk

Data & Society

Apr 18, 2025 · 62:41


At the turn of the 20th century, the anti-immigration and eugenics movements used data about marginalized people to fuel racial divisions and political violence under the guise of streamlining society toward the future. Today, as the tech industry champions itself as a global leader of progress and innovation, we are falling into the same trap. On April 10th, Anita Say Chan, author of Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future (UCP 2025 and open access), joined Émile P. Torres and Timnit Gebru for a discussion of the 21st-century eugenics revival in big tech and how to resist it, in a conversation moderated by Trustworthy Infrastructures Program Director Maia Woluchem. Predatory Data is the first book to draw this direct line between the datafication and prediction techniques of past eugenicists and today's often violent and extractive "big data" regimes. Torres and Gebru have also extensively studied the second wave of eugenics, identifying a suite of tech-utopian ideologies they call the TESCREAL bundle. Purchase your own copy of Anita Say Chan's book Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future: https://bookshop.org/a/14284/9780520402843. Learn more about the event at datasociety.net (https://datasociety.net/events/resisting-predatory-data/).

re:verb
E97: re:joinder - OI: Oprahficial Intelligence

re:verb

Oct 17, 2024 · 87:57


On today's show, we once again fire up our rhetorical stovetop to roast some dubious public argumentation: Oprah Winfrey's recent ABC special, "AI and the Future of Us." In this re:joinder episode, Alex and Calvin listen through and discuss audio clips from the show featuring a wide array of guests - from corporate leaders like Sam Altman and Bill Gates to technologists like Aza Raskin and Tristan Harris, and even FBI Director Christopher Wray - and dismantle some of the mystifying rhetorical hype tropes that they (and Oprah) circulate about the proliferation of large language models (LLMs) and other "AI" technologies into our lives. Along the way, we use rhetorical tools from previous episodes, such as the stasis framework, to show which components of the debate around AI are glossed over, and which are given center-stage. We also bring our own sociopolitical and media analysis to the table to help contextualize (and correct) the presenters' claims about the speed of large language model development, the nature of its operation, and the threats - both real and imagined - that this new technological apparatus might present to the world. We conclude with a reflection on the words of novelist Marilynne Robinson, the show's final guest, who prompts us to think about the many ways in which "difficulty is the point" when it comes to human work and developing autonomy. Meanwhile, the slick and tempting narratives promoting "ease" and "efficiency" with AI technology might actually belie a much darker vision of "the future of us." Join us as we critique and rejoin some of the most common tropes of AI hype, all compacted into one primetime special. In the spirit of automating consumptive labor, we watched it so you don't have to! Works & Concepts cited in this episode: Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big?

The Nonlinear Library
EA - The "TESCREAL" Bungle by ozymandias

The Nonlinear Library

Jun 4, 2024 · 22:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "TESCREAL" Bungle, published by ozymandias on June 4, 2024 on The Effective Altruism Forum. A specter is haunting Silicon Valley - the specter of TESCREALism. "TESCREALism" is a term coined by philosopher Émile Torres and AI ethicist Timnit Gebru to refer to a loosely connected group of beliefs popular in Silicon Valley. The acronym unpacks to:

  • Transhumanism - the belief that we should develop and use "human enhancement" technologies that would give people everything from indefinitely long lives and new senses like echolocation to math skills that rival John von Neumann's.
  • Extropianism - the belief that we should settle outer space and create or become innumerable kinds of "posthuman" minds very different from present humanity.
  • Singularitarianism - the belief that humans are going to create a superhuman intelligence in the medium-term future.
  • Cosmism - a near-synonym to extropianism.
  • Rationalism - a community founded by AI researcher Eliezer Yudkowsky, which focuses on figuring out how to improve people's ability to make good decisions and come to true beliefs.
  • Effective altruism - a community focused on using reason and evidence to improve the world as much as possible.
  • Longtermism - the belief that one of the most important considerations in ethics is the effects of our actions on the long-term future.[1]

TESCREALism is a personal issue for Torres,[2] who used to be a longtermist philosopher before becoming convinced that the ideology was deeply harmful. But the concept is beginning to go mainstream, with endorsements in publications like Scientific American and the Financial Times. The concept of TESCREALism is at its best when it points out the philosophical underpinnings of many conversations occurring in Silicon Valley - principally about artificial intelligence but also about everything from gene-selection technologies to biosecurity. Eliezer Yudkowsky and Marc Andreessen - two influential thinkers Torres and Gebru have identified as TESCREAList - don't agree on much. Yudkowsky believes that with our current understanding of AI we're unable to program an artificial general intelligence that won't wipe out humanity; therefore, he argues, we should pause AI research indefinitely. Andreessen believes that artificial intelligence will be the most beneficial invention in human history: people who push for delay have the blood of the starving people and sick children whom AI could have helped on their hands. But their very disagreement depends on a number of common assumptions: that human minds aren't special or unique, that the future is going to get very strange very quickly, that artificial intelligence is one of the most important technologies determining the trajectory of the future, and that intelligences descended from humanity can and should spread across the stars.[3] As an analogy, Republicans and Democrats don't seem to agree about much. But if you were explaining American politics to a medieval peasant, the peasant would notice a number of commonalities: that citizens should choose their political leaders through voting, that people have a right to criticize those in charge, that the same laws ought to apply to everyone. To explain what was going on, you'd call this "liberal democracy." Similarly, many people in Silicon Valley share a worldview that is unspoken and, all too often, invisible to them. When you mostly talk to people who share your perspective, it's easy to not notice the controversial assumptions behind it. We learn about liberal democracy in school, but the philosophical underpinnings beneath some common debates in Silicon Valley can be unclear. It's easy to stumble across Andreessen's or Yudkowsky's writing without knowing anything about transhumanism. The TESCREALism concept can clarify what's going on for confused outsiders. How...

The Other Side Of The Firewall
TBT - Dr. Gebru Sounds Off - The Other Side of the Firewall S1 Ep109

The Other Side Of The Firewall

Dec 28, 2023 · 16:31


What's up, everyone! In this episode, Ryan and Shannon discuss Dr. Gebru, Google's former AI ethics subject matter expert, calling for a reconceptualizing of AI and potential government regulation. Please LISTEN

Luiza's Podcast
#9: Understanding LLMs and Breaking Down the AI Hype, with Dr. Alex Hanna & Prof. Emily M. Bender

Luiza's Podcast

Sep 21, 2023 · 57:15


In this exclusive live talk, Luiza Jarovsky discusses with Dr. Alex Hanna and Prof. Emily M. Bender: what the current AI hype is about, and what are the main counter-arguments; why the "Stochastic Parrots"

Midday
A.I. & Race: Will future technology include racial equity?

Midday

Sep 18, 2023 · 48:32


Today, a conversation about equity and AI. Tom's guest is Dr. Timnit Gebru, a computer scientist and tech expert who is working to reduce the negative effects of artificial intelligence. In 2021, Dr. Gebru founded the Distributed AI Research Institute (DAIR), a non-profit where she currently serves as executive director. Before that, she was employed at Google, where she co-led the Ethical Artificial Intelligence research team. She was fired in 2020 — although the company insists she resigned — after a dispute with Google about its suppression of some of her research, and her criticism of the company's diversity, equity and inclusion (DEI) policies. She had previously done ground-breaking work at Apple and Microsoft. Dr. Gebru is also the co-founder of Black in AI, a nonprofit whose stated mission is "to increase the presence, inclusion, visibility and health of Black people in the field of AI." Dr. Timnit Gebru joined us on Zoom from San Francisco. Email us at midday@wypr.org, tweet us @MiddayWYPR, or call us at 410-662-8780.

Technically Optimistic
Timnit Gebru is asking different questions about AI

Technically Optimistic

Sep 15, 2023 · 42:08


Timnit Gebru is a co-author of one of the most influential research papers on AI from this decade, which coined the term “stochastic parrots” to describe large language models. Following her very public departure from Google in 2020, Gebru founded the Distributed AI Research (DAIR) Institute, an organization that describes itself as doing independent, community-rooted work, free from the pervasive influence of Big Tech. She's now DAIR's executive director. And recently, she was selected as one of TIME's 100 Most Influential People in AI — like several other guests you hear from in season one of this show. Gebru sat down with host Raffi Krikorian for a wide-ranging and deep conversation about AI, touching on things like the obfuscation around its capabilities, what Big Tech hopes we don't pay attention to, and the importance of imagining alternative possible futures. To learn more about Technically Optimistic: emersoncollective.com/technicallyoptimistic For more on Emerson Collective: emersoncollective.com Learn more about our host, Raffi Krikorian: emersoncollective.com/persons/raffi-krikorian Technically Optimistic is produced by Emerson Collective with music by Mattie Safer.  Email us with questions and feedback at technicallyoptimistic@emersoncollective.com. Subscribe to Emerson Collective's newsletter: emersoncollective.com To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

The Nonlinear Library
LW - The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate by Adam David Long

The Nonlinear Library

Aug 1, 2023 · 6:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate, published by Adam David Long on August 1, 2023 on LessWrong. Summary of Argument: The public debate among AI experts is confusing because there are, to a first approximation, three sides, not two sides to the debate. I refer to this as a three-sided framework, and I argue that using this three-sided framework will help clarify the debate (more precisely, debates) for the general public and for policy-makers. Broadly speaking, under my proposed three-sided framework, the positions fall into three broad clusters:

  • AI "pragmatists" or realists are most worried about AI and power. Examples of experts who are (roughly) in this cluster would be Melanie Mitchell, Timnit Gebru, Kate Crawford, Gary Marcus, Klon Kitchen, and Michael Lind. For experts in this group, the biggest concern is how the use of AI by powerful humans will harm the rest of us. In the case of Gebru and Crawford, the "powerful humans" that they are most concerned about are large tech companies. In the case of Kitchen and Lind, the "powerful humans" that they are most concerned about are foreign enemies of the U.S., notably China.
  • AI "doomers" or extreme pessimists are most worried about AI causing the end of the world. @Eliezer Yudkowsky is, of course, the most well-known to readers of LessWrong, but other well-known examples include Nick Bostrom, Max Tegmark, and Stuart Russell. I believe these arguments are already well-known to readers of LessWrong, so I won't repeat them here.
  • AI "boosters" or extreme optimists are most worried that we are going to miss out on AI saving the world. Examples of experts in this cluster would be Marc Andreessen, Yann LeCun, Reid Hoffman, Palmer Luckey, and Emad Mostaque. They believe that AI can, to use Andreessen's recent phrase, "save the world," and their biggest worry is that moral panic and overregulation will create huge obstacles to innovation.

These three positions are such that, on almost every important issue, one of the positions is opposed to a coalition of the other two:

  • AI Doomers + AI Realists agree that AI poses serious risks and that the AI Boosters are harming society by downplaying these risks.
  • AI Realists + AI Boosters agree that existential risk should not be a big worry right now, and that AI Doomers are harming society by focusing the discussion on existential risk.
  • AI Boosters + AI Doomers agree that AI is progressing extremely quickly, that something like AGI is a real possibility in the next few years, and that AI Realists are harming society by refusing to acknowledge this possibility.

Why This Matters. The "AI Debate" is now very much in the public consciousness (in large part, IMHO, due to the release of ChatGPT), but also very confusing to the general public in a way that other controversial issues, e.g. abortion or gun control or immigration, are not. I argue that the difference between the AI Debate and those other issues is that those issues are essentially two-sided debates. That's not completely true, there are nuances, but, in the public's mind, at their essence they come down to two sides. To a naive observer, the present AI debate is confusing, I argue, because various experts seem to be talking past each other, and the "expert positions" do not coalesce into the familiar structure of a two-sided debate with most experts on one side or the other. When there are three sides to a debate, then one fairly frequently sees what look like "temporary alliances" where A and C are arguing against B. They are not temporary alliances. They are based on principles and deeply held beliefs. It's just that, depending on how you frame the question, you wind up with "strange bedfellows" as two groups find common ground on on...

re:verb
E82: The Rhetoric of AI Hype (w/ Dr. Emily M. Bender)

re:verb

Jul 28, 2023 · 53:02


Are you a writing instructor or student who's prepared to turn over all present and future communication practices to the magic of ChatGPT? Not so fast! On today's show, we are joined by Dr. Emily M. Bender, Professor in the Department of Linguistics at the University of Washington and a pre-eminent academic critic of so-called "generative AI" technologies. Dr. Bender's expertise involves not only how these technologies work computationally, but also how language is used in popular media to hype, normalize, and even obfuscate AI and its potential to affect our lives. Dr. Bender's most well-known scholarly work related to this topic is a co-authored conference paper from 2021 entitled, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In our conversation, Emily explains why she and her co-authors chose the "stochastic parrot" metaphor – how this helps us to understand large language models and other related technologies more accurately than many competing metaphors. We go on to discuss several actual high-stakes, significant issues related to these technologies, before Dr. Bender provides a helpful index of some of the most troublesome ways they are talked about in the media: synthetic text "gotcha"s, infancy metaphors, linear models of progress, inevitability framings, and many other troublesome tropes. We conclude with a close reading of a recent piece in the Chronicle of Higher Education about using synthetic text generators in writing classrooms: "Why I'm Excited About Chat GPT" by Jenny Young. Young's article exemplifies many of the tropes Emily discussed earlier, as well as capturing lots of strange prevailing ideas about writing pedagogy, genre, and rhetoric in general. We hope that you enjoy this podcast tour through the world of AI hype media, and we ask that you please remain non-synthetic 'til next time – no shade to parrots!

Dave Troy Presents
Understanding TESCREAL with Dr. Timnit Gebru and Émile Torres

Dave Troy Presents

Jun 14, 2023 · 95:05


Everyone's talking about AI, how it will change the world, and even suggesting it might end humanity as we know it. Dave is joined by Dr. Timnit Gebru and Émile Torres, two prominent critics of AI doomerism, to cut through the noise, look at where these ideas really came from, and offer suggestions on how we might look at these problems differently. They also offer a picture of the darker side of these ideas and how they connect to eugenics and other ideologies historically. Together Émile and Timnit coined an acronym called TESCREAL, which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism — and yeah, that's a lot of -isms. But it ties into other topics that we have covered in this series, including Russian Cosmism and Longtermism. Dr. Gebru came to prominence in 2020 after she was fired from Google for speaking up about the company's lack of ethical guardrails in its AI development work. Émile Torres studies existential risk and has been a critic of the "longtermist" movement for several years. Dave last spoke with them last year in Season 1, Episode 23. Here are some relevant articles from Timnit and Émile:
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender, Gebru, McMillan-Major, Shmitchell) https://dl.acm.org/doi/10.1145/3442188.3445922
The Acronym Behind Our Wildest AI Dreams and Nightmares https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/
Longtermism and Eugenics: A Primer, by Émile Torres https://www.truthdig.com/articles/longtermism-and-eugenics-a-primer/
The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley's Rightward Turn, by Dave Troy https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/
Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I. (New York Times; Cade Metz, Daisuke Wakabayashi) https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html
Keywords: existential risk, artificial intelligence, TESCREAL, Yudkowsky, Sam Altman, Elon Musk, Peter Diamandis, Ray Kurzweil, Timnit Gebru, Émile Torres, Gary Marcus, OpenAI, Google, doomerism.

Digital Alchemy
Digital Alchemy - Alex Hanna on Combating AI Injustice

Digital Alchemy

May 27, 2023 · 13:20


This episode features Dr. Alex Hanna in conversation with Professor Moya Bailey. Dr. Hanna discusses how the work of the Distributed AI Research Institute (DAIR) has activist applications in seeking to mitigate sociotechnical harms and algorithmic injustice. Dr. Hanna further elaborates on how young professionals interested in AI and machine learning can consciously navigate the industry and work to reconstruct harmful sociotechnical frameworks. An episode transcript is available.
Featuring: Moya Bailey, Alex Hanna
Sponsor: Northwestern University School of Communication
More from our guests:
Moya Bailey | Associate Professor, Department of Communication Studies, Northwestern University | Digital Alchemist, Octavia E. Butler Legacy Network | Board President, Allied Media Projects | Twitter: @moyazb | IG: @transformisogynoir
Alex Hanna | Director of Research, Distributed AI Research Institute | Twitter: @alexhanna
Works referenced in episode: Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Digital Alchemy
Digital Alchemy - Timnit Gebru, Interdisciplinary, and Distributed AI Research

Digital Alchemy

May 25, 2023 · 15:52


In this episode, Moya Bailey speaks with Timnit Gebru about how her personal life, being born and raised in Ethiopia, and her professional life, most recently working at Google, have prepared her for her most recent efforts as founder of the Distributed AI Research (DAIR) Institute. She describes DAIR's goals to build a distributed, interdisciplinary and diverse coalition to collectively identify and combat algorithmic bias against marginalized communities. An episode transcript is available.
Featuring: Moya Bailey, Timnit Gebru
Sponsor: Northwestern University School of Communication
More from our guests:
Moya Bailey | Associate Professor, Department of Communication Studies, Northwestern University | Digital Alchemist, Octavia E. Butler Legacy Network | Board President, Allied Media Projects | Twitter: @moyazb | IG: @transformisogynoir
Timnit Gebru | Founder & Executive Director, The Distributed AI Research Institute (DAIR) | Cofounder, Black in AI | Twitter: @TimnitGebru
Works referenced in episode:
Metz, C., & Wakabayashi, D. (2020, December 3). Google researcher says she was fired over paper highlighting bias in A.I. New York Times.
Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020, February). Saving face: Investigating the ethical concerns of facial recognition auditing. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 145-151).
Copy and audio editor: Dominic Bonelli
Executive producer: DeVante Brown

LondonLive Church Podcast
Experiencing God's Power Within | Dr Merid Gebru

LondonLive Church Podcast

Apr 1, 2023 · 38:40


Mark 10:45: "For even the Son of Man did not come to be served, but to serve, and to give his life a ransom for many." What a wonderful privilege it is to have a servant God. Knowing you are never alone, knowing God is for you and died for you, stabilises you in the journey of life. If you believe in God Almighty, should there ever be a situation where you panic? Hear more of what Dr Merid Gebru has to say on this matter and be inspired. Dr Merid Gebru is married with 2 wonderful daughters and speaks at Londonlive on a regular basis.

Midday
AI & Race: How do we build racial equity into our smart machines?

Midday

Mar 28, 2023 · 48:39


Today, a conversation about equity and AI. Tom's guest is Dr. Timnit Gebru, a computer scientist and tech expert who is working to reduce the negative effects of artificial intelligence. In 2021, Dr. Gebru founded the Distributed AI Research Institute (DAIR), a non-profit where she currently serves as executive director. Before that, she was employed at Google, where she co-led the Ethical Artificial Intelligence research team. She was fired in 2020 — although the company insists she resigned — after a dispute with Google about its suppression of some of her research, and her criticism of the company's diversity, equity and inclusion (DEI) policies. She had previously done ground-breaking work at Apple and Microsoft. Dr. Gebru is also the co-founder of Black in AI, a nonprofit whose stated mission is "to increase the presence, inclusion, visibility and health of Black people in the field of AI." Her DAIR research includes studying how artificial intelligence often reinforces and amplifies existing prejudices and marginalization. She has looked at how facial recognition programs are much less accurate in analyzing the faces of people of color. She has also written about the need for regulation in the tech industry, and the environmental impact of AI. Dr. Timnit Gebru joins us on Zoom from San Francisco. See omnystudio.com/listener for privacy information.

Say what say it again
Say What Say It Again Featuring Andy Gebru and Danny Thompson Season 3 Episode 13

Say what say it again

Jan 4, 2023 · 78:32


Join us as we talk about Damar Hamlin, NFL Week 17, the NBA MVP race, and much more.

Talk to Al Jazeera
Timnit Gebru: Is AI racist and antidemocratic? | Talk to Al Jazeera

Talk to Al Jazeera

Aug 5, 2022 · 25:50


Artificial intelligence has become an essential part of our lives, though some say there is another, more sinister side to it. Computer scientist Timnit Gebru has been one of the most critical voices against the unethical use of AI. Considered one of the 100 most influential people of 2022 by Time magazine, Gebru was asked by Google to co-lead its unit focused on ethical artificial intelligence. But the tech giant fired her after she criticised the company's lucrative AI work. Who is behind AI technology? Whose interests does it serve? And how democratic is its use? Timnit Gebru talks to Al Jazeera.
Subscribe to our channel: http://bit.ly/AJSubscribe
Follow us on Twitter: https://twitter.com/AJEnglish
Find us on Facebook: https://www.facebook.com/aljazeera
Check our website: http://www.aljazeera.com/
Check out our Instagram page: https://www.instagram.com/aljazeeraenglish/

Community IT Innovators Nonprofit Technology Topics
Community IT Voices: Saba Gebru, Director of Services Operations

Community IT Innovators Nonprofit Technology Topics

Jul 8, 2022 · 27:26 · Transcription available


Carolyn talks with Saba Gebru about her role as Director of Service Operations. Like many Community IT employees, Saba learned about Community IT as a client, while she was working at the Ethiopian Community Development Council and assisting refugee resettlement. As she had an aptitude for working with technology and with people, she was able to turn her passion for nonprofits into a career in nonprofit IT support. Saba loves the problem solving her job demands, working with senior engineers and getting clients back to work as quickly as possible. She puts care into working with clients and sees these client partnerships as long-term relationships; she has worked with some clients for her whole tenure at Community IT, or almost 15 years. She is also motivated by helping people succeed, so she is very involved in the mentoring programs at Community IT, both formal and informal. "I find it rewarding to work with a variety of nonprofit organizations to help them meet their needs in terms of technology, so they can go out in the communities they serve to make this world a better place." – Saba Gebru, Director of Service Operations. Community IT is the right place for you if you find fulfillment in helping others succeed and love mastering new technologies. Our employees stay and grow with us, and over half of our staff have been with us for over a decade. Community IT is an employee-owned company with a positive, sustainable workplace that promotes professional development and a healthy work/life balance. We have been 100% employee-owned since 2012. Check out careers with us here.

AI with AI
the sentience of the lamdas

AI with AI

Jul 2, 2022 · 41:02


Andy and Dave discuss the latest in AI news and research, starting with the Department of Defense releasing its Responsible AI Strategy. In the UK, the Ministry of Defence publishes its Defence AI Strategy. The Federal Trade Commission warns policymakers about relying on AI to combat online problems and instead urges them to develop legal frameworks to ensure AI tools do not cause additional harm. YouTuber Yannic Kilcher trains an AI on 4chan's "infamously toxic" Politically Incorrect board, creating a predictably toxic bot, GPT-4chan; he then uses the bot to generate 15,000 posts on the board, quickly receiving condemnation from the academic community. Google suspends and then fires an engineer who claimed that one of its chatbots, LaMDA, had achieved sentience; former Google employees Gebru and Mitchell write an opinion piece saying they warned this would happen. For the Fun Site of the Week, a mini version of DALL-E comes to Hugging Face. And finally, IBM researcher Kush Varshney joins Andy and Dave to discuss his book, Trustworthy Machine Learning, which provides AI researchers with practical tools and concepts for developing machine learning systems. Visit CNA.org to explore the links mentioned in this episode.

Business News Leaders
Whose worldview should serve as the template for AI morality?

Business News Leaders

Jun 15, 2022 · 25:28


Artificial intelligence (AI) was once the stuff of science fiction. But it's becoming widespread. It is used in mobile phone technology and motor vehicles. It powers tools for agriculture and healthcare. But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google's Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper Gebru and another researcher, Joy Buolamwini, had shown how facial recognition software was less accurate in identifying women and people of colour than white men. Biases in training data can have far-reaching and unintended effects. Over the past several years, concerns around AI ethics have gone mainstream. The concerns, and the outcomes everyone wants to avoid, are largely agreed upon and well documented. No one wants to push out discriminatory or biased AI. No one wants to be the object of a lawsuit or regulatory investigation for violations of privacy. But once we've all agreed that biased, black-box, privacy-violating AI is bad, where do we go from here? The question almost every senior leader asks is: how do we take action to mitigate those ethical risks? To talk about this, Michael Avery is joined by Emma Ruttkamp-Bloem, Professor and Head of the Department of Philosophy at the University of Pretoria and AI Ethics Lead at the South African Centre for AI Research (CAIR); Dr Tanya de Villiers-Botha, Head of the Unit for the Ethics of Technology, Centre for Applied Ethics; and Johan Steyn, chair of the special interest group on artificial intelligence and robotics at the Institute of Information Technology Professionals of SA.

WashingTECH Tech Policy Podcast with Joe Miller
Joe Miller: Let's Talk About 'Tech Transparency'

WashingTECH Tech Policy Podcast with Joe Miller

Jun 13, 2022 · 9:50


I decided to do a solo episode this week because I think it's really sort of super important to highlight bias in the public policy profession – because it is a profession. Over the last 17 years that I have been working on tech and media public policymaking, majority-white organizations have always seemed to think it's totally fine to attack organizations founded and led by people of color, orgs like this one, to pursue their status ambitions. So, let's be transparent, shall we? Just to give you some background – when I started working in this space – there was one organization in Washington – the Multicultural Media & Telecom Council (MMTC) – that focused specifically on telecommunications and media policymaking as they relate to underserved and underrepresented communities. This is where I cut my teeth as a young lawyer, in the ONLY fellowship in town for somebody like me who went to law school at night. Their members are, and continue to be, some of the finest minds in the business – folks like Ari Fitzgerald, a partner at Hogan Lovells who was on the show last week. These are lawyers of color primarily, who are at the top of this craft. And let me interject something here – this isn't Oakland. We are not Color of Change. This is DC, it's a political town, it's buttoned-up, and comparing orgs of color in this town to orgs like Color of Change is really just not a relevant comparison. A better comparison would be to an organization like the NAACP or the National Urban League – both of which have local chapters but are based in DC. Donations come from 2 primary categories here in DC – corporations and foundations – that's it. If you've been around a long time – like the NAACP, NUL, or AARP – you have members. You can raise money from them, in addition to seeking other forms of support. That is the way this market works. Back around 2011 and 2012, it was organizations like Public Knowledge and Free Press calling out MMTC for accepting donations from Comcast. Again, one of the worst companies in the world for customer service. But they are internet service providers. It didn't matter if larger, huge nonprofits worked with these same companies – all that mattered was that Free Press and Public Knowledge needed someone to pick on when they were advocating for net neutrality. MMTC opposed net neutrality, which didn't make sense to me, which is why I started this organization when I was laid off from the Joint Center, where I co-led an institute along with Nicol Turner-Lee, which was also focused on telecommunications and media policy at the intersection of communities of color. So-called progressives with super-deep pockets didn't like orgs like MMTC because they were the only game in town – they had too much credibility – and they opposed net neutrality (for reasons, by the way, I continue to be baffled by – but, in any case, they opposed it). So orgs like Public Knowledge and Free Press called them out – and I called out Public Knowledge and Free Press for having ZERO people of color working there but somehow having the audacity to try to drag MMTC. And as a result of my advocacy – at least I like to think it was – since no one else was pushing back – Public Knowledge is led by the great Chris Lewis and Free Press, by Jessica González, who serves as co-president along with Craig Aaron. These orgs now actively recruit diverse talent – they have changed drastically – and I'm proud of them.
Jessica, Craig and Chris are my colleagues – just like you have colleagues in any profession – they fixed their model and stopped attacking MMTC. We'll see what happens when and if the net neutrality debate starts up again. But, for now, we're good. Joe Torres, who wrote the book on diversity in the news profession, is at Free Press. So let's fast-forward to 2016. In 2016, when this organization – WashingTech – was still an LLC, for profit, Chanelle Hardy, someone I've known since I first moved to DC in 2005 – who had worked at the National Urban League, on the Hill, and the FCC – joined Google. And, again, as a result of my advocacy, since I was vocal about it, as I am now – one of our taglines, then and now, is the Inclusive Voice of Tech Policy, since nobody else cared about that until George Floyd. In 2016, Google started a cohort of folks called Next Gen Policy Leaders – the only PROGRAM IN TOWN AT THE TIME – to engage and involve people of color in tech policy. I continue to participate in the program because it is educational, offers great networking, and, again, continues to fill a need that everyone else just woke up to a couple years ago: the lack of diversity on panels, at networking events, on faculties, you name it, related to tech policy issues. Google was a first mover, while the rest of these tech companies, and nonprofits, were asleep. So, whose fault is that? Whose fault is it that they've built loyalty by investing in us? Now, fast forward to today – here comes another organization – the so-called 'Tech Transparency Project' – which, again, isn't a racially and ethnically diverse organization – attacking Google Next Gen in a mediocre paper suggesting that Google had bought out people of color so they wouldn't speak out about Timnit Gebru's firing. Again, Dr. Gebru is an engineer whom Google fired for speaking up about bias in one of Google's algorithms. First, I OPPOSED her firing, and vocally, on a listserv read by many Next Gens and other people of color in this space. I blasted Google for it. I was livid. I was so vocal, no one else in the Next Gen cohort needed to be – which is always the case in this town – and let me tell you something, we get PENNIES compared to some of these larger organizations. Do you know what Google donated to us last year? $35,000. The rest of our funding came from Foundation support. But let's take a look at the Center for Democracy & Technology (CDT), which published their annual report last week. Let's see what Google donated to CDT. And CDT is a partner of ours. I'm on their advisory council. Many of their fine scholars have been on this show. But let's take a look – how much did Google donate to CDT in 2021? Wait for it! Over $500,000. Two other donors gave that much – the Chan Zuckerberg Initiative and the Knight Foundation. Now, let's use CDT's search feature on its website to see how much work they've done on "Timnit Gebru." How many times did CDT so much as mention Dr. Gebru's name, much less call out Google for firing her? ZERO. How many times has the Tech Transparency Project called out CDT for failing to discuss Timnit Gebru? ZERO. So come on, let's talk about 'Tech Transparency,' everybody. I guarantee I've made more sacrifices in the cause of inclusion and so-called "transparency" in this space than most of Washington. Let's talk about it.

Berkeley Talks
Timnit Gebru on how change happens through collective action

Berkeley Talks

May 31, 2022 · 19:31


In a special episode, Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute and one of the most prominent researchers working in the field of ethics in artificial intelligence, gives the keynote address to the UC Berkeley School of Information graduating class on May 16. In the speech, Gebru touches on collective action, interconnectedness and the loneliness that may accompany standing on "the right side of history." Listen to the episode and read a transcript on Berkeley News. Follow Berkeley Talks and review us on Apple Podcasts. Photo by Noah Berger. Music by Blue Dot Sessions. See acast.com/privacy for privacy and opt-out information.

RSG Geldsake met Moneyweb
The process has begun for private operators to use Transnet's rail infrastructure

RSG Geldsake met Moneyweb

Apr 11, 2022 · 12:09


Anton Potgieter – chief executive, The Logistic Group

Gee Thanks, Just Bought It
Ep 115: Ikea Home Hacks with Dominique Gebru

Gee Thanks, Just Bought It

Mar 5, 2022 · 57:38


Dominique Gebru, aka DommDotCom, joins us to talk about her love of interior design, IKEA home hacks, and the best (and only) power tools you need, even if you live in a small space. Dominique is also committed to educating others about the importance of understanding gentrification, the housing market and the housing crisis as we all navigate our worlds to create A Place Called Home (aptly, the name of her newsletter!). Read on for how you can get a discount! She also brings her favorite organizational tool, the Paperlike iPad Sheet!
Mentioned on this episode:
Paperlike: https://fave.co/375EZQN
Apple Pencil: https://amzn.to/3vHpAk2
The Ryobi Powertools: https://amzn.to/35qzkV7
A Place Called Home Newsletter: https://www.dominiquegebru.com/join/
Gee, Thanks! listeners who sign up by 3/18 can use code GEETHANKS for one free month. This is perfect for people who want to learn more deeply about home – why housing is so expensive in this country, what gentrification really means, and what solutions to the housing crisis might look like.
Follow Dominique on Instagram: https://www.instagram.com/dommdotcom/
Shop our Amazon Storefront: https://www.amazon.com/shop/geethanksjustboughtit
Gee Thanks! appreciates its supporters, especially Alley Peplinski, Becca Sheaffer, Allie Nagy, Angi James and Erin Gibson! Want to support? Join the Patreon: https://www.patreon.com/geethanksandfriends
Join the Facebook group: https://www.facebook.com/groups/geethanks
Follow along with recs (and share your own via DM) on the "Gee Thanks, Just Bought It!" Instagram: www.instagram.com/geethanksjustboughtitpod and shop all of our recs here: www.geethanksjustboughtit.com
See acast.com/privacy for privacy and opt-out information.

LondonLive Church Podcast
Glorious Inheritance | Dr Merid Gebru

LondonLive Church Podcast

Feb 26, 2022 · 41:04


What if you had a friend who was better than a brother? Proverbs 18:24 says a good friend IS better than a brother. In God we not only have a friend but a Big Brother (Hebrews 2:11, Romans 8:29, Mark 3:34), and because of our Big Brother we are very rich. In Jesus we have all the blessings and inheritance of God (1 Peter 1:3-4). Listen to this week's podcast to hear so much more on that Glorious Inheritance that can never perish, spoil or fade. Dr Merid Gebru is a medical doctor and trained general surgeon with a special interest in kidney transplant. He is passionate about group Bible study and married to Dr Meskerem Gelahun, with 2 daughters, Bel and Mica, both of whom serve at Londonlive Church.

Art of Power
How Big Tech silences dissent: Timnit Gebru's insider account

Art of Power

Feb 10, 2022 · 46:14


Google hired computer scientist Timnit Gebru to sniff out bias and other unethical practices in the company's sprawling artificial intelligence work. After she drafted a paper that did just that, she says, the company moved to fire her. In this episode of Art of Power, Gebru walks host Aarti Shahani through the twists and turns of life that led her to Silicon Valley. A refugee from Ethiopia, she migrated to Massachusetts as a teen, and then headed to Stanford University (though her college guidance counselor didn't think she was Stanford material). Gebru became a different kind of tech unicorn – a woman and an under-represented minority in the industry. Her departure from Google is one of the most high-profile exits that Big Tech has ever seen. She reflects on what her personal story means for a larger public that's grappling with the unchecked power of a handful of companies. She also explains how she's working to light little fires everywhere.

Marketplace All-in-One
Timnit Gebru envisions a future for smart, ethical AI

Marketplace All-in-One

Dec 16, 2021 · 9:50


Artificial intelligence can certainly be used or misused for harmful or illegal purposes, even unintentionally, when human biases are baked into its very code. So, what needs to happen to make sure AI is ethical? Marketplace’s Kimberly Adams speaks with Timnit Gebru, the founder and executive director of the new Distributed AI Research Institute. Gebru said one issue with current AI research is the incentives for doing it in the first place.

Forbes India Daily Tech Brief Podcast
LinkedIn is now also available in Hindi; Big Tech may challenge India's data law; $120 million drained in a crypto heist; plus, Google is making a smartwatch

Forbes India Daily Tech Brief Podcast

Dec 3, 2021 · 5:02


LinkedIn has started supporting Hindi—the first Indian regional language on the professional network—to support 600 million Hindi language speakers globally, the company said in a press release. With the launch of Hindi, LinkedIn now supports 25 languages globally. As part of the phase 1 rollout of LinkedIn in Hindi, members will be able to access their feed, profile, jobs, messaging, and create content in Hindi on desktop and their Android and iOS phones. The platform will also continue to add more Hindi publishers and creators in the coming weeks to boost member engagement and conversations in Hindi. Big Tech is preparing to legally challenge certain provisions in India's data protection bill if lawmakers accept and adopt all the recommendations of the Joint Committee of Parliament in the final legislation, Economic Times reports. The biggest point of contention is a proposal to classify social media platforms as publishers, as it places the onus for user-generated content on internet companies. That could impact companies including Facebook, Google's YouTube, Twitter, and WhatsApp—all of whom stand to lose the safe harbour or immunity currently provided by the Information Technology Act, 2000. Someone drained funds from multiple cryptocurrency wallets connected to the decentralised finance platform BadgerDAO on Wednesday, The Verge reports. According to the blockchain security and data analytics firm Peckshield, which is working with Badger to investigate the heist, the various tokens stolen in the attack are worth about $120 million. While the investigation is still ongoing, members of the Badger team have told users that they believe the issue came from someone inserting a malicious script in the UI of their website. Timnit Gebru, the Google AI researcher who was fired controversially a year ago, has launched a new research institute to ask questions about responsible use of artificial intelligence that she says Google and other tech companies won't, Wired reports. Gebru is now the founder and executive director of Distributed Artificial Intelligence Research. "Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures," she says. In more Google news, the internet search giant is planning to launch its smartwatch next year, Slashdot reports, citing a report from Insider, which is behind a paywall. "Two employees said a spring launch was possible if the latest testing round is a success, however, all sources stressed that details and timelines were subject to change depending on feedback from employees testing the device," reports Insider. The device, which is internally codenamed 'Rohan' (from The Lord of the Rings, maybe), will showcase the latest version of Google's smartwatch software to customers and partners, according to Slashdot. Tesla says it plans to ship a smaller version of its Cyberquad EV, named 'Cyberquad for Kids', in four weeks, MobileSyrup reports. While the Cyberquad—which costs $1,900—may be targeted at kids, Tesla's Head of Design says that adults can ride it as well. The company's website states that the little quad has 15 miles (24 km) of range and can go up to 10 mph (16 km/h). Its battery takes around five hours to charge, according to the report.

The Other Side Of The Firewall
Dr. Gebru Sounds Off

The Other Side Of The Firewall

Sep 27, 2021 · 17:34


What's up, everyone! In this episode, Ryan and Shannon discuss Dr. Gebru, Google's former AI ethics subject matter expert, calling for a reconceptualizing of AI and potential government regulation. Please LISTEN

LondonLive Church Podcast
Dr Merid Gebru | Jesus Is The BEST Of Friends We Have

LondonLive Church Podcast

Sep 4, 2021 · 53:42


Scripture: Luke 5:17-26. Teaser: Jesus, as our best friend, knows and understands our various needs, and He is able to deliver these in rather extraordinary and remarkable ways. Bio: Merid Gebru is an Ethiopian medical doctor and UK-trained general surgeon with a special interest in kidney transplant. He is passionate about group Bible study and married to Dr Meskerem Gelahun, with two daughters (Bel & Mica).

Soonish
Goodbye, Google

Soonish

Jun 25, 2021 · 42:09


What if a technology company becomes so rich, so powerful, so exploitative, and so oblivious that the harm it's doing begins to outweigh the quality and utility of its products? What if that company happens to run the world's dominant search, advertising, email, web, and mobile platforms? This month's episode of Soonish argues that it's time to rein in Google—and that individual internet users can play a meaningful part by switching to other tools and providers. It's half stem-winder, half how-to, featuring special guest Mark Hurst of the WFMU radio show and podcast Techtonic.

* * *

Back in 2019, in the episode A Future Without Facebook, I explained why I had decided that it was time to delete my Facebook account. In short, I was tired of being part of a system that amplified hateful and polarizing messages in order to keep users engaged and drive more advertising revenue for Zuckerberg & Co. I knew at the time that Google also engages in such practices at YouTube, and that the search giant's whole surveillance-capitalism business model rests on tracking users' behavior and serving them targeted ads. But I continued as a customer of Google nonetheless, while keeping one eye on the company to see whether its tactics were growing more toxic, or less. The moment when Google finally exhausted my patience came in December 2020, when the company fired a prominent Black computer scientist and AI ethicist named Timnit Gebru in a dispute over a scholarly paper she'd co-written. Gebru and her co-authors argued in the paper that without better protections, racial and gender bias might seep into Google's artificial intelligence systems in areas like natural language processing and face recognition. Google executives thought the paper was too harsh and forbade Gebru from publishing it; she objected; and things went downhill from there. It was a complicated story, but it convinced me that at the upper echelons of Google, any remnant of a commitment to the company's sweeping motto—"Don't Be Evil"—had given way to bland and meaningless statements about "protecting users" and "expanding opportunity" and "including all voices." In fact, the company was doing the opposite of all of those things. It was time for me to opt out. How I went about doing that—and how other consumers can too—is what this episode is all about. I explain the Gebru case and other problems at Google, and I also speak at length with guest Mark Hurst, a technology critic who runs the product design consultancy Creative Good and hosts the radio show and podcast Techtonic at WFMU. Mark publishes an important site called Good Reports, where consumers can find the best alternatives to the services offered by today's tech giants in areas like search, social media, and mobile technology. Hurst emphasizes—and I agree—that leaving Google isn't an all-or-nothing proposition. The company is so deeply embedded in our lives that it's almost impossible to cut it out entirely. Instead, users can uncouple from Google step by step—first switching to a different search engine, then trying a browser other than Chrome, then switching from Gmail to some other email platform, and so forth. "Setting a goal of getting ourselves 100 percent off of Google is unrealistic," Mark says. "And I think it's a little bit of a harmful goal, because it's so hard that people are going to give up early on. But instead, let's have a goal of learning what's happening in the world and then making some choices for ourselves, some small choices at first, of how we want to do things differently. If enough of us make the decision to extricate ourselves from Google, we'll form a movement and other companies will see an opportunity to build less exploitative tools for us. You've got to start somewhere!"

Notes
The Soonish opening theme is by Graham Gordon Ramsay. All additional music by Titlecard Music and Sound. If you enjoy Soonish, please rate and review the show on Apple Podcasts. Every additional rating makes it easier for other listeners to find the show. Listener support is the rocket fuel that keeps our little ship going! You can pitch in with a per-episode donation at patreon.com/soonish. Follow us on Twitter and get the latest updates about the show in our email newsletter, Signals from Soonish.

Chapter Guide
0:08 Soonish theme
00:21 Time to Find a New Favorite Restaurant
02:46 What I'm Not Saying
04:01 Re-introducing Mark Hurst
07:08 The Ubiquity of Google
11:04 Surveillance Capitalism and YouTube Extremism
12:29 The Timnit Gebru Case
18:01 Hurst: "Let's shut down the entire Google enterprise"
19:48 Midroll announcement: Support Soonish on Patreon
20:54 10 Steps toward Reducing Your Reliance on Google
29:04 Using Google Takeout
30:20 The Inevitability of YouTube
31:44 Be a Google Reducetarian
32:20 Enmeshed in Big Tech
37:04 The Value of Sacrifice
40:17 End Credits and Hub & Spoke Promo for Open Source

AI with AI
Xenomania

AI with AI

Play Episode Listen Later Apr 16, 2021 37:19


Andy and Dave discuss the latest in AI news, including the resignation of Samy Bengio from Google Brain, which fired ethicists Timnit Gebru in December and Margaret Mitchell in February. The Joint AI Center releases its request for proposals on Data Readiness for AI Development (DRAID). DARPA prepares for the quantum age with a program for Quantum Computer Benchmarking. And a separate DARPA program seeks to enable fully homomorphic encryption with its Data Protection in Virtual Environments (DPRIVE) program. A poll from Hyland on digital distrust shows that Americans think that over the next decade, AI has the most potential to cause harm. Amazon introduces the next level of “biometric consent” required for its delivery drivers, which includes an always-on camera observing the driver and gathering other data; drivers will lose their jobs if they do not consent to the monitoring. And Josh Bongard of the University of Vermont and Michael Levin of Tufts University, along with other researchers from Wyss and Harvard, join together to form the Institute for Computationally Designed Organisms (ICDO), which will focus on “AI-driven designs of new life forms.” In research, Bongard publishes the latest iteration of his team's mobile living machines, with Xenobots II, using frog cells to create life forms capable of motion, memory, and manipulation of the world around them. Researchers from the universities of Copenhagen, York, and Shanghai use neural cellular automata to grow 3D objects and functional machines within the Minecraft world. And OpenAI Robotics demonstrates the ability for a robotic arm to solve manipulation tasks, including tasks with previously unseen goals and objects, with asymmetric self-play. And the Book / Fun Site of the Week comes from the Special Interest Group on Harry Q. Bovik (SIGBOVIK), which presents “April Fools” research: descriptions of truly absurd, but fascinating, work. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.

AI with AI
Diplomachine

AI with AI

Play Episode Listen Later Mar 26, 2021 33:29


Andy and Dave discuss the latest in AI news, including the release of the U.S. Navy and Marine Corps Unmanned Campaign Framework, which describes the desired approach to developing and deploying unmanned systems. Google employees demand stronger laws to protect AI researchers, in the wake of the firings of Gebru and Mitchell. Hour One debuts technology that creates fully digital and photorealistic AI personas for content creation, such as a welcome receptionist or information desk. Pennsylvania state law now allows for autonomous delivery robots to use sidewalks and operate on roads. The U.S. Army announces the availability of a training set for facial recognition that also includes thermal camera images, which it will make available for “valid scientific research.” In research, Facebook AI demonstrates an algorithm capable of human-level performance in Diplomacy (no-press), using an equilibrium search to reason about what the other players are reasoning; the algorithm achieved a rank of 23 out of 1,128 human players. Researchers in Helsinki and Germany explore the effects of the Uncanny Valley, suggesting that a robot’s appearance changes how humans judge its decisions. The Resource of the Week comes via Pete Skomoroch, who pointed out that Wikipedia contains a massive list of datasets for machine learning research (along with useful summary details about each dataset). The Book of the Week is Telling Stories, with authors from around the globe bringing culturally different perspectives on tales of AI. And the Videos of the Week come from MIT, which has published its Introduction to Deep Learning course online, with free access. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.

Land of the Giants
Googlers vs. Google

Land of the Giants

Play Episode Listen Later Mar 23, 2021 48:28


On December 2nd, 2020, Dr. Timnit Gebru - co-lead of Google’s Ethical AI team - got an email that said Google had accepted her resignation. A resignation she didn’t think she made. Her exit is just the latest sign of the crisis unfolding within Google — a loss of trust between many of its employees and leadership. This week, what led to Gebru’s exit - and what it means for us, Google’s users. Because when enough people who work inside Google don't even trust each other -- how can we? Hosts: Shirin Ghaffary (@shiringhaffary) and Alex Kantrowitz (@kantrowitz) Enjoyed this episode? Rate us ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts. Want to get in touch? Tweet @recode. Subscribe for free. Be the first to hear next week's episode by subscribing in your favorite podcast app. Learn more about your ad choices. Visit megaphone.fm/adchoices

Haymarket Books Live
The Fight For the Future Organizing In and Around the Tech Industry (1-22-21)

Haymarket Books Live

Play Episode Listen Later Mar 4, 2021 93:18


Join Timnit Gebru and other important scholars and activists for a discussion of how we resist the corporate power of the tech monopolies. ———————————————— Big Tech touches nearly every part of our lives. From vacuuming up massive amounts of information about our movements and collecting images of our faces, to dictating where gig-work drivers should go and pushing warehouse workers to fulfill orders, big tech is pervasive in its reach and pernicious in its effect. But workers, organizers, and scholars are pushing back. We are forming unions and organizing collectives with our colleagues. We are sounding the alarm on the ways these technologies exacerbate structural racism and abet the rise of global fascism. And we are starting to win. In December of 2020 Google fired Timnit Gebru, the co-lead of their Ethical Artificial Intelligence Team, after she refused to accept their attempted censorship of her co-authored article questioning the ethics and environmental impact of large-scale AI language models. The termination sparked a new wave of organizing among tech workers, who quickly mobilized to defend Gebru against the corporate giant's efforts to silence criticism of a key part of their business model. This organizing—following on the heels of the walk-outs against defense contracts and preceding this month's announcement that Google workers have formed a union—offers important lessons about workers' power within one of capitalism's most profitable and important sectors. Join Timnit Gebru and other important scholars, activists, and organizers for a discussion of how we resist the corporate power of the tech monopolies who have increasing levels of control over our day-to-day lives. ———————————————— Speakers: Dr. Timnit Gebru is a co-founder of Black in AI. She was Staff Research Scientist and Co-Lead of the Ethical Artificial Intelligence team at Google before being terminated for demanding a justification for Google's censorship of her co-authored article questioning the environmental and ethical implications of large-scale AI language models. Dr. Alex Hanna is a sociologist and Senior Research Scientist on the Ethical AI team at Google. Her work centers on the origins of the training data which form the informational infrastructure of AI and the way these datasets exacerbate racial, gender, and class inequality. Charlton McIlwain (@cmcilwain) is Vice Provost for Faculty Engagement & Development at New York University, Professor of Media, Culture, and Communication, and founder of the Center for Critical Race and Digital Studies. Dr. Safiya Umoja Noble is an Associate Professor at the University of California, Los Angeles (UCLA) in the Department of Information Studies, where she serves as the Co-Founder and Co-Director of the UCLA Center for Critical Internet Inquiry (C2i2). She also holds appointments in African American Studies and Gender Studies. She is the author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, entitled Algorithms of Oppression: How Search Engines Reinforce Racism. Adrienne Williams is a former charter school junior high teacher and Amazon delivery driver turned labor organizer. Her ultimate goal is to force the powerful to abide by the same laws as the working class, in hopes that equity will lead to people freely organizing and advocating for themselves, which will create a happier society.
Meredith Whittaker is a research professor at New York University, co-founder and faculty director of the AI Now Institute at NYU, and founder of Google's Open Research group. Watch the live event recording: https://youtu.be/vDtOxrV9Bqc Buy books from Haymarket: www.haymarketbooks.org Follow us on Soundcloud: soundcloud.com/haymarketbooks

The Other Side Of The Firewall
Got Caught Slackin'

The Other Side Of The Firewall

Play Episode Listen Later Feb 27, 2021 84:07


What's up, everyone! In this episode Ryan, Shannon, and LeVon discuss Google trying to end the Dr. Gebru feud, Airbnb investing in Atlanta tech, and the City of Fort Morgan taking cybersecurity more seriously in the wake of Oldsmar's water plant hack. In Topic 1, LeVon will run down a new browser-tracking hack. In Topic 2, Shannon will break down a new Slack for Android vulnerability. All of this will be followed by “What have we been playing?” Please enjoy this jam-packed show and leave us your questions, comments, and concerns via the Patreon, Instagram, FB page, new FB group, Twitter, and email. Remember you can also leave us voice messages that we can listen to and answer on the show. Thanks! Articles:
Google is trying to end the controversy over its Ethical AI team. It's not going well
Airbnb to open Atlanta tech hub, bring hundreds of jobs
City of Fort Morgan takes steps to protect water supply from cyber attackers
New browser-tracking hack works even when you flush caches or go incognito
PSA: if you use Slack on Android, you might want to update your password
Website - www.theothersideofthefirewall.com --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/theothersideofthefirewall/support

Message à caractère informatique
#33 - Ensuring the Explainability of Ethical Functions

Message à caractère informatique

Play Episode Listen Later Feb 26, 2021 96:01


All the show notes are available at https://www.clever-cloud.com/fr/podcast/episode33 Featuring, in order of appearance: @ldoguin @hsablonniere @vballu @desmfr
00:04:00 NoSQL washing https://www.theolognion.com/nosql-fan-hospitalized-with-alcohol-poisoning-after-discovering-postgresql/ (shared on Twitter by @CowboyCaramel)
00:06:35 Video game play is positively correlated with well-being https://royalsocietypublishing.org/doi/10.1098/rsos.202049
00:13:00 Functional programming and category theory with the game Factorio - https://bartoszmilewski.com/2021/02/16/functorio/
00:17:40 Shapash - https://medium.com/oss-by-maif/shapash-une-nouvelle-solution-ossbymaif-pour-une-intelligence-artificielle-plus-transparente-c216f9ddb2e9 #OSSbyMAIF meetup: Shapash, for explaining your machine learning algorithms https://www.youtube.com/watch?v=r1R_A9B9apk
00:33:35 Ethics at Google: the people who were fired, the publication policy, and more https://www.zdnet.com/article/google-fires-top-ethical-ai-expert-margaret-mitchell/
Google employees form Alphabet Workers Union to bring back the ‘Don't be evil' motto https://www.zdnet.com/article/google-employees-form-alphabet-workers-union-to-bring-back-the-dont-be-evil-motto/
A video from science4all that sums up all the issues well in a few minutes https://www.youtube.com/watch?v=HbFadtOxs4k
Indeed, the December Reuters article explains that Google supervises what its employees can publish on sensitive topics https://www.reuters.com/article/us-alphabet-google-research-focus-idUSKBN28X1CB
The article that reportedly led to Gebru's ouster https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
21 States Are Now Vetting Unemployment Claims With a ‘Risky' Facial Recognition System https://onezero.medium.com/21-states-are-now-vetting-unemployment-claims-with-a-risky-facial-recognition-system-85c9ad882b60
The FTC (Federal Trade Commission) requires a company (Everalbum) to delete its models and results, along with the associated data, following misuse https://www.natlawreview.com/article/ftc-settles-facial-recognition-data-misuse-allegations-app-developer
The World Economic Forum on AI and ethics https://www.weforum.org/agenda/2021/02/we-need-to-talk-about-artificial-intelligence/
A Fortune article on companies waking up to AI's problems

I.A. Café - Enquête au cœur de la recherche sur l’intelligence artificielle

This episode is almost live. We wanted to publish, from time to time, news-style episodes. The formula is familiar: we look at what's being said, what we're reading, and what's happening in the world of AI; we give you a summary and discuss it a little. In this episode:
Ève explores the Clearview AI affair. A joint report by the privacy commissioners of Canada and the commissioners of three provinces concluded that the American technology company Clearview AI engaged in "mass surveillance" and committed a clear violation of Canadians' right to privacy.
David follows up on the Timnit Gebru case. He read two articles, including the one published by Gebru herself.
Jean-François presents a decision-support tool for the ethics of big data: the Data Ethics Decision Aid (DEDA).
The articles cited are here:
Enquête conjointe sur Clearview AI, Inc. par le Commissariat à la protection de la vie privée du Canada, la Commission d'accès à l'information du Québec, le Commissariat à l'information et à la protection de la vie privée de la Colombie-Britannique et le Commissariat à l'information et à la protection de la vie privée de l'Alberta. Rapport de conclusions d'enquête en vertu de la LPRPDE no 2021-001
Tamkin, A., Brundage, M., Clark, J., & Ganguli, D. (2021). Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models. arXiv preprint arXiv:2102.02503.
Franzke, A. S., Muis, I., & Schäfer, M. T. Data Ethics Decision Aid (DEDA): a dialogical framework for ethical inquiry of AI and data projects in the Netherlands. Ethics and Information Technology.
Mörch, C. M., Gupta, A., & Mishara, B. L. (2019). Canada protocol: An ethical checklist for the use of artificial intelligence in suicide prevention and mental health. arXiv preprint arXiv:1907.07493.
Support the show

Hashtag Trending
Hashtag Trending - Google engineers resign over Gebru firing; Amazon’s “Megacycle” shift for warehouse workers; Clearview AI troubles in Canada

Hashtag Trending

Play Episode Listen Later Feb 5, 2021 3:47


Two Google engineers resign over the firing of two women at the company, the internet groans after hearing about Amazon’s "Megacycle" shift, and the controversial tech company Clearview AI is facing some heat in Canada.

I.A. Café - Enquête au cœur de la recherche sur l’intelligence artificielle
Episode 7 - Element AI's Bye-Bye - Our Lovelace and IArk Awards (Part 2)

I.A. Café - Enquête au cœur de la recherche sur l’intelligence artificielle

Play Episode Listen Later Jan 14, 2021 49:02


Part two of our Lovelace and IArk awards. A reminder of the concept: once a quarter, maybe twice, we record an episode in which we do a kind of news review and hand out awards: the Lovelace awards and the IArk awards. For the Lovelace awards, we pick our favorites of the quarter (scientific papers, events, conferences, technical feats), and the most worthy and deserving receive our Lovelace awards. The counterpart: our IArk awards. Inspired by a disgusting pun, these are the scientific papers, events, conferences, and technical feats that make us say "I. A., rk!"
---
In this episode:
For Frédérick:
A Lovelace award to Timnit Gebru (again!). Fred summarizes two of her contributions to thinking about the ethical and social stakes of artificial intelligence.
An IArk award to "the Google AI (and Jeff Dean) response in the Gebru vs. Google affair," because he well deserves a second IArk award.
For JF:
A Lovelace award to DeepMind, Alphabet's laboratory (Google's parent company). The lab used AlphaFold's algorithms (the game of Go: a "white" stone in the history of AI) to solve a problem that had resisted solution for 50 years: predicting the exact shape of proteins.
An IArk award to "the sale of Element AI to the California software developer ServiceNow."
Articles cited:
Gebru et al. (2018). Datasheets for Datasets, ArXiv
Gebru et al. (2020). "Race and Gender," in Oxford Handbook of Ethics of AI
DeepMind (November 2020). AlphaFold: a solution to a 50-year-old grand challenge in biology
Karim Benessaieh (November 2020). Element AI acquise par une firme californienne
Production, hosting, and collaboration: Jean-François Sénéchal, Ph.D
Collaborators: David Beauchemin, Ève Gaumond, Frédérick Plamondon
Support the show

Trend Lines
A Scandal at Google and the Future of AI

Trend Lines

Play Episode Listen Later Dec 23, 2020 40:54


Earlier this month, Timnit Gebru, the co-leader of a team of researchers at Google specializing in the ethical implications of artificial intelligence, was unceremoniously ousted from her position. Some of the circumstances that led to her departure are disputed, but Gebru—a Black woman in a field that is overwhelmingly white and male—claims she was forced out for drawing unwelcome attention to the lack of diversity in Google’s workforce. She also claims she was “silenced” for her refusal to retract a paper that she had co-authored on ethical problems associated with certain types of AI models that are central to Google’s business. The episode has sparked a fierce backlash across Silicon Valley and beyond, including among current and former Google employees. This week on Trend Lines, WPR’s Elliot Waldman is joined by Karen Hao, the senior AI reporter for MIT Technology Review, to discuss the reaction to Gebru’s dismissal and the troubling issues she has raised around the ethical implications of recent advances in AI. To learn more about this topic, check out Karen’s weekly newsletter, The Algorithm, and the podcast she co-produces, In Machines We Trust. Relevant Articles on WPR:   It Will Take More Than an Antitrust Case to Fix the Problems of Big Tech The Troubling Rise of Facial Recognition Technology in Democracies Are Governments Sacrificing Privacy to Fight the Coronavirus Pandemic? Can New Norms of Behavior Extend the Rules-Based Order Into Cyberspace? Trend Lines is produced and edited by Peter Dörrie, a freelance journalist and analyst focusing on security and resource politics in Africa. You can follow him on Twitter at @peterdoerrie. To send feedback or questions, email us at podcast@worldpoliticsreview.com.

All Of It
Google, Artificial Intelligence, and Injustice

All Of It

Play Episode Listen Later Dec 22, 2020 13:08


Earlier this month, Google ousted Timnit Gebru, an ethicist researching how artificial intelligence can perpetuate systemic biases. Gebru's ouster, which was met with forceful backlash from her team, came after an internal review of a study she did that spotlighted injustices in one of Google's key products, text autocomplete. New York bureau chief at Bloomberg Business, Shelly Banjo, joins us to talk about her reporting on what this means for Google and the future of artificial intelligence in big tech.

Blocked and Reported
On Mental Health, Youth Gender Dysphoria, And Our Own (Arguable) Fallibility

Blocked and Reported

Play Episode Listen Later Dec 21, 2020 61:54


In a true sign that anything is possible in this crazy universe of ours, the hosts note that there were some errors in the previous episode and explain exactly what they were (to see them explained in written form, check out that episode's show notes). Then they discuss some mental-health-talk from Jesse that rubbed one reader the wrong way and continue on to the show's main event: a major court decision on youth gender dysphoria in England and an incredibly shoddy article in Foreign Policy about it. (Corrections: In the course of correcting last week's episode, we managed to introduce not one but two new errors, and uploaded a new version of this episode in which an increasingly untethered-to-sanity Jesse breaks in twice to explain them. First, Timnit Gebru's PhD is not in computer science, but rather in electrical engineering. This was just plain sloppiness on someone's part, possibly Jéssé's. Second, Gebru started at Google in 2018, not 2019. In this case, our excuse is that both The New York Times and Columbia Journalism Review wrote that she was hired "last year." This is false, and thank you to the Google employee who reached out to correct us and pointed us to a 2018 tweet in which Gebru talks about her early days at Google. Wired has this right [see below].) Show notes/Links: Last episode's show notes: https://barpodcast.fireside.fm/42 (https://barpodcast.fireside.fm/42) Bell v Tavistock, the ruling at issue: https://www.judiciary.uk/wp-content/uploads/2020/12/Bell-v-Tavistock-Judgment.pdf (https://www.judiciary.uk/wp-content/uploads/2020/12/Bell-v-Tavistock-Judgment.pdf) A High Court Decision in Britain Puts Trans People Everywhere at Risk: https://foreignpolicy.com/2020/12/15/uk-transphobia-transgender-court-ruling-puberty-blockers/ (https://foreignpolicy.com/2020/12/15/uk-transphobia-transgender-court-ruling-puberty-blockers/) Jesse's newsletter posts on key aspects of this controversy: A Response To Foreign Policy's Deeply Misleading Article, "A High Court Decision in Britain Puts Trans People Everywhere at Risk": https://jessesingal.substack.com/p/a-response-to-foreign-policys-deeply (https://jessesingal.substack.com/p/a-response-to-foreign-policys-deeply) No, desistance hasn't been debunked, and no, there's no evidence it's all that rare: Part: 1 https://jessesingal.substack.com/p/how-science-vs-made-two-gender-dysphoria (https://jessesingal.substack.com/p/how-science-vs-made-two-gender-dysphoria) Part 2: https://jessesingal.substack.com/p/how-science-vs-accidentally-invented (https://jessesingal.substack.com/p/how-science-vs-accidentally-invented) NYT wrong on Gebru's start date: https://web.archive.org/web/20201209213645/https://www.nytimes.com/2020/12/09/technology/timnit-gebru-google-pichai.html (https://web.archive.org/web/20201209213645/https://www.nytimes.com/2020/12/09/technology/timnit-gebru-google-pichai.html) CJR wrong on Gebru's start date: https://web.archive.org/web/20201211222743/https://www.cjr.org/themediatoday/google-researcher.php (https://web.archive.org/web/20201211222743/https://www.cjr.org/the_media_today/google-researcher.php) Wired right on Gebru's start date: https://www.wired.com/story/prominent-ai-ethics-researcher-says-google-fired-her/ (https://www.wired.com/story/prominent-ai-ethics-researcher-says-google-fired-her/) Tweet proving it: https://web.archive.org/web/20181030095756if_/https://twitter.com/timnitGebru (https://web.archive.org/web/20181030095756if_/https://twitter.com/timnitGebru)

The Other Side Of The Firewall
Do you stand with Dr. Gebru?

The Other Side Of The Firewall

Play Episode Listen Later Dec 20, 2020 65:02


Happy Holidays, everyone! In this episode Ryan, Shannon, and LeVon discuss Google's firing of Dr. Timnit Gebru, former Cisco engineer Mr. Ramesh's punishment, the FireEye breach, and Cloudflare and Apple's new DNS protocol. The crew also waxes nostalgic about topics such as Net Zero, Netflix discs, and the Sega Dreamcast. Please enjoy this jam-packed show and leave us your questions, comments, and concerns via the Patreon, Instagram, FB, Twitter, and email. Remember you can also leave us voice messages that we can listen to and answer on the show. Thanks! Articles:
Standing with Dr. Timnit Gebru — #ISupportTimnit #BelieveBlackWomen
Ex-Cisco Systems employee sentenced for damaging network
FireEye, a Top Cybersecurity Firm, Says It Was Hacked by a Nation-State
Cloudflare and Apple made a new DNS protocol to protect your data from ISPs
--- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/theothersideofthefirewall/support

Blocked and Reported
Episode 43: On Mental Health, Youth Gender Dysphoria, And Our Own (Arguable) Fallibility

Blocked and Reported

Play Episode Listen Later Dec 19, 2020 60:17


In a true sign that anything is possible in this crazy universe of ours, the hosts note that there were some errors in the previous episode and explain exactly what they were (to see them explained in written form, check out that episode's show notes). Then they discuss some mental-health-talk from Jesse that rubbed one reader the wrong way and continue on to the show's main event: a major court decision on youth gender dysphoria in England and an incredibly shoddy article in Foreign Policy about it.(Corrections 10:10 pm 12/18/2020: In the course of correcting last week's episode, we managed to introduce not one but two new errors, and just uploaded a new version in which Jesse breaks in twice to explain them. First, Timnit Gebru's PhD is not in computer science, but rather in electrical engineering. This was just plain sloppiness on someone's part, possibly Jéssé's. Second, Gebru started at Google in 2018, not 2019. In this case, our excuse is that both The New York Times and Columbia Journalism Review wrote that she was hired "last year." This is false, and thank you to the Google employee who reached out to correct us and pointed us to a 2018 tweet in which Gebru talks about her early days at Google. Wired has this right.)Show notes/Links:Bell v Tavistock, the ruling at issue: https://www.judiciary.uk/wp-content/uploads/2020/12/Bell-v-Tavistock-Judgment.pdf A High Court Decision in Britain Puts Trans People Everywhere at Risk: https://foreignpolicy.com/2020/12/15/uk-transphobia-transgender-court-ruling-puberty-blockers/ Jesse's newsletter posts on key aspects of this controversy:A Response To Foreign Policy's Deeply Misleading Article, "A High Court Decision in Britain Puts Trans People Everywhere at Risk": https://jessesingal.substack.com/p/a-response-to-foreign-policys-deeply No, desistance hasn't been debunked, and no, there's no evidence it's all that rare:Part: 1 https://jessesingal.substack.com/p/how-science-vs-made-two-gender-dysphoria Part 2: https://jessesingal.substack.com/p/how-science-vs-accidentally-invented  This is a public episode. Get access to private episodes at www.blockedandreported.org/subscribe

Blocked and Reported
What In The Everloving Name Of Diversity, Equity, And Inclusion Is Going On At Google?

Blocked and Reported

Play Episode Listen Later Dec 14, 2020 63:44


A major blowup at Google led to the firing or resignation (depending on who you ask) of Timnit Gebru and accusations that the company's social-justice efforts are performative at best. But behind the scenes, some Googlers are telling a different, slightly more complicated story. Who's right? Katie and Jesse bring on Jon Stokes, an Ars Technica founder who is much better connected in the tech world than they are, to help explain this tangled controversy. (Correction: In the original version of this episode we posted for our patrons, our guest, Jon Stokes, wrongly described Timnit Gebru's academic background as being in sociology, with her having also done some work in computer science. That should have been flipped: She has a PhD from the Stanford Artificial Intelligence Laboratory, but has done plenty of work that intersects with sociology. Jon also said she was educated at M.I.T., but all her degrees are from Stanford. Whenever an error makes it into the final show, that is 100% the fault of hosts, and we apologize. We've snipped out the audio containing the errors and added a link to Gebru's Stanford page to the top of the show notes.) Show notes/Links: Timnit Gebru's background: https://ai.stanford.edu/~tgebru/ (https://ai.stanford.edu/~tgebru/) Jon Stokes: https://twitter.com/jonst0kes (https://twitter.com/jonst0kes) His site, The Prepared: http://www.theprepared.com (http://www.theprepared.com) The New York Times: Google Chief Apologizes for A.I. Researcher’s Dismissal - https://www.nytimes.com/2020/12/09/technology/timnit-gebru-google-pichai.html (https://www.nytimes.com/2020/12/09/technology/timnit-gebru-google-pichai.html) The New York Times: Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I. - https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html (https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html) Wired: Behind the Paper That Led to a Google Researcher’s Firing - https://www.wired.com/story/behind-paper-led-google-researchers-firing/ (https://www.wired.com/story/behind-paper-led-google-researchers-firing/) Platformer: The withering email that got an ethical AI researcher fired at Google - https://www.platformer.news/p/the-withering-email-that-got-an-ethical (https://www.platformer.news/p/the-withering-email-that-got-an-ethical) MIT Technology Review: We read the paper that forced Timnit Gebru out of Google. Here’s what it says. - https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/ (https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/) Twitter: Yann LeCun on the data problem - https://twitter.com/ylecun/status/1274782757907030016 (https://twitter.com/ylecun/status/1274782757907030016) Twitter: Gebru disagrees - https://twitter.com/timnitGebru/status/1274808654227619840 (https://twitter.com/timnitGebru/status/1274808654227619840) One of the Reddit posts in question critical of Gebru: https://www.reddit.com/r/MachineLearning/comments/k77sxz/d_timnit_gebru_and_google_megathread/gepq3u8/?utm_source=reddit&utm_medium=web2x&context=3 (https://www.reddit.com/r/MachineLearning/comments/k77sxz/d_timnit_gebru_and_google_megathread/gepq3u8/?utm_source=reddit&utm_medium=web2x&context=3)

Blocked and Reported
Episode 42: What In The Everloving Name Of Diversity, Equity, And Inclusion Is Going On At Google? (with Jon Stokes)

Blocked and Reported

Play Episode Listen Later Dec 12, 2020 61:17


A major blowup at Google led to the firing or resignation (depending on who you ask) of Timnit Gebru and accusations that the company's social-justice efforts are performative at best. But behind the scenes, some Googlers are telling a different, slightly more complicated story. Who's right? Katie and Jesse bring on Jon Stokes, an Ars Technica founder who is much better connected in the tech world than they are, to help explain this tangled controversy. (Correction: In the original version of this episode we posted for our patrons, our guest, Jon Stokes, wrongly described Timnit Gebru's academic background as being in sociology, with her having also done some work in computer science. That should have been flipped: She has a PhD from the Stanford Artificial Intelligence Laboratory, but has done plenty of work that intersects with sociology. Jon also said she was educated at M.I.T., but all her degrees are from Stanford. Whenever an error makes it into the final show, that is 100% the fault of hosts, and we apologize. We've snipped out the audio containing the errors and added a link to Gebru's Stanford page to the top of the show notes.) (More corrections, 12/18/2020: In the original versions of this podcast, both free and patrons-only, Jon also said at approximately 20:40 that Gebru had been at Google since 2016, though he hedged this with an "I think." She had actually only been at Google since 2019 [YIKES -- actually 2018. Listen to the updated episode for a correction of the correction]. At about 56:30 [free version timestamp], he laid out a storyline, which he presented as informed speculation, in which Jeffrey Dean, head of Google AI, had promised Gebru and some of her allies in 2016 that they could do ethical work. Then, in this telling, Google pivoted to AI, and as a result Gebru and her colleagues' work was seen as a liability and came under more scrutiny. But that storyline doesn’t quite work, as Jon presented it, given that Gebru only arrived at Google in 2019, well after Sundar Pichai announced the company would be “AI first” in 2016. We've snipped out the audio containing these errors and will be discussing all of this episode's errors on the free episode of the podcast set to go live 12/21/2020, and earlier for patrons.) Show notes/Links: Timnit Gebru's background: https://ai.stanford.edu/~tgebru/ Jon Stokes: https://twitter.com/jonst0kes His site, The Prepared: http://www.theprepared.com The New York Times: Google Chief Apologizes for A.I. Researcher’s Dismissal - https://www.nytimes.com/2020/12/09/technology/timnit-gebru-google-pichai.html The New York Times: Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I. - https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html Wired: Behind the Paper That Led to a Google Researcher’s Firing - https://www.wired.com/story/behind-paper-led-google-researchers-firing/ Platformer: The withering email that got an ethical AI researcher fired at Google - https://www.platformer.news/p/the-withering-email-that-got-an-ethical MIT Technology Review: We read the paper that forced Timnit Gebru out of Google. Here’s what it says. - https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/ Twitter: Yann LeCun on the data problem - https://twitter.com/ylecun/status/1274782757907030016 Twitter: Gebru disagrees - https://twitter.com/timnitGebru/status/1274808654227619840 (Note: As of 10:45 a.m.
on Saturday I'm having trouble finding the throwaway Reddit posts we mention, but when I do I'll add those as well. Update: Thanks, John, for posting this in the comments: https://www.reddit.com/r/MachineLearning/comments/k77sxz/d_timnit_gebru_and_google_megathread/gepq3u8/?utm_source=reddit&utm_medium=web2x&context=3) This is a public episode. Get access to private episodes at www.blockedandreported.org/subscribe

ALLsportsradio
Daniel Abraham Gebru (Paralympic champion, Rio 2016) - ALLsportsradio LIVE! 21 October 2020

ALLsportsradio

Play Episode Listen Later Oct 21, 2020 12:31


ALLsportsradio LIVE! was once again devoted to Fonds Gehandicaptensport! With FGS director Nike Boor as sidekick, Robert Denneman spoke with Eelke van der Wal and Larissa Havik of the Dutch paracycling squad, and with Paralympic champion Daniel Abraham Gebru. Every weekday between 12:00 and 13:00, ALLsportsradio treats you to a tasty sporting lunch on ALLsportsradio LIVE! With studio guests, updates, reports, interviews, and the special stories behind the sport. Whatever the sport, and at whatever level: if the story is good, you'll hear it here!

WIRED Business – Spoken Edition
IBM's Withdrawal Won't Mean the End of Facial Recognition

WIRED Business – Spoken Edition

Play Episode Listen Later Jun 12, 2020 7:31


To some in the tech industry, facial recognition increasingly looks like toxic technology. To law enforcement, it’s an almost irresistible crime-fighting tool. IBM is the latest company to declare facial recognition too troubling. CEO Arvind Krishna told members of Congress Monday that IBM would no longer offer the technology, citing the potential for racial profiling and human rights abuse. In a letter, Krishna also called for police reforms aimed at increasing scrutiny and accountability for misconduct. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote Krishna, the first non-white CEO in the company’s 109-year history. IBM has been scaling back the technology’s use since last year. Krishna’s letter comes amid public protest over the killing of George Floyd by a police officer and police treatment of black communities. But IBM’s withdrawal may do little to stem the use of facial recognition, as a number of companies supply the technology to police and governments around the world. “While this is a great statement, it won’t really change police access to #FaceRecognition,” tweeted Clare Garvie, a researcher at Georgetown University's Center on Privacy and Technology who studies police use of the technology. She noted that she had not so far come across any IBM contracts to supply facial recognition to police. According to a report from the Georgetown center, by 2016 photos of half of American adults were in a database that police could search using facial recognition. Adoption has likely swelled since then. A recent report from Grand View Research predicts the market will grow at an annual rate of 14.5 percent between 2020 and 2027, fueled by “rising adoption of the technology by the law enforcement sector.” The Department of Homeland Security said in February that it has used facial recognition on more than 43.7 million people in the US, primarily to check the identity of people boarding flights and cruises and crossing borders. Other tech companies are scaling back their use of the technology. Google in 2018 said it would not offer a facial recognition service; last year, CEO Sundar Pichai, indicated support for a temporary ban on the technology. Microsoft opposes such a ban, but said last year that it wouldn’t sell the tech to one California law enforcement agency because of ethical concerns. Axon, which makes police body cameras, said in June 2019 that it wouldn’t add facial recognition to them. But some players, including NEC, Idemia, and Thales, are quietly shipping the tech to US police departments. The startup Clearview offers a service to police that makes use of millions of faces scraped from the web. The technology apparently helped police hunt down a man accused of assaulting protesters in Montgomery County, Maryland. At the same time, public unease over the technology has prompted several cities, including San Francisco, Oakland, and Cambridge, Massachusetts, to ban use of facial recognition by government agencies. Officials in Boston are considering a ban; supporters point to the potential for police to surveil protesters. Amid the protests following Floyd’s killing “the conversation we’re having today about face surveillance is all the more urgent,” Kade Crockford, director of the Technology for Liberty program at the ACLU of Massachusetts, said at a press conference Tuesday. 
Timnit Gebru, a Google researcher who has played an important role in revealing the technology’s shortcomings, said during an event on Monday that facial recognition has been used to identify black protesters, and argued that it should be banned. “Even perfect facial recognition can be misused,” Gebru said. “I’m a black woman living in the US who has dealt with serious consequences of racism. Facial recognition is being used against the black community.”

Broad Science
What is socially responsible AI?

Broad Science

Play Episode Listen Later Apr 3, 2020 74:21


On this episode, it’s all about ethics in AI! We’ll be sharing different stories about how AI is being used, what the pitfalls are, and who in the field is trying to make changes. We chat to the next generation of AI experts to understand how their institutions are preparing them (or not) to use AI ethically. Surya Mattu, a data scientist who was part of the Pulitzer-nominated ProPublica investigation “Machine Bias”, talks to us about the report that jumpstarted a global conversation. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (Machine Bias) To understand the current landscape of ethics and AI, we spoke to one of the most prominent advocates for inclusion and diversity in the field, Dr. Timnit Gebru. Dr. Gebru is a Research Scientist in the Ethical AI team at Google and founder of Black in AI (@black_in_ai). How has our world come to associate the assistance of AI with women? Dr. Myriam Sweeney, assistant professor of Library and Information Studies at the University of Alabama, helps us navigate this. Hear a reenactment of the 1920s play Rossum’s Universal Robots by Czech playwright Karel Capek, who coined the term robot (acted by Morgan Sweeney and Matt Goldberg). Lastly, Dr. Kirk Bansak highlights the possibilities of using AI for good, including to help place refugees in the best possible host communities. AI guides: https://www.wired.com/story/guide-artificial-intelligence/ https://towardsdatascience.com/ai-machine-learning-deep-learning-explained-simply-7b553da5b960

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Causality 101 with Robert Osazuwa Ness - #342

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Jan 27, 2020 43:14


Today we’re accompanied by Robert Osazuwa Ness, Machine Learning Research Engineer at ML Startup Gamalon and Instructor at Northeastern University. Robert, who we had the pleasure of meeting at the Black in AI Workshop at NeurIPS last month, joins us to discuss:
- Causality, what it means, and how that meaning changes across domains and users.
- Benefits of causal models vs non-causal models.
- Real-world applications of causality.
- Various tools and packages for causality.
- Areas where it is effectively being deployed, like ML in production.
- Our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning,” for which you can find details at twimlai.com/community.
The complete show notes for this episode can be found at twimlai.com/talk/342.

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Trends in Fairness and AI Ethics with Timnit Gebru - #336

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Jan 6, 2020 49:45


Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more. We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via twitter @samcharrington or @twimlai. The complete show notes for this episode can be found at twimlai.com/talk/336. Check out the rest of the series at twimlai.com/rewind19!

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
What Does it Mean for a Machine to "Understand"? with Thomas Dietterich - #315

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Nov 7, 2019 38:09


Today we’re joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. We had the pleasure of discussing Tom’s recent blog post, “What does it mean for a machine to ‘understand’?”, in which he discusses:
- Tom’s position on what qualifies as machine “understanding”, including a few examples of systems that he believes exhibit understanding.
- The role of deep learning in achieving artificial general intelligence.
- The current “Hype Engine” that exists around AI research, and SOOO much more.
Make sure you check out the show notes at twimlai.com/talk/315, where you’ll find links to Tom’s blog post, as well as a ton of other references.

Good Code
Timnit Gebru on the limits of Artificial Intelligence

Good Code

Play Episode Listen Later Apr 23, 2019 20:12


In this episode, we speak with Timnit Gebru, a research scientist in the Ethical Artificial Intelligence team at Google AI. We talk about algorithmic biases, and Gebru explains why she created the group "Black in AI" to address the lack of diversity in the field. She calls for standardization of algorithms and cautions against focusing on making them fairer without thinking about whether they should exist in the first place.

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Human-Centered Design with Mira Lane - TWiML Talk #233

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Feb 22, 2019 47:04


Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft. Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations. We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai. The complete show notes for this episode can be found at twimlai.com/talk/233. For more information on the AI for the Benefit of Society series, visit twimlai.com/ai4society.

Ipse Dixit
Aman Gebru on Traditional Knowledge as Prior Art

Ipse Dixit

Play Episode Listen Later Feb 17, 2019 43:02


In this episode, Aman Gebru, Visiting Assistant Professor of Law at Yeshiva University Benjamin N. Cardozo School of Law, discusses his article "Patents, Disclosure, and Biopiracy," which will be published in the Denver Law Review. Gebru begins by describing what traditional knowledge is and why it is important to innovation. He then explains the role of disclosure in the patent system and why it currently doesn't work well for traditional knowledge. Then he presents an alternative model for thinking about how the Patent Office should treat disclosure of traditional knowledge. And he explains why this alternative approach is likely to produce better results for both the people who hold traditional knowledge and the companies that want to use it. Gebru is on Twitter at @aman_gebru. See acast.com/privacy for privacy and opt-out information.

Man Glaubt Es Nicht!
Episode 17.04: Crushing Evil

Man Glaubt Es Nicht!

Play Episode Listen Later May 7, 2017 106:20


Comments please at https://manglaubtesnicht.wordpress.com/?p=2661 - With God on our side: Federal Interior Minister Thomas de Maizière (CDU) has presented a ten-point catalog for a German Leitkultur ("guiding culture") - Post-traumatic stress disorder? US Secretary of the Army Mark Green has been reading the Bible and wants to "crush people who do the wrong things." - Cognitive simplicity: reasons why people take obviously false claims into their worldview - God doesn't want a flea market! Detailed prohibition laws for Sundays and public holidays, with NRW as the example - Not ashamed enough: the Protestant Church takes a tumble over same-sex civil partnerships in Saxony-Anhalt - Statistic of the month: today's youth are smarter than assumed and no longer fall for the same old chestnuts - Who decides about my death: Cardinal Marx, or I myself? Assisted dying in Germany - Listeners berate the podcasters: engine failure, tunnel systems, and the occult music industry - An end with horror, or onward with a roar: the first season of MGEN is complete

Feisworld Podcast
Ep 53. Karen Fan and Felege Gebru (Part 2): a journey from Newton North High School to the White House

Feisworld Podcast

Play Episode Listen Later Feb 11, 2016 44:37


In 2013, NNHS InvenTeam applied to enter the Lemelson-MIT High School Student Grant. After a year of hard work (including the summer), their project was chosen to be presented at the White House Science Fair in front of President Barack Obama. In Part 2, we set aside glories and trophies and begin focusing on the daily lives of two college students: What is a day in the life of an MIT / Brown student like? How to obtain a "Pirate License" at MIT? Why does Felege run with a "clicker" in the morning at Brown? What do MIT college students do day in and day out? What are a few things (in Karen and Felege's opinions) that could improve the current state of higher education? Why is it important to learn how to think? Quick final questions - What does success mean? How to live a fulfilling life? What does a dream project look like to Karen and Felege without concerns of budget and resources? How did this all happen? You may be wondering how I got in touch with Mrs. Brooks in the first place. In 2011, I helped organize a field trip for design students at the Commonwealth School and Newton North High School to visit Sapient (an agency I was working for at the time) to explore design opportunities beyond traditional disciplines. Immediately after the field trip, I created the very first High School Internship Program at SapientNitro. Today as part of Feisworld Inc. services and offerings, I continue to mentor high school students, college students, and young professionals: http://www.feisworld.com/work-with-fei/ --- Send in a voice message: https://anchor.fm/feisworld/message Support this podcast: https://anchor.fm/feisworld/support

Feisworld Podcast
Ep 52. Karen Fan and Felege Gebru (Part 1): a journey from Newton North High School to the White House

Feisworld Podcast

Play Episode Listen Later Feb 10, 2016 36:42


Show notes, tools and resources: feisworld.com/blog/karen-fan-felege-gebru I have served on the Advisory Committee for Newton Public Schools Career & Vocational Technical Education since 2011. I met Karen Fan and Felege Gebru through InvenTeam, initiated by an incredible art teacher named Sue Brooks at Newton North High School. InvenTeam functions very much like a small design and technology firm, where every student on the team has a specific role, such as designer, engineer, or product manager. InvenTeam chose to participate in the Lemelson-MIT competition. After a year of hard work, their project was chosen to be presented at the White House Science Fair in 2014. President Barack Obama greeted the students and even asked questions about the project. When I received an email from Mrs. Brooks about this update, I almost fell out of my chair at work. As you can imagine, I had to invite them to Feisworld. Today, Karen is a sophomore at MIT and Felege is a junior at Brown. In Part 1, we dive right into their visit to the White House: What was it like for them to meet with President Obama, presenting to him and then having an intellectual exchange? How was InvenTeam formed? Who was the key person to make it happen? What was the project they worked on? Who did they collaborate with? How has this experience changed both of their lives? What does design have to do with technology? Why does Mrs. Brooks go out of her way to help students who are interested in the arts? --- Send in a voice message: https://anchor.fm/feisworld/message Support this podcast: https://anchor.fm/feisworld/support