Artificial intelligence made its mark in 2023. News of AI's rapid advancement dominated the headlines, inspiring awe at the technology's potential while raising concerns about the content and information it produces. AI can be used to generate music, videos and pictures, heightening worries about deepfakes circulating on the internet. Despite the fears associated with AI, this technology has the potential to transform our lives for the better. But can it be done in an ethical way? MPR News host Angela Davis talks with Elizabeth M. Adams about what to pay attention to when it comes to AI, and how to stay informed as the technology rapidly changes.

Guest: Elizabeth M. Adams is a distinguished global influencer in Responsible AI. She serves as CEO and Responsible AI advisor at EMA Advisory Services, leading a revolutionary approach to broader employee stakeholder participation in Responsible AI. Adams collaborates with CEOs and executives to establish robust governance frameworks. As a scholar-practitioner, she brings a unique lens to her research on leadership, yielding findings that shape the future of technology. Adams advocates for ethical artificial intelligence platforms and accountability in technology innovation.

Subscribe to the MPR News with Angela Davis podcast on Apple Podcasts, Google Podcasts, Spotify or RSS.
Data is being called the new oil as companies race to collect as much of it as possible. But without the right measures to keep AI-centric biases in check, or the forethought to incorporate ethical measures early, the real power of data can become too slick to handle. Listen as CEO Elizabeth M. Adams and author Joe Weinman talk about what the future of IT can (and should) look like. From AI bias to unreliable algorithms to the working harmony between humans and machines, see what we should be accounting for as our IT-enabled future fast approaches.

Key Takeaways:
[2:43] Why does bias happen in IT systems? When there isn't enough diversity in the data sets a model is trained on, that bias carries through the life of the algorithm. Elizabeth shares an example from facial recognition: the data is sold on the market, a customer uses it to decide whether or not someone is deemed a threat, and the decision rests on biased information. If law enforcement agencies then use that data, they can over-police already over-policed communities and cause a systemic problem, all because of that data.
[5:35] All areas of our lives are affected by algorithms, from traffic patterns to predictions about who should get a loan, their interest rate, health insurance, and what type of health coverage someone is granted.
[6:13] Joe shares two scenarios for how humans will interact with machines over the coming decades. In the first, humans are replaced by machines. In the second, and most likely, humans collaborate with machines to create better solutions and higher productivity.
[8:00] Human supervision remains extremely relevant in using information technology and AI. Joe shares examples from MIT's Kevin Slavin, such as flash crashes caused by program trading.
[10:54] Responsibility in AI is shared between technical and non-technical teams. Building ethical technology doesn't eliminate the possibility of unethical results, and we need more resources dedicated to areas like AI ethics and governance within our companies, especially large ones acting as nation states.
[16:27] Elizabeth discusses best practices for adding ethics to more computer science courses so that students gain a critical perspective early on.
[18:09] Companies that don't consider themselves to be in the tech business will need to play catch-up fast and take on that responsibility themselves before the government has to step in. Hopefully, more companies will begin to take a more serious look at the ethical components of the tech they rely on. Elizabeth discusses long-wave theory, which describes how long technological revolutions take to unfold.
[23:27] Will we end up in a Terminator SkyNet scenario? Quite possibly, says Joe, but we have to figure out where humans will be in the loop and understand what our algorithms are doing and how they're training other algorithms.

Quotes:
[2:25] "Data is what they're calling the new oil, and there's a race to how much data a company can consume." - Elizabeth
[5:39] "All the technologies that make sense of more data in less time and more intricate ways are fueling some of the most exciting and polarizing advancements." - Joe
[7:57] "The best performance sometimes is through a joint human and machine." - Joe
[14:10] "If you look at human behavior, you have a wide spectrum of possibilities, ranging from Mother Teresa to say a dictator that kills millions of people. The way the technology gets employed, and that is not the technology's fault." - Joe
[18:48] "For those companies who are not able to quickly adapt to this digital moment that we are having, I don't think they will be around for long. That's where we are, where we are going to stay, and where the jobs are going to be." - Elizabeth
[25:18] "We have to put ethics at the forefront of all of our business. Whether you think you work in tech or not." - Joe

Continue on your journey: pega.com/podcast

Mentioned:
Elizabeth Adams: Twitter | LinkedIn
EMA Advisory Services
Joe Weinman: Twitter | LinkedIn
Digital Disciplines
Cloudonomics
Kevin Slavin
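The [2:43] takeaway describes how a group that is under-represented in training data ends up bearing the errors of the finished system. A minimal sketch of that mechanism, using made-up numbers and a toy one-threshold "model" (nothing here comes from the episode): the model is fit to minimize overall training error, the majority group dominates that objective, and the minority group's error rate ends up far higher.

```python
# Toy illustration of bias from an unrepresentative training set.
# Each sample is (score, label); the "model" is a single cutoff that
# predicts positive when score > threshold. All numbers are hypothetical.

def best_threshold(samples):
    """Pick the cutoff that minimizes error on the training set."""
    candidates = sorted({s for s, _ in samples})
    def errors(t):
        return sum((s > t) != label for s, label in samples)
    return min(candidates, key=errors)

def error_rate(samples, t):
    wrong = sum((s > t) != label for s, label in samples)
    return wrong / len(samples)

# Group A supplies 90 of the 100 training examples; its classes are
# cleanly separated by score. Group B's classes overlap differently,
# but it is too small to influence the fitted threshold.
group_a = [(0.2, False)] * 45 + [(0.8, True)] * 45
group_b = [(0.55, False)] * 5 + [(0.45, True)] * 5

t = best_threshold(group_a + group_b)
print(f"learned threshold: {t}")
print(f"error on group A: {error_rate(group_a, t):.0%}")
print(f"error on group B: {error_rate(group_b, t):.0%}")
```

The overall training error looks small, which is exactly why the problem is easy to miss unless error rates are broken out per group, the point Elizabeth makes about facial recognition data flowing downstream to customers and law enforcement.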
Join us for the world premiere of Black Women In Artificial Intelligence - Beyond the Lab with our special guest, author Elizabeth M. Adams, a technology integrator working at the intersection of Cyber Security, AI Ethics and AI Governance, focused on Ethical Tech Design. She also passionately teaches, advises, consults, speaks and writes on critical subjects within Diversity & Inclusion in Artificial Intelligence, such as racial bias in Facial Recognition Technology, Video Surveillance, Predictive Analytics and Children's Rights. Beyond the lab, she has written several children's books, including "Little Miss Minnesota", "Little A.I. and Peety" and the soon-to-be-released "I'm Beautiful".
In this HRchat episode, Bill Banham talks with Elizabeth M. Adams, Stanford University Fellow: Race & Technology and Co-Chair of the Black Employee Network, about the impact of artificial intelligence at work.

Elizabeth is a Diversity & Inclusion in Artificial Intelligence practitioner, advisor, consultant, speaker, writer, and author. She has facilitated Diversity & Inclusion learning events focused on racial and gender bias in facial recognition technology, video surveillance, predictive analytics, and children's rights.

Elizabeth was a speaker at the recent HR Innovation and Future of Work Global Online Conference and Workshop from Hacking HR.

This episode of the HRchat show was recorded in February 2020 and is supported by Espresa, a firm helping to define and ignite the HR tech space to disrupt culture for good.

We do our best to ensure editorial objectivity. The views and ideas shared in this episode are entirely independent of our show sponsors. There is no relationship between the guest and companies advertising within the podcasts published by The HR Gazette or its partners.
Data collection, privacy and ownership, unbiased training data and auditing algorithms: ethics ties in everywhere, and that's why it's always a good investment to make. In a lightning talk, Elizabeth M. Adams, a Race and Technology Stanford University Fellow and IEEE P70XX Series on AI Ethics board fellow, shares her views on ethical tech design.
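One concrete form the "auditing algorithms" idea can take is comparing a system's selection rates across demographic groups. A minimal sketch below applies the widely used four-fifths (80%) rule of thumb for disparate impact; the group names and approval outcomes are entirely hypothetical, and a real audit would go far beyond this single check.

```python
# Minimal algorithm-audit sketch: compare per-group selection rates and
# flag when the lowest rate falls below 80% of the highest (the common
# "four-fifths rule" heuristic for disparate impact). Data is made up.

def selection_rate(decisions):
    """Fraction of positive (e.g. approved) outcomes in a group."""
    return sum(decisions) / len(decisions)

def four_fifths_check(group_decisions):
    """group_decisions maps group name -> list of 0/1 outcomes.
    Returns per-group rates and the min/max rate ratio; ratios under
    0.8 are conventionally treated as a red flag worth investigating."""
    rates = {g: selection_rate(d) for g, d in group_decisions.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 8/10 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 4/10 approved
}
rates, ratio = four_fifths_check(decisions)
print(rates)
print(f"ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50, below 0.8 -> flagged
```

A disparity flagged this way is a starting point for investigation, not proof of wrongdoing by itself, which is why talks like this pair auditing with governance and ethical design rather than treating any one metric as sufficient.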