I am honestly so excited to share this week's episode because it's one that stuck with me long after we finished recording. If you have ever faced a layoff or felt the pressure of a ticking clock, you will definitely want to tune in to this conversation!

My guest for this episode, Namrata Kulkarni, is a highly skilled data architect and business intelligence engineer who came to the U.S. in 2015 to pursue her master's degree at Syracuse University. Like many immigrants, her journey has not been easy, but it has shaped her into the person she is today. After eight years with the same company, Namrata was laid off while on an H-1B visa, which meant that she had only sixty days to land a new role or risk losing her right to stay in the country!

This is pressure that most of us cannot imagine, but what really struck me was how Namrata responded. She skipped the shock and jumped straight into action, choosing gratitude over resentment. In her own words, "The layoff didn't happen to me—it happened for me." I honestly had to pause when she said that, because how often do any of us reframe setbacks that way?

Listen in as Namrata opens up about the challenges of job hunting for the first time outside of her company, relearning how to interview, and wrestling with whether or not to tell her family that she had been laid off. There is also a vulnerable moment in which she talks about the well-meaning but unhelpful things people say, such as "I'm so sorry," which simply don't land the way people tend to think they do. (Speaking as someone who's been laid off more than once, I felt that deeply.)

What is especially inspiring here is not just that she landed one job offer but TWO, in a job market that is anything but stable right now. She did it by focusing on what was in her control: preparation, mindset, and leaning on the relationships she had built over the years.

If you are facing uncertainty, dealing with a career change, or supporting someone who is, you will definitely take something away from this episode. And if you haven't yet, I also highly recommend that you check out or revisit episode 43 of the show, in which I speak with Rebecca Reeder, a pastor-turned-AI analyst who offers yet another incredible career story! Thanks for listening and being part of these conversations!

Episode Highlights:
[2:02] - Namrata reveals that she has been in the data industry for almost 10 years.
[4:42] - Speak from the heart, and don't expect anything back.
[5:41] - Hear how, moving to the U.S. in 2015, Namrata grew independent and grateful for her journey.
[8:13] - Embracing her layoff, Namrata felt gratitude, trusting her instincts and cherishing SCOR's support.
[11:04] - Namrata shares how her colleagues' support affirmed her skills, inspiring her to seek new opportunities.
[12:24] - I point out how the layoff was a chance for Namrata to reinvent herself.
[13:13] - Thanks to support and extra preparation time, Namrata took on interviews under intense pressure.
[16:20] - Hear how Namrata overcame shame and networked tirelessly.
[17:45] - Refining her skills and her resume, Namrata trusted the universe while pushing through interview anxiety.
[20:03] - Truly believing that she was the right fit, Namrata practiced relentlessly until confidence replaced interview anxiety.
[22:40] - Namrata argues that performing well requires pursuing what truly excites and energizes you.
[25:25] - Starting at Amazon feels surreal for Namrata, with her focus now on learning and contributing effectively.
[26:38] - Life's serendipity often redirects plans.
[28:15] - Namrata credits her network with helping her land interviews, proving that referrals and support make a difference.

Links & Resources:
Email Gary: gary@garydanoff.com
Gary Danoff LinkedIn
LHH LinkedIn
Namrata Kulkarni LinkedIn
What's Next Now! - "From Pastor to AI Ethicist"
For many, technology offers hope for the future―that promise of shared human flourishing and liberation that always seems to elude our species. Artificial intelligence (AI) technologies spark this hope in a particular way. They promise a future in which human limits and frailties are finally overcome―not by us, but by our machines. Yet rather than open new futures, today's powerful AI technologies reproduce the past. Forged from oceans of our data into immensely powerful but flawed mirrors, they reflect the same errors, biases, and failures of wisdom that we strive to escape. Our new digital mirrors point backward. They show only where the data say that we have already been, never where we might venture together for the first time. To meet today's grave challenges to our species and our planet, we will need something new from AI, and from ourselves. In The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford UP, 2024), Shannon Vallor makes a wide-ranging, prophetic, and philosophical case for what AI could be: a way to reclaim our human potential for moral and intellectual growth, rather than lose ourselves in mirrors of the past. Rejecting prophecies of doom, she encourages us to pursue technology that helps us recover our sense of the possible, and with it the confidence and courage to repair a broken world. Professor Vallor calls us to rethink what AI is and can be, and what we want to be with it. Our guest is: Professor Shannon Vallor, who is the Baillie Gifford Professor in the Ethics of Data and AI at the University of Edinburgh, where she directs the Centre for Technomoral Futures in the Edinburgh Futures Institute. She is a standing member of Stanford's One Hundred Year Study of Artificial Intelligence (AI100) and member of the Oversight Board of the Ada Lovelace Institute. 
Professor Vallor joined the Futures Institute in 2020 following a career in the United States as a leader in the ethics of emerging technologies, including a post as a visiting AI Ethicist at Google from 2018 to 2020. She is the author of The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking and Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, and the editor of The Oxford Handbook of Philosophy of Technology. She serves as an advisor to government and industry bodies on responsible AI and data ethics, and is Principal Investigator and Co-Director of the UKRI research programme BRAID (Bridging Responsible AI Divides), funded by the Arts and Humanities Research Council. Our host is: Dr. Christina Gessler, who is the creator and producer of the Academic Life podcast. Listeners may enjoy this playlist: More Than A Glitch; Artificial Unintelligence: How Computers Misunderstand the World. Welcome to Academic Life, the podcast for your academic journey—and beyond! You can support the show by downloading and sharing episodes. Join us again to learn from more experts inside and outside the academy, and around the world. Missed any of the 250+ Academic Life episodes? Find them here. And thank you for listening! Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/science-technology-and-society
Join us for a special conversation and message from Marisa Zalabak, AI Ethicist, who joins us live from New York.
Olivia Gambelin values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters. A transcript of this episode is here. Olivia Gambelin is a renowned AI Ethicist and the Founder of Ethical Intelligence, the world's largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI. Additional Resources: Responsible AI: Implement an Ethical Approach in Your Organization – BookPlato & a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes - Book The Values Canvas – RAI Design Tool Women Shaping the Future of Responsible AI – Organization In Pursuit of Good Tech | Subscribe - Newsletter
Over the weekend, a lot of you would have been out wining and dining, and hopefully having a great time with your significant other, but in the age of AI, could romantic human relationships be a bit old hat? What is agentic AI, and might it replace our spouses and partners in the long run? Dr. Lollie Mancey is an Anthropologist & AI Ethicist who has her own digital companion, 'Billy'. She joins guest host Tom Dunne to discuss.
Segment 1: A look back at a busy week in New York, with the UNGA, the Summit of the Future, and Climate Week
Segment 2: Climate Week and the intersection of AI solutions
Segment 3: Recommendations for action

Biography: Marisa Zalabak, AI Ethicist and Psychologist; IEEE.org Co-chair of AI Ethics Education; Chair of Global Methodologies for Planet Positive 2030 (a climate-tech initiative); and contributor to global standards on AI and human wellbeing. Co-Founder of GADES (Global Alliance for Digital Education and Sustainability), Marisa is focused on multi-sector, multigenerational education and the implementation of responsible practices with advanced technologies. As a transdisciplinary collaboration and regenerative ecosystem specialist, she works with global organizations to help guide businesses, institutions, organizations, governments, and communities to reimagine human-AI partnerships for a flourishing future.
What if we saw Artificial Intelligence as a mirror rather than as a form of intelligence? That's the subject of a fabulous new book by Professor Shannon Vallor, who is my guest on this episode. In our discussion, we explore how artificial intelligence reflects not only our technological prowess but also our ethical choices, biases, and the collective values that shape our world. We also discuss how AI systems mirror our societal flaws, raising critical questions about accountability, transparency, and the role of ethics in AI development. Shannon helps me to examine the risks and opportunities presented by AI, particularly in the context of decision-making, privacy, and the potential for AI to influence societal norms and behaviours. This episode offers a thought-provoking exploration of the intersection between technology and ethics, urging us to consider how we can steer AI development in a direction that aligns with our shared values.

Guest Biography
Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy. She is Director of the Centre for Technomoral Futures in EFI, and co-Director of the BRAID (Bridging Responsible AI Divides) programme, funded by the Arts and Humanities Research Council. Professor Vallor's research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices. Her work includes advising policymakers and industry on the ethical design and use of AI. She is a standing member of the One Hundred Year Study of Artificial Intelligence (AI100) and a member of the Oversight Board of the Ada Lovelace Institute. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network and the 2022 Covey Award from the International Association of Computing and Philosophy.
She is a former Visiting Researcher and AI Ethicist at Google. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the books Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024).

AI-Generated Timestamped Summary of Key Points:
00:02:30: Introduction to Professor Shannon Vallor and her work.
00:06:15: Discussion on AI as a mirror of societal values.
00:10:45: The ethical implications of AI decision-making.
00:18:20: How AI reflects human biases and the importance of transparency.
00:25:50: The role of ethics in AI development and deployment.
00:33:10: Challenges of integrating AI into human-centred contexts.
00:41:30: The potential for AI to shape societal norms and behaviours.
00:50:15: Professor Vallor's insights on the future of AI and ethics.
00:58:00: Closing thoughts and reflections on AI's impact on humanity.

Links
To find out more about Shannon and her work, visit her website: https://www.shannonvallor.net/
The AI Mirror: https://global.oup.com/academic/product/the-ai-mirror-9780197759066?
A Noema essay by Shannon on the dangers of AI: https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/
A New Yorker feature on the book: https://www.newyorker.com/culture/open-questions/in-the-age-of-ai-what-makes-people-unique
The AI Mirror as one of the FT's technology books of the summer: https://www.ft.com/content/77914d8e-9959-4f97-98b0-aba5dffd581c
The FT review of The AI Mirror: https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011
For more on the Edinburgh Futures Institute: https://efi.ed.ac.uk/
In this episode, we talk about how AI is used in companies. Laura Haaber Ihle, Research Scientist and AI Ethicist at Northeastern University and Associate of the Department of Philosophy at Harvard University, and Hans Petter Dalen, Business Leader, EMEA at IBM watsonx and Embeddable AI, discuss how AI is used in companies, the associated ethical consequences, the need for solid governance, and the importance of having the right expertise in the field. We also look at how AI affects society, the opportunities the technology brings, and how we can identify and manage the risks of implementing AI. The new EU AI Act imposes extensive documentation requirements on companies; many Danish companies may be surprised by these requirements and need to get their processes in order as quickly as possible to comply with the legislation.

Topics:
· AI adoption in companies and the Danish public sector
· Ethical considerations
· Governance structures
· AI's impact on society
· Risks and opportunities of AI implementation
· The EU AI Act and its documentation requirements
· A new Danish language model

Participants:
· Hans Petter Dalen: Business Leader, EMEA, IBM watsonx and Embeddable AI
· Laura Haaber Ihle: Research Scientist and AI Ethicist at Northeastern University, and Associate of the Department of Philosophy at Harvard University
· Liselotte Foverskov (podcast host and moderator): former system administrator and now a technical communicator through her company Textrovert, where she produces podcasts, articles, and videos.
Join us for a fascinating talk with Laurence A. Pagnoni, the co-author of "You and Artificial Intelligence," as we explore the game-changing role of AI in the nonprofit world. Laurence brings his wealth of knowledge to the table, shedding light on how nonprofits can harness AI to enhance fundraising efforts, engage donors more personally, and streamline their operations. We explore actionable advice for embracing AI, navigating the ethical landscape, and leveraging technology to build stronger community bonds. This episode is perfect for nonprofit leaders and tech enthusiasts eager to learn about the synergy between AI and social impact.

About the guest
Laurence is a national expert on advanced fundraising strategies and holds advanced degrees in Public Administration from NYU, and in Theology and Contemplative Studies from various Jesuit universities. He is the author of "The Nonprofit Fundraising Solution," the first book on fundraising ever published by the American Management Association, as well as the co-author of "You and Artificial Intelligence." Laurence is also the chairman of LAPA Fundraising, serving nonprofits throughout the U.S. and Europe. He volunteers two days a week with the national nonprofit Illuman, committed to helping men become healthier and more authentic.

Resources
You and Artificial Intelligence: https://www.goodreads.com/book/show/205243100-you-and-artificial-intelligence
Illuman: https://illuman.org/
The Smart Nonprofit: https://www.goodreads.com/en/book/show/60575581
Charity Engine: https://www.charityengine.com/
Tech Impact: https://techimpact.org/
Navigating the Nonprofit Landscape with AI – George Weiner of Whole Whale: https://brooks.digital/health-nonprofit-digital-marketing/navigating-nonprofit-landscape-ai/
Partnership on AI: https://partnershiponai.org/
AI Ethicist: https://www.aiethicist.org/

Contact Laurence
https://www.linkedin.com/in/laurence-a-p-60b46b4/
AI Ethicist Nell Watson, author of Taming the Machine, describes the foundations we need to put in place to build legal, regulatory, and ethical systems that channel fiercely powerful new AI technology away from the bad and toward the good.
Don't miss this thought-provoking conversation as Rebecca Reeder navigates the intersections of faith, technology, and ethics, offering a glimpse into the evolving landscape of AI and its profound impact on our society. And hear about 'What's Next Now' for Rebecca as she reflects on authenticity, alignment, and the pursuit of a fulfilling career.

Links & Resources:
Schedule Listening Time with https://www.garydanoff.com/contact
Gary Danoff LinkedIn
Rebecca Reeder LinkedIn
Thomas Merton Author Page
Adam Grant's Hidden Potential Book

Episode Highlights:
[0:59] - My guest in this episode is Rebecca Reeder.
[1:19] - Rebecca transitioned from religious ministry to AI analytics with Alvarez and Marsal in 2023.
[3:00] - Rebecca shares how her college internships led her from diverse experiences to a career in vocational ministry.
[5:24] - Hear how Rebecca realized as a pastor that genuine connections dwindled, prompting her shift towards alignment and integrity.
[7:39] - Thomas Merton's teachings helped Rebecca realize that she lived by external expectations, prompting a quest for genuine connections and impact.
[9:54] - Rebecca's pivot to analytics stemmed from strategic thinking, network connections, and a natural affinity for data.
[12:08] - I praise Rebecca's authenticity, connecting it to her technical background and AI's potential.
[15:37] - Rebecca values Amplify's focus on enhancing customer experience through AI-driven efficiency.
[17:22] - I simplify AI as computers processing 0s and 1s to provide useful human insights.
[21:35] - Rebecca highlights Adam Grant's two-part brainstorming process for optimal teamwork and idea generation.
[24:03] - Hear how Rebecca tends to use polite prompts, preferring phrases like "can you" and "will you please."
[27:27] - Rebecca believes that her communication skills have improved through specificity and clarity in interactions.
[30:36] - Rebecca learned from an Adam Grant podcast about AI guardrails, citing Khan Academy's Khanmigo project.
[32:00] - Rebecca notes ChatGPT's role in her Python learning, emphasizing AI's guidance over mere solutions.
[34:24] - I discuss refining prompts for AI, emphasizing the importance of critical thinking alongside AI integration.
[37:01] - Learn how Rebecca optimized productivity by using ChatGPT to personalize evaluation methods.
[40:02] - Rebecca embraces Sal Khan's vision for Khan Academy: personalized teaching assistants for all, promoting individualized learning.
[42:02] - What's next now for Rebecca?

--- Send in a voice message: https://podcasters.spotify.com/pod/show/gary437/message
In this Insights Unlocked episode, UserTesting's Lawrence Williams talks with Dawn Procopio, founder and principal UX researcher at AI-Ethicist.com. They explore the critical role of UX researchers in ensuring that machine learning models are human-centered and ethically sound.
This episode is a rich exploration of the innovative ecosystem where Gen Z's ingenuity is celebrated, mentored, and developed into solutions for real-world problems. In this repurposed content from prior seasons of the show, we unpack how innovation is key to solving the issues we face, including our sustainable future. The Gen Z cohort is rapidly emerging as a force of innovation and leadership in STEM, along with some great mentors who have themselves been entrepreneurial in mindset! This transformation is the crux of our episode, featuring five of our guests who appeared from 2021-23 to discuss their niche areas. [Use the links below to get to each of their full episodes.] Chapter highlights from this episode: - Neha Shukla, AI Ethicist, Innovator, Speaker, on how we can build an innovation ecosystem - Rachna Nath, a TIME Innovative Teacher, on connecting learning to real-world issues - Stephanie Espy, author of STEM Gems, on empowering women to be STEM leaders - Sarah Syed, a Top 20 Under 20 youth, on sustainability and innovation - Diya Nath, an international best-selling author, on her innovation "Oxi Blast" For those young minds ready to forge their path, we impart wisdom: find your purpose, embrace your unique story, and trust the adventure that lies ahead. Join us as we foster a world where innovation and empowerment are not just ideals but realities for rising stars in every field. You can become the next problem solver! Buzzsprout - Let's get your podcast launched! Start for FREE Instacart - Groceries delivered in as little as 1 hour. Free delivery on your first order over $35. Enjoy PIOR Living products at a 20% discount and free shipping on orders over $75. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. Support the show Videos available on YouTube channel. Follow host Vai on socials - Instagram, YouTube, Facebook for thought leadership content.
Head to my website for enlightening blogs & service offerings. This podcast comes to you from Listen Ponder Change LLC, founded by Vai Kumar. Every "support the show" contribution is much appreciated!! Subscribe at https://www.buzzsprout.com/1436179/support and help us amplify our voice and reach!
A podcast about work, the future and how they will go together
Artificial Intelligence is here to stay, and as well as changing and eliminating jobs, it is creating new roles. On this episode, Linda Nazareth is joined by Cliff Jurkiewicz, Vice President of Global Strategy at Phenom, to talk about why it is time for companies to hire for the role of 'AI Ethicist' and what challenges those in the role will face in our rapidly changing world. Guest: Cliff Jurkiewicz, VP of Global Strategy, Phenom. Cliff Jurkiewicz is the Vice President of Global Strategy at Phenom, a global HR tech company based in the greater Philadelphia area. Cliff supports Phenom's purpose of helping a billion people find the right job by educating leaders at global organizations and their HR and HRIS teams on disruptive technologies, including AI and automation, so they can make meaningful connections with individuals throughout the talent journey. With a strong background in both design and technology, Cliff has held numerous roles in creative design and software development. Cliff is an active pilot who runs the only flight service in the country dedicated to helping those suffering from mental illness and addiction issues - Kyle's Wish Foundation. The organization is named after Cliff's son, who died at...
Artificial Intelligence has opened up incredible possibilities in our daily lives. It has significantly impacted how we learn, conduct business, and navigate our routines, profoundly altering our world in ways we once thought impossible. But as they say, "too much of a good thing is bad," and this sentiment holds even in the realm of AI. While AI has the potential to be a helpful ally, relying on it too much could bring unexpected challenges and drawbacks. AI works best when used wisely and responsibly. By incorporating ethics into its development and use, we can enjoy its capabilities for a better and more sustainable future. Join Matt DiFrancesco and Laura Miller, an Award-winning AI Ethicist, Digital Humanitarian, and founder and CEO of NextGen Ethics as they explore the future of AI and discuss the ethical aspects crucial to this transformative technology and how it impacts the automotive industry. They talk about: (04:47) The benefits of having human experts to oversee AI (06:11) How Laura got involved with ethical AI (09:33) The three ideas behind the ethics of Artificial Intelligence (10:11) The challenges of dealing with AI in the collision repair industry (16:14) A human ability that AI will never mimic (17:15) Why AI cannot replace humans as a superpower (26:06) The value of educating people about understanding AI (27:37) What the automotive industry needs to be aware of about the dangers of AI Connect with Laura Miller Website: https://www.nextgenethics.com/ LinkedIn: https://www.linkedin.com/in/lmiller-ethicist/ Connect With Matt DiFrancesco: matt@highliftfin.com (814)201-5855 LinkedIn: Matt DiFrancesco LinkedIn: High Lift Financial Facebook: High Lift Financial Instagram: @high_lift_financial Youtube: @highliftfinancial About Our Guest: As the CEO and founder of NextGen Ethics, Laura Miller strives to advance the development of ethical AI. She teaches philosophy at Webster University and serves as the Director of Ethics at Shadowing AI. 
Miller has been recognized by NASA for her contributions to tech ethics, and she is a member of other tech advisory boards, such as the Open Voice Network, which is funded by the Linux Foundation. Her humanitarian endeavors have also earned her a Knights of Columbus award, and The New York Times and Lens Magazine have published articles about her ethnographic studies. She received her BA and MA in Philosophy from the University of Missouri-St. Louis, where she focused on applied ethics. She speaks on ethical concerns related to AI both domestically and globally.
In this episode, I am so excited to share this conversation with Laura Miller, who is an AI Ethicist. Laura is an innovative and logical-minded AI and Ethics Specialist with an extensive background in creating ethical AI policies, developing strategies and processes to overcome challenges, and providing practical guidance in fast-paced, high-growth companies from start-ups to Fortune 500. She is a strategic leader with a solid business background and high cultural, ethical, and emotional intelligence, who excels at providing insight and guidance, collaborating across multiple verticals, and creating an inclusive workplace for all employees. She also serves as an advisor, council member, and task force leader for various organizations and initiatives in the AI and ethics space, and as a presenter, panelist, and keynote speaker on ethics and technology topics. Advisory Roles: • Trustmark Advisory Board - Open Voice Network • Tech-Ethics Advisor - Trauma Informed Network Advisory Board • Ethics Council - Meta-Brain Labs • Ethical Use Task Force - Open Voice Network • Inclusion Plan Panelist - NASA Presenter, Speaker, Panelist, Keynote, and Conference Chair | Author | Founder - NextGen Ethics | Policy Manager and Strategist | Transforming AI, Tech, and Organizations for a Better World | Digital Humanitarian lmillerethicist@gmail.com | lmiller-ethicist.com LISTEN NOW: Apple Podcast - Find this episode and all the previous episodes on Apple Podcast Spotify - Find this episode and previous episodes of the show on Spotify! YouTube - https://www.youtube.com/watch?v=M1AMM-j9neg
On this episode of There Has to Be a Better Way?, co-hosts Zach Coseglia and Hui Chen talk to Dr. Rumman Chowdhury, a pioneer in the field of responsible AI. Currently a Responsible AI Fellow at Harvard, with prior leadership roles at Twitter and Accenture, Rumman has first-hand insight into the real harms of AI, including algorithmic bias. She discusses how data scientists seek to understand these problems, and the importance of trustworthiness in the future of AI development. Having recently testified before Congress about AI governance, she shares her thoughts about building a governance ecosystem where human ingenuity can flourish.
Today we'll discuss "The Road to AI Success." We have an incredible guest who is an AI/ML expert, AI ethicist, and an innovator in the field. He is a true authority when it comes to AI transformers like ChatGPT. This episode is going to be a goldmine of knowledge, specifically tailored for all you entrepreneurs, product managers, and techies out there. Our guest is here to share invaluable insights into launching AI products, the challenges that come with developing them, and, of course, the future of AI. Stay tuned for a thought-provoking discussion. My guest today is Denis Rothman, AI Ethicist & Innovator. We will discuss: • Can you outline the key challenges that developers commonly face when working on AI product development? • What are the most important ethical considerations when creating and deploying AI technologies? • What are the most crucial tips you have for entrepreneurs looking to successfully launch AI products in today's market? • What are some potential pitfalls or risks that businesses should be aware of when implementing AI solutions? More about Denis at: https://www.youtube.com/c/DenisRothman Thanks for watching Invincible Innovation LIVE, A Show About The Future Of People With Tech. I'm Adi Mazor Kario, #1 Product Innovation & Value Creation Expert, Invincible Innovation. I'd love to hear your feedback and thoughts in the comments below! If you want to know more about me and my work: https://www.invincibleinnovation.com/ Invincible Innovation podcast: https://spoti.fi/3wzdBT1 Invincible Innovation on Facebook: https://bit.ly/3xtwPt9 Innovating Through Chaos Book: https://amzn.to/3gAVLbu Adi's LinkedIn: https://bit.ly/3vuAplA Hope you'll enjoy the talk! #AI #AIproducts #EthicalAI #startup #innovation #disruptiveinnovation #entrepreneur #business #leadership #innovationecosystem #management #invincibleinnovation #openinnovation #cocreation #opportunities #valuecreation #success
https://www.rt.com/shows/worlds-apart-oksana-boyko/578728-ai-revolution-humans-rules-boundaries/ Historically, any new technology has been met with a mix of fear and excitement, but the advent of Artificial Intelligence has created a new possibility of humans becoming redundant, if not obsolete. With the AI revolution already underway, can humans still set the rules and the boundaries for their latest creation? To discuss this, Oksana is joined by Jibu Elias, an AI ethicist and expert on AI in India.
This week, we welcome the innovative Ruth Ikwu, an AI Ethicist and MLOps Engineer with a solid foundation in Computer Science. As a Senior Researcher at Fujitsu Research of Europe, Ruth delves into AI Security, Ethics and Trust, playing a role in crafting innovative and reliable AI solutions for cyberspace safety. In this episode, she educates us on the evolving landscape of online sex work, discussing how platforms like AdultWork, OnlyFans and PornHub inadvertently facilitate sex trafficking. This is a heavy topic and contains a lot of distressing information about sex trafficking; Ruth's work is extremely important in bringing forward accountability. To learn more about Ruth's work on identifying human trafficking indicators in the UK online sex market: https://link.springer.com/article/10.1007/s12117-021-09431-0 Connect with Ruth: https://www.linkedin.com/in/ruth-eneyi-i-83a699118/
In our conversation, we learn about her professional journey and how it led to her working at DataRobot, what she realized was missing from the DataRobot platform, and what she did to fill the gap. We discuss the importance of bias in AI models, approaches to mitigate models against bias, and why incorporating ethics into AI development is essential. We also delve into the different perspectives of ethical AI, the elements of trust, what ethical "guard rails" are, and the governance side of AI. Key Points From This Episode: Dr. Mahmoudian shares her professional background and her interest in AI. How Dr. Mahmoudian became interested in AI ethics and building trustworthy AI. What she hopes to achieve with her work and research. Hear practical examples of how to build ethical and trustworthy AI. We unpack the ethical and trustworthy aspects of AI development. What the elements of trust are and how to implement them into a system. An overview of the different essential processes that must be included in a model. How to mitigate systems against bias and the role of monitoring. Why continual improvement is key to ethical AI development. Find out more about DataRobot and Dr. Mahmoudian's multiple roles at the company. She explains her approach to working with customers. Discover simple steps to begin practicing responsible AI development. Tweetables: "When we talk about 'guard rails' sometimes you can think of the best practice type of 'guard rails' in data science but we should also expand it to the governance and ethics side of it." — @HaniyehMah [0:11:03] "Ethics should be included as part of [trust] to truly be able to think about trusting a system." — @HaniyehMah [0:13:15] "[I think of] ethics as a sub-category but in a broader term of trust within a system." — @HaniyehMah [0:14:32] "So depending on the [user] persona, we would need to think about what kind of [system] features we would have." — @HaniyehMah [0:17:25] Links Mentioned in Today's Episode: Haniyeh Mahmoudian on LinkedIn Haniyeh Mahmoudian on Twitter DataRobot National AI Advisory Committee How AI Happens Sama
This week we are joined by Marc van Meel who is an AI Ethicist and public speaker with a background in Data Science. He currently works as a Managing Consultant at KPMG, where he helps organizations navigate the ethical implications of Artificial Intelligence and Data Science. In this episode we get into the future of technology in our society, AI auditing, the upcoming AI regulation and of course ChatGPT! To contact Marc: https://www.linkedin.com/in/marc-van-meel/
In the race to create and release Artificial Intelligence (AI) tools, are Silicon Valley companies such as OpenAI failing to fully consider the consequences of their work? The speed of development in the field is dizzying, with new tools such as ChatGPT and DALL·E offering a sneak peek at the potential of AI to work for us. Ethiopian-born US computer scientist Dr Timnit Gebru is a leading researcher on the ethics of artificial intelligence.
This week, Deepa Singh, AI ethicist, and Pooja Sreenivasan, digital artist/illustrator, join us to discuss why AI feels so dystopian, whether AI makes art democratic, and what it really means to protect human creativity. Respectfully Disagree is The Swaddle Team's very own podcast series, in which we get together to discuss and dissect the issues we passionately differ on.
Marisa is the founder of Open Channel Culture, a TEDx & Keynote Speaker, Author, Educational Psychologist, Social-Emotional-Creative Intelligence Specialist, Equity Advisor, and AI Ethicist. She partners with leaders and teams, supporting organizations and businesses by providing essential services to improve and sustain positive organizational culture, reconnecting purpose and values to action. --- Support this podcast: https://anchor.fm/tbcy/support
For episode 58, I am in conversation with Olivia Gambelin, who joins us from Brussels. Olivia has the exciting job title of AI Ethicist and is the founder & CEO of Ethical Intelligence. As you will hear in this episode, she has a background across organisations including CAKE Corporation, Save the Children, and Springer, and is an Advisory Board member on several influential AI & Ethics groups. We explore her journey from an MSc in Philosophy to such critical roles in today's use of data & AI. Along the way, we consider the benefits of a broad interest in technology, wide-ranging conversations with your network, and how we need much more than GDPR compliance. An emerging theme from our conversation (one that has stayed with me) is the benefit of being a polymath, or at least the benefit of gathering people together to share knowledge across disciplines. Olivia paints a compelling picture of Data & AI Ethics as a field where you can think & work broadly: a great opportunity for those with many interests or wanting to connect more with their values as well as apply their technical knowledge. I hope you find Olivia's openness and many stories as fascinating as I did.
In today's episode of the engatica interview series, we speak to Olivia Gambelin, Founder of Ethical Intelligence and an AI Ethicist! She talks about why the ethics of AI are complicated, pinpoints some ethical issues in artificial intelligence, and highlights some examples of ethical dilemmas. The engatica interview series is a powerhouse of insights from industry experts and influencers from around the world. A platform that provides the latest news on AI, Automation, and technologies that will help you grow your business. Website: www.engatica.com Olivia Gambelin's profile on engatica: https://engatica.com/innovators/olivia_gambelin/610/profile Follow us on- LinkedIn: https://www.linkedin.com/company/join-engatica/ Twitter: https://twitter.com/joinEngatica Facebook: https://www.facebook.com/joinEngatica Instagram: https://www.instagram.com/joinengatica/ Come, be a part of our community - learn, share and grow with us. About Engati: Engati believes that the way you deliver customer experiences can make or break your brand. Our mission is to help you deliver unforgettable experiences to build deep, lasting connections with our Chatbot and Live Chat platform. It is a one-stop platform for powerful customer engagements. With our intelligent bots, we help you create the smoothest of customer experiences, with minimal coding. And now, we're even helping you answer your customers' most complicated questions in real-time with Engati Live Chat. Website: https://www.engati.com/ Talk to us: contact@engati.com #ai #artificialintelligence #business #machinelearning #mlops #digital
Debbie Reynolds "The Data Diva" talks to Enrico Panai, Ph.D., Data and AI Ethicist (Éthicien du numérique, or digital ethicist), from France. We discuss his work on AI and technology, his work on AI ethics with ForHumanity, AI uses of nudge technologies, the need for ethics in AI, audits of AI systems, how ethics and regulation relate to AI, unexpected results of AI, Real-Time Bidding and AI transparency in protecting the privacy of individuals, how people can be influenced by AI systems, the UK Age Appropriate Design Code, the danger and challenge of inferences made about you not being subject to regulation, and his hope for Data Privacy in the future. Support the show
Agriculture is complicated. The landscape is changing, there's climate change, and there are a million other factors that affect the farmer's yield. But it's what our entire population's hunger and health depend on. Recent initiatives in AI are finding ways to get the crop to talk to the farmer about what it needs. Credits: Narration: Harsha Bhogle Executive Producer: Gaurav Vaz Producer: Archana Nathan Research, Interviews and Scripts: Prthvir Solanki Narrative overview: Charu Sharma & Shriram Parthasarathy Title track, sound design and background score: Nikhil Rao, Abhijit Nath & Avyay Gujral All clips and voices used in this podcast are owned by the original creators We thank wholeheartedly all our guests who appeared on this episode Ananda Verma, Co-founder at Fasal Ranveer Chandra, Managing Director, Research for Industry and CTO, Agri-Food at Microsoft Research Vikram Kumar, Associate Director at Wadhwani AI Jibu Elias, AI Ethicist, Senior Researcher and Lead at INDIAai Ram Dhulipala, Senior Scientist at the International Livestock Research Institute and Former Scientist at International Crops Research Institute for the Semi-Arid Tropics Links to clips used in the episode and citations: Fasal Website and information about their IoT device FarmBeats News Reports on Locust Attacks in India Britannica Entry on the Pink Bollworm Punjab Records 34% Cotton Crop Loss After Pink Bollworm Attack To learn how Microsoft is working to empower every developer to innovate, every organization to transform industries and every individual to transform society through its differentiated Microsoft AI & Innovation vision, please visit https://microsoft.com/ai
#indiaai #artificialintelligence #govtofindia INDIA GOVT - EMPOWERING & ENABLING THE ARTIFICIAL INTELLIGENCE ECOSYSTEM Jibu Elias is an AI Ethicist, Researcher, and leading expert on India's AI ecosystem, and is the Research & Content Head of INDIAai - The National AI Portal of the Government of India. He is a member of the OECD Network of Experts on AI (ONE AI) and one of the founding Editorial Board Members of Springer's AI and Ethics Journal, the first multidisciplinary academic journal on AI ethics. With years of experience covering emerging technologies with PC Mag and the Times of India Group, Jibu's work currently focuses on building a unified AI ecosystem in India. He is an alumnus of The London School of Economics, where he studied International Relations, specializing in Sino-India relations. https://in.linkedin.com/in/jibuelias/de https://indiaai.gov.in/author/Jibu%20Elias https://twitter.com/jibuelias Watch our highest viewed videos: 1 - India's 1st Quantum Computer - https://youtu.be/ldKFbHb8nvQ - DR R VIJAYARAGHAVAN, PROF & PRINCIPAL INVESTIGATOR AT TIFR 2 - Breakthrough in Age Reversal - https://youtu.be/214jry8z3d4 - DR HAROLD KATCHER, CTO NUGENICS RESEARCH 3 - Head of Artificial Intelligence, JIO - https://youtu.be/q2yR14rkmZQ - Shailesh Kumar 4 - STARTUP FROM INDIA AIMING FOR LEVEL 5 AUTONOMY - SANJEEV SHARMA, CEO SWAAYATT ROBOTS - https://youtu.be/Wg7SqmIsSew 5 - TRANSHUMANISM & THE FUTURE OF MANKIND - NATASHA VITA-MORE, HUMANITY PLUS - https://youtu.be/OUIJawwR4PY 6 - MAN BEHIND GOOGLE QUANTUM SUPREMACY - JOHN MARTINIS - https://youtu.be/Y6ZaeNlVRsE 7 - 1000 KM RANGE ELECTRIC VEHICLES WITH ALUMINUM AIR FUEL BATTERIES - AKSHAY SINGHAL - https://youtu.be/cUp68Zt6yTI 8 - Garima Bharadwaj, Chief Strategist IoT & AI at Enlite Research - https://youtu.be/efu3zIhRxEY 9 - BANKING 4.0 - BRETT KING, FUTURIST, BESTSELLING AUTHOR & FOUNDER, MOVEN - https://youtu.be/2bxHAai0UG0 10 - E-VTOL & HYPERLOOP - FUTURE OF INDIA'S MOBILITY - SATYANARAYANA CHAKRAVARTHY - https://youtu.be/ZiK0EAelFYY 11 - NON-INVASIVE BRAIN COMPUTER INTERFACE - KRISHNAN THYAGARAJAN - https://youtu.be/fFsGkyW3xc4 12 - SATELLITES: THE NEW MULTI-BILLION DOLLAR SPACE RACE - MAHESH MURTHY - https://youtu.be/UarOYOLUMGk Connect & Follow us at: https://in.linkedin.com/in/eddieavil https://in.linkedin.com/company/change-transform-india https://www.facebook.com/changetransformindia/ https://twitter.com/intothechange https://www.instagram.com/changetransformindia/ Listen to the Audio Podcast at: https://anchor.fm/transform-impossible https://podcasts.apple.com/us/podcast/change-i-m-possible/id1497201007?uo=4 https://open.spotify.com/show/56IZXdzH7M0OZUIZDb5mUZ https://www.breaker.audio/change-i-m-possible https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xMjg4YzRmMC9wb2RjYXN0L3Jzcw Kindly subscribe to the CHANGE - I M POSSIBLE YouTube channel: www.youtube.com/ctipodcast
https://www.linkedin.com/in/marisa-zalabak-4368482b/ (Marisa Zalabak) Marisa is a transformative leader who partners with socially conscious leaders and teams dedicated to positively contributing to the well-being of people and the planet as we navigate the possibilities and challenges of the emerging future. She is an Adaptive Leadership Coach, Organizational Culture Consultant, TEDx & Keynote Speaker, Educational Psychologist, Social-Emotional-Creative Intelligence Specialist, Equity Advisor, and AI Ethicist. She is the founder of Open Channel Culture and Co-Chair of the AI Ethics Education Committee. Highlights from our conversation with Marisa: Adaptive leadership and its importance Technology advancements and their ethical issues AI for social good The importance of ethics and philosophy in powering and expanding the consciousness and safety of technology Awareness and attention: being conscious of the design you are making Adapting and flourishing with the right human skills Improving psychological safety, collaboration, joy, innovation, productivity, and well-being while meeting the needs of the emerging future Visit Marisa's https://www.linkedin.com/in/marisa-zalabak-4368482b/ (LinkedIn) to learn more about her.
https://www.linkedin.com/in/marisa-zalabak-4368482b/ (Marisa Zalabak's LinkedIn) https://openchannelculture.com (Marisa's Website) https://instagram.com/MarisaZalabak (@MarisaZalabak on Instagram) https://www.ted.com/talks/marisa_zalabak_educational_fire_drills_for_flourishing (Marisa's TEDx Talk) https://www.youtube.com/watch?v=IvlPn3aMCQk (Marisa on YouTube) Marisa Zalabak: * Founder, Open Channel Culture * Educational Psychologist * Adaptive Leadership Coach * TEDx & Keynote Speaker Marisa is a: * Certified member of MIT's u.lab in Leading for the Emerging Future * Co-Chair of the IEEE.org AI Ethics Education Committee * Contributing author, recommended standards for the ethical design of Artificial Intelligent Systems * Co-author of effective approaches for transdisciplinary collaboration * Advisory committee, Million Peacemakers, a UN-approved non-profit training global peacemakers * Leadership team, Women4Solutions: a global network advancing the UN Sustainable Development Goals (SDGs). Her work, Open Channel Culture, provides training in skills that support Adaptive Leadership, increasing human flourishing by transforming organizational cultures to foster potential, engagement, motivation, creativity and joy. Copyright (c) 2020-2022 Kirstin Gooldy
In this episode, as the City of San José considers expanding their Smart City and Innovation and Technology Advisory Board to include San José residents, we go on a little sci-fi adventure with Masheika Allgood, an AI Ethicist and the Founder of AllAI Consulting, LLC, a platform for providing AI education across various backgrounds. AllAI Consulting, LLC (pronounced "ally") helps non-techies understand Artificial Intelligence, allowing business leaders and lawyers to actively participate in decision-making around the use of AI systems in their companies, and to adequately advise their clients and shareholders. Additionally, Masheika is a member of the Black Leadership Kitchen Cabinet, addressing the broader social ills that are currently impacting the African American community. Resources: San José's Smart City Advisory Board San José's Innovation & Technology Advisory Board San José's Digital Privacy Policy San José's Digital Privacy Public Comment Form FREE AI Courses by AllAI Consulting, LLC. Black Leadership Kitchen Cabinet April 11th Council Study Session Info Final Charter Review Commission Report --- Send in a voice message: https://anchor.fm/onlyinsj/message
With its promises of making work both easier and more efficient, adoption and implementation of AI continues to expand across every industry. Likewise, AI is now increasingly relied on by hiring and talent acquisition professionals to help solve continuing talent shortages. However, reports indicate AI is producing biases in hiring and other problematic outcomes at work. In this episode of All Things Work, host Tony Lee is joined by Merve Hickok, founder of AIethicist.org, a website focused on the ethically responsible development and governance of AI, to discuss how organizations can ensure they use AI both responsibly and ethically in their business operations. Follow All Things Work wherever you listen to podcasts; rate and review on Apple Podcasts. This episode of All Things Work is sponsored by ADP. Music courtesy of Bensound.
Debbie Reynolds "The Data Diva" talks to Masheika Allgood, Founder and CEO of AllAI Consulting, LLC and AI Ethicist. We discuss her talent for highlighting the impact of AI and algorithms on humans, the mythical idea of AI versus the reality of AI, assumptions and inferences that can be problematic with AI, bias in AI, the tension between technology and law, Data Privacy in the US, and her wish for data privacy in the future.
It's a bit of a reunion on this episode as AI Ethicist and cohost of the Let's Chat Ethics podcast, Oriana Medlicott, who taught a workshop at Tech 2025 two years ago, reconnects with Charlie to discuss all things AI ethics. After much laughter and reminiscing, Charlie and Oriana delve into the thorny, controversial issues surrounding AI ethics today and in the future (especially with the next generations). ABOUT ORIANA Oriana Medlicott is a writer, researcher and consultant in AI Ethics. Passionate about the future of technology, philosophy and art, she believes ethics should be a critical focal point in the design, development and deployment of new technologies. Oriana is part of the research team of the Z-Inspection project started by Roberto V. Zicari at Goethe University and sits on the advisory board for the AI Ethics Journal at the University of California. Oriana graduated from Nottingham Trent University with a Masters in Philosophy; her thesis focused on the effects of Biotechnology and Artificial Intelligence on Human Nature. She is currently enrolled in the CodeOp Data Analytics bootcamp to develop her technical skills. CONNECT WITH ORIANA: LinkedIn: https://www.linkedin.com/in/oriana-medlicott/ Twitter: https://twitter.com/orianajanem LINKS TO RESOURCES MENTIONED: Episode of Let's Chat Ethics Podcast featuring Charlie: https://spoti.fi/3x8TM4Q All Tech is Human: https://alltechishuman.org Event: Jobs of the Future – If Software Is Eating the World, Who's Going to Feed the Beast? (June 24): https://bit.ly/nocodejobs REACH OUT TO THE SHOW: Website: https://tech2025.com/fast-forward-podcast/ Twitter: @fastforward2025 Instagram: @fastforward2025 Facebook: http://bit.ly/fastforwardfacebook Email: fastforward@tech2025.com Charlie on Twitter: @itscomplicated Charlie on Instagram: @charlieoliverbk Charlie on LinkedIn: linkedin.com/in/charlieoliverny
In this episode, we speak with Tim O'Brien, who leads Ethical AI Advocacy at Microsoft. Before joining Microsoft in 2003, Tim worked as an engineer, a marketer, and a consultant at startups and Fortune 500 companies. In this discussion, Tim leads us through Microsoft's journey – and his own – to become a leader in the field of AI ethics, and answers the questions: what does an AI Ethicist do? And is there a role for 'white guys' to play in this field? ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
Olivia Gambelin is an AI Ethicist and the founder of Ethical Intelligence, where she works to bring ethical analysis into tech development to create human-centric innovation. As Chief Executive Officer of Ethical Intelligence, she leads a remote team of over thirty experts in the #TechEthics field. Olivia is the new guest in this Dinis Guarda citiesabc openbusinesscouncil YouTube series, hosted by Dinis Guarda.

Olivia Gambelin Interview Questions
1. An introduction from you: background, overview, education.
2. Your education background, across academia and industry?
3. Your MSc in Philosophy from the University of Edinburgh, with a concentration in AI ethics and a special focus on probability and moral responsibility in autonomous cars.
4. Can you tell us about your company, Ethical Intelligence?
5. When we look at the evolution of AI technology and its multiple challenges, how do you approach it from a philosophical perspective?
6. How do you look at the grey areas of ethics around technology and AI?

Olivia Gambelin Biography
Besides her role as the founder of Ethical Intelligence and an #AIEthicist, she is also the co-founder of the Beneficial AI Society, sits on the Advisory Board of Tech Scotland Advocates, and serves on the Founding Editorial Board of Springer Nature's AI and Ethics Journal. Olivia holds an MSc in Philosophy from the University of Edinburgh, with a concentration in AI ethics and a special focus on probability and moral responsibility in autonomous cars, as well as a BA in Philosophy and Entrepreneurship from Baylor University.

About Dinis Guarda: profile and channels
https://www.openbusinesscouncil.org
https://www.dinisguarda.com/
https://www.intelligenthq.com
https://www.hedgethink.com/
https://www.citiesabc.com/
https://twitter.com/citiesabc__
Dinis Guarda's 4IR: AI, Blockchain, Fintech, IoT - Reinventing a Nation: https://www.4irbook.com/
Intelligenthq Academy for blockchain and AI courses: https://academy.intelligenthq.com/
This is an episode you don't want to miss -- especially if you've ever wondered about the potential of robots in future society. We're not just talking to Dr. Billy Barry, AI Ethicist and Global Chief Innovation Officer for the Global Goodwill Ambassadors Foundation, we're also talking to Maria Bot (yes, she's a robot!) She's got personality, she's got knowledge, she interrupts just like a young child might, and she's even got jokes! Tune in to this talk with a man and his robot. The duo spread knowledge in the form of a teaching team to students across the country and they're here to show us how it's done, and what the implications of robots like Maria Bot might be on society.
The impact of AI on all aspects of our lives is far-reaching. AI has incredible potential, and when designed and developed with ethical principles underpinning it, it can have a transformational effect on communities and businesses in many ways. But AI is not without bias: it can carry the subjective assumptions and judgements of those developing it, or be built on data that reinforces some of the structural inequalities that exist in our businesses and our societies. This week, The New P&L speaks to eminent criminologist, criminal psychologist, AI ethicist, and data activist Renée Cummings. Renée specialises in diverse, equitable, and inclusive AI design, development, and deployment; principled, responsible, and trustworthy AI strategy; ethical AI policy development and governance; and AI risk management and crisis communications. We discuss with Renée the challenges around bias in AI and how they can be overcome, regulation of the sector, what a more ethical future for AI looks like, and what type of business leaders we need to get us there. --- Send in a voice message: https://anchor.fm/principlesandleadership/message