In AI We Trust?


In AI We Trust? is a podcast with Miriam Vogel of EqualAI and Mark Caine of the World Economic Forum that surveys the global landscape for inspiration and lessons in developing responsible, trustworthy artificial intelligence. Each episode aims to answer a 'big question' in ethical AI with prominent lawmakers, leading thinkers, and internationally renowned authors.



    • Latest episode: Apr 23, 2025
    • New episodes: every other week
    • Average duration: 40m
    • Episodes: 108



    Latest episodes from In AI We Trust?

    AI Literacy Series Ep. 7: Dr. Andrew Ng on Scaling AI for Social Good

    Apr 23, 2025 | 53:10

    In this episode of In AI We Trust?, Dr. Andrew Ng joins co-hosts Miriam Vogel and Rosalind Wiseman to discuss AI literacy and the need for widespread AI understanding. Dr. Ng issues a call to action that everyone should learn to code, especially as AI-assisted coding becomes more accessible. The episode addresses AI fears and misconceptions, highlighting how learning about AI can increase productivity and open career opportunities. The conversation also explores AI's potential for large-scale social good, such as climate modeling, the challenge of conveying that potential amid widespread public fears, the urgent need for AI education and upskilling, and the complexities of integrating AI into education. The episode underscores Andrew, Miriam, and Rosalind's belief in the transformative potential of AI when individuals are empowered to manage, adapt, and build with it, fostering innovation across sectors and in users' daily lives.

    AI Literacy Series Ep. 6: Bridging the Gap Between Technology and Communities with Susan Gonzalez

    Apr 8, 2025 | 51:19

    This episode of In AI We Trust? features co-hosts Miriam Vogel and Rosalind Wiseman continuing their AI literacy series with Susan Gonzalez, CEO of AI&You. The discussion centers on the critical need for basic AI literacy within marginalized communities to create opportunities and prevent an "AI divide." Susan emphasizes overcoming fear, building foundational AI knowledge, and understanding AI's impact on jobs and small businesses. She stresses the urgency of AI education and AI&You's role in providing accessible resources. The episode highlights the importance of dialogue and strategic partnerships to advance AI literacy, ensuring that everyone can benefit from AI's opportunities before the "window" closes.

    AI Literacy Series Ep. 5 with Judy Spitz: Fixing the Tech Talent Pipeline

    Mar 25, 2025 | 75:59

    In this episode of In AI We Trust?, co-hosts Miriam Vogel and Rosalind Wiseman speak with Dr. Judith Spitz, Founder and Executive Director of Break Through Tech, who sheds light on the blind spots within the industry and discusses how Break Through Tech is pioneering innovative programs to open doors for talented individuals. Her work brings more young people from a broad array of backgrounds into technology disciplines, ensures they learn leadership and other skills critical to their success, and gets them into industry, building a more robust and prepared tech ecosystem.

    AI Literacy Series Ep. 4: Mason Grimshaw on AI Literacy and Data Sovereignty for Indigenous Communities

    Mar 11, 2025 | 56:19

    Mason Grimshaw: the Power of Identity, Community, and Representation. Co-hosts of EqualAI's AI Literacy Series, Miriam Vogel and Rosalind Wiseman, are joined by Mason Grimshaw, data scientist at Ode Partners and VP at IndigiGenius. Grimshaw discusses his roots growing up on a reservation and what led him to the field of AI. He explains why it is his mission to bring AI education and tools back to his community, and articulates how AI literacy is essential for Indigenous communities to ensure they retain data sovereignty and benefit from these game-changing tools.

    Literacy Series Description: The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with valuable insights and discussions around AI's impact on society, leading efforts in AI literacy, and how listeners can benefit from these experts and tools.

    AI Literacy Series Ep. 3: danah boyd on Thinking Critically about the Systems That Shape Us

    Feb 27, 2025 | 75:24

    Co-hosts of EqualAI's AI Literacy Series, Miriam Vogel and Rosalind Wiseman, sit down with danah boyd, Partner Researcher at Microsoft Research, visiting distinguished professor at Georgetown, and founder of the Data & Society Research Institute, to explore how AI is reshaping education, social structures, and power dynamics. boyd challenges common assumptions about AI, urging us to move beyond simplistic narratives of good vs. bad and instead ask: Who is designing these systems? What are their limitations? And what kind of future are we building with them?

    Literacy Series Description: The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with valuable insights and discussions around AI's impact on society, leading efforts in AI literacy, and how listeners can benefit from these experts and tools.

    AI Literacy Series Ep. 2 with Dewey Murdick (CSET): Centering People in AI's Progress

    Feb 11, 2025 | 40:05

    In this episode of EqualAI's AI Literacy Series, co-hosts Miriam Vogel and Rosalind Wiseman sit down with AI policy expert Dewey Murdick, Executive Director at Georgetown's Center for Security and Emerging Technology (CSET), who shares his hopes for AI's role in personal development and other key areas of society. From national security to education, Murdick unpacks the policies and international collaboration needed to ensure AI serves humanity first.

    Literacy Series Description: The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with valuable insights and discussions around AI's impact on society, leading efforts in AI literacy, and how listeners can benefit from these experts and tools.

    AI Literacy Series Ep. 1: What is AI and Why Are We Afraid of It?

    Jan 29, 2025 | 25:45

    Miriam Vogel and Rosalind Wiseman break down the basics, the limitations, the power, and the fear surrounding AI – and how you can transform it from a concept to a tool in the first episode of the In AI We Trust? AI Literacy series.

    The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with valuable insights and discussions around AI's impact on society, who is leading in this area of AI literacy, and how listeners can benefit from these experts.

    Senator Mike Rounds (R-SD): Sen. Rounds' 2025 Message: Why Every Senate Committee is Talking AI This Congress

    Jan 17, 2025 | 30:22


    In this episode of #InAIWeTrust, Senator Mike Rounds (R-SD) discusses the transformative role of AI and the Senate's efforts to support its innovation and development. From working to advance AI-driven health care solutions to ensuring U.S. leadership in innovation, he shares legislative priorities and insights from the Senate AI Insight Forums and underscores the importance of AI literacy and collaboration across industry, reminding us: “AI is real, it's here, it's not going away.”

    Vilas Dhar (McGovern Foundation): AI for the people and by the people: Year-in-Review and 2025 Predictions

    Dec 20, 2024 | 36:34


    In this 2024 year-end episode of In AI We Trust?, Vilas Dhar of the Patrick J. McGovern Foundation and Miriam Vogel of EqualAI review 2024 and discuss predictions for the year ahead. 

    Elizabeth Kelly (AISI): How will the US AI Safety Institute lead the US and globe in AI safety?

    Oct 31, 2024 | 26:26

    In this episode of #InAIWeTrust, Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute (AISI), explains the significance of last week's National Security Memorandum (NSM) on AI, shares her experience working on the Biden Executive Order on AI, and provides insight into the US AISI, including recent guidance for companies to mitigate AI risks, partnerships with Anthropic and OpenAI, and the upcoming inaugural convening of the International Network of AI Safety Institutes.

    Michael Chertoff (Chertoff Group) and Miriam Vogel (EqualAI): Is your AI use violating the law?

    Oct 17, 2024 | 28:55

    In this special edition of #InAIWeTrust?, EqualAI President and CEO Miriam Vogel and former Secretary of Homeland Security Michael Chertoff sit down to discuss their recent co-authored paper, Is Your Use of AI Violating the Law? An Overview of the Current Legal Landscape. Special guest Victoria Espinel, CEO of BSA | The Software Alliance, moderates the conversation with the co-authors to explore key findings, current laws on the books, and potential liabilities from AI deployment and use that lawyers, executives, judges, and policy makers need to understand in our increasingly AI-driven world. The article can be found on our website. Read the Axios exclusive for more.

    Dr. Brennan Spiegel (Cedars-Sinai): AI in healthcare: Will AI help humans to thrive?

    Oct 3, 2024 | 22:41


    In this episode of #InAIWeTrust, Dr. Brennan Spiegel, Cedars-Sinai Director of Health Services Research and Chair of Digital Health Ethics, discusses his use of AI for increased efficiencies and to improve patient care, including co-founding Xaia, an AI mental health tool. He talks about the importance of human-centered design and how AI can enable doctors to better serve and care for patients. 

    Russell Wald (HAI): Innovating for the future - Can academia bring the next wave of AI innovation and train our future generations?

    Jul 30, 2024 | 41:24

    In this episode, Russell Wald, Deputy Director at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), underscores the importance of academic research around AI, key lessons from the AI Index Report, the need for uniform AI benchmarks, and the value of AI education for policy makers.

    Resources mentioned in this episode:
    • 2024 AI Index Report
    • AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries

    Dmitri Alperovitch (Silverado Policy Accelerator): The role of AI in “Cold War II”?

    May 29, 2024 | 30:56

    Dmitri Alperovitch, Co-Founder and Chairman of Silverado Policy Accelerator and Co-Founder of CrowdStrike, joins this week's episode of #InAIWeTrust to share his view that we are in a "Second Cold War" with China, the role of AI in this battle as well as in biotech and other key sectors, and the role of government in this arena.

    To hear more from Dmitri, tune into his podcast Geopolitics Decanted: https://podcast.silverado.org/episodes
    Dmitri's new book, "World on the Brink: How America Can Beat China in the Race for the Twenty-First Century", can be found here: https://www.amazon.com/dp/B0CF1TKHY2

    Scott Galloway (NYU): Wealth, influence, and AI in America: Will AI be a tool of societal control and who (should be) minding the store?

    May 8, 2024 | 40:42

    Scott Galloway, professor, entrepreneur, and best-selling author, joins this week's episode of In AI We Trust? to cover hot topics including: the impact of AI on businesses as a "corporate Ozempic," the political influence of "shallow fakes," the dangerous threat of AI to our increasingly vulnerable and lonely population, the role of business executives and regulators in guaranteeing our safety, and the potential of AI to unlock physical and mental health care.

    Resources mentioned in this episode:
    • Algebra of Wealth
    • Algebra of Happiness

    Ylli Bajraktari (SCSP): Will the U.S. be AI-ready by 2030?

    May 2, 2024 | 38:21

    In this week's episode of In AI We Trust?, Ylli Bajraktari, President and CEO of the Special Competitive Studies Project, joins us to discuss the implications of AI on national security, geopolitical competition, what the US government can do to establish a foundation for success in AI leadership, and the upcoming SCSP AI expo in DC (May 7-8).

    Resources mentioned in this episode:
    • Ylli Bajraktari Testimony at the Second Senate AI Insight Forum
    • The Next Chapter in AI
    • 2023 Year in Review: Six Items To Watch In 2024

    K.J. Bagchi (The Leadership Conference Education Fund Center for Civil Rights and Tech): Encoding Justice: Can we protect our civil rights in the AI age?

    Apr 10, 2024 | 33:17


    In this episode of In AI We Trust?, Koustubh “K.J.” Bagchi, VP of the recently established Center for Civil Rights and Technology, founded by The Leadership Conference Education Fund, discusses the impact of AI on democracy, including deepfakes and elections; the interplay between AI and privacy; and the state of federal civil rights actions on AI.

    Shelley Zalis, the Female Quotient (FQ): Can we achieve equality through algorithms?

    Mar 27, 2024 | 27:00

    In this episode of In AI We Trust?, Shelley Zalis, founder and CEO of The Female Quotient (FQ), joins us in celebration of Women's History Month. Tune in to learn about the FQ's Algorithm for Equality Manifesto, how AI can help close the gender gap, and why championing women in industry matters.

    Resources mentioned in this episode:
    • The Algorithm for Equality® Manifesto

    Helen Toner (CSET): How to govern AI in the face of uncertainty?

    Mar 13, 2024 | 35:04

    This week, Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET), joins In AI We Trust? to discuss decoding China's AI policies, AI's role in warfare, the potential impact of AI agents, challenges around regulating changing technology, and how to approach AI evaluations.

    Resources mentioned in this episode:
    • Regulating the AI Frontier: Design Choices and Constraints
    • Will China Set Global Tech Standards?
    • The rise of artificial intelligence raises serious concerns for national security

    Micky Tripathi (HHS): Is AI good for your health?: How HHS is approaching AI use to support innovation and reduce harms & inefficiencies in our health care system.

    Mar 7, 2024 | 35:57


    In this episode, Dr. Micky Tripathi, National Coordinator for Health Information Technology at the Department of Health and Human Services (HHS), shares how AI can improve patient care, current work at HHS to implement the WH Executive Order on AI, the potential risks that AI presents to the healthcare system and how transparency can improve AI outcomes in the healthcare space. 

    Dr. Athina Kanioura (PepsiCo): How to change your employee “DNA” to harness the power of AI (Hint: upskilling)

    Feb 21, 2024 | 39:28

    In this episode, Dr. Athina Kanioura, Executive Vice President and Chief Strategy and Transformation Officer at PepsiCo, updates us on PepsiCo's pioneering steps in providing technology and opportunities to its workers and partners of all sizes, her wish list for AI and privacy regulation, and the measures she has instilled at PepsiCo to establish accountability, transparency, and success in developing responsible AI practices.

    Andrew Ng: Should we fear an AI-driven existential crisis?

    Jan 24, 2024 | 43:52

    Join us this week with AI pioneer Andrew Ng (Founder of DeepLearning.AI, Landing AI, and Coursera, General Partner at AI Fund, and adjunct professor at Stanford University) as we discuss the likelihood of AI's existential threat, the merits of regulation, the transformative power of generative AI, and the need for greater AI literacy.

    Resources mentioned in this episode:
    • Written Statement of Andrew Ng Before the U.S. Senate Insight Forum

    Kent Walker (Google & Alphabet): How do we make AI safety the new norm? Google's approach to AI safety by design

    Jan 10, 2024 | 21:14

    Join us for our first episode of In AI We Trust? in 2024, featuring Kent Walker, President of Global Affairs and General Counsel at Google and Alphabet. In this episode, we examine the evolving global regulatory landscape, discuss the launch of Gemini – Google's latest and most advanced AI model, analyze emerging trends in AI capabilities, and delve into the development of Google's AI principles. Tune in to hear Kent share his thoughts on responsibility by design, the creation of AI safety norms, and how Google has worked to ensure safety in the midst of AI innovation.

    Raffi Krikorian (Emerson Collective): How to unleash AI “superpowers” for good?

    Dec 20, 2023 | 35:24

    Join us for a thought-provoking episode as we delve into empowering society with AI "superpowers." In this final In AI We Trust? episode of the year, our guest, Raffi Krikorian, CTO of Emerson Collective, shares his insights into the broader landscape of AI and using technology to amplify societal impact. Discover his vision for how AI will affect elections and democracy, his call for increased government and academic support for AI's success, his advocacy for widespread AI and tech education, his push to redefine success metrics in AI, and more. Tune in to explore the transformative potential of AI when harnessed for positive change.

    Resources mentioned in this episode:
    • Emerson Collective 2023 Demo Day

    Decoding Big Tech's Impact on AI: Insights with Ross Andersen of The Atlantic

    Dec 13, 2023 | 35:57


    Join us this week as we delve into the pivotal role played by big tech and its CEOs in shaping AI development and policies. Ross Andersen, staff writer at The Atlantic, offers exclusive insights into the recent changes at OpenAI and discusses AI's historical significance, China's geopolitical influence, and the phenomenon of “foomscrolling.”

    Amanda Levendowski (Georgetown University): Can AI and copyright law coexist?

    Nov 22, 2023 | 38:50


    Georgetown University Law Center Associate Professor Amanda Levendowski, and guest co-host Karyn Temple, Senior Executive Vice President and Global General Counsel for the Motion Picture Association (MPA) and EqualAI board member, join In AI We Trust? to explore the protections and limits of copyright law. Tune in to learn more about training AI systems and methods for evaluating the potential harms.

    Vijay Karunamurthy (Scale AI): How do companies safely unlock the value of AI? (Hint – through human touch!)

    Nov 1, 2023 | 38:35


    Vijay Karunamurthy, field Chief Technology Officer at Scale AI, joins this week's episode of In AI We Trust? to discuss how companies can responsibly harness the benefits of AI, the necessary role humans play in that process, and how a diversity of expertise is key to developing functional guardrails. Tune in to hear more on evaluating AI systems at DEFCON 31, the potential to expand our horizons through public-private partnerships, and the Biden-Harris Administration Voluntary AI Commitments.

    Paul Rennie (British Embassy): Why do we need the UK AI Safety Summit (next week)?

    Oct 25, 2023 | 39:00


    Paul Rennie, the Head of the Global Economy Group at the British Embassy in Washington D.C., joins this week's episode of In AI We Trust? to discuss the upcoming U.K. AI Safety Summit, the U.K.'s approach to AI regulation, and the international regulatory landscape of AI. Tune in to learn more about who is participating in the upcoming Summit, what it means to be a responsible AI actor today, and how AI can be used to promote global good.

    Victoria Espinel (BSA), Reggie Townsend (SAS), and Dawn Bloxwich (Google Deepmind): How can companies responsibly integrate AI into their businesses? (Part 2)

    Oct 11, 2023 | 39:18


    In part two of our special episode of In AI We Trust?, EqualAI advisors Victoria Espinel and Reggie Townsend discuss how they got into the field of AI, their involvement in the EqualAI Badge Program and their experiences guiding its participants, and, along with Dawn Bloxwich, discuss how companies can benefit from their co-authored white paper: An Insider's Guide to Designing and Operationalizing a Responsible AI Governance Framework.

    Xuning (Mike) Tang (Verizon), Diya Wynn (AWS), and Catherine Goetz (LivePerson): How can companies responsibly integrate AI into their businesses? (Part 1)

    Oct 3, 2023 | 38:35

    In this week's special episode of In AI We Trust?, we interview three of our EqualAI Badge Program alumni—Xuning (Mike) Tang (Verizon), Diya Wynn (AWS), and Catherine Goetz (LivePerson)—to discuss their journeys in the responsible AI field, share their highlights from the EqualAI Badge Program and AI Summit, and underscore the main takeaways from our co-authored white paper: An Insider's Guide to Designing and Operationalizing a Responsible AI Governance Framework.

    Representative Ted Lieu (D-CA): Can Congress regulate AI?

    Sep 27, 2023 | 28:49

    Representative Ted Lieu (D-CA) joins this week's episode of In AI We Trust? to discuss how Congress should approach AI legislation, the impact of generative AI, and U.S. AI efforts on the global stage. Tune in to learn more about Representative Lieu's computer science-focused approach to AI policy and more.

    Secretary Michael Chertoff and Lucy Thomson (ABA): How is AI reshaping the legal landscape?

    Aug 30, 2023 | 50:42

    Tune into this week's episode of In AI We Trust? with former United States Secretary of Homeland Security Michael Chertoff and Lucy Thomson (American Bar Association) to learn the ways in which AI is changing the legal landscape, how the ABA is tackling this issue (spoiler alert: we applaud the launch of the new AI TF), and the Secretary's "Three D's" of AI governance.

    Sarah Hammer (Wharton School) and Dr. Philipp Hacker (European University Viadrina): Can AI accelerate the UN Sustainable Development Goals (SDGs)?

    Jul 26, 2023 | 59:22

    Professor Sarah Hammer, Executive Director at the Wharton School of the University of Pennsylvania and leader of the Wharton Cypher Accelerator, and Dr. Philipp Hacker, Chair for Law and Ethics of the Digital Society at the European New School of Digital Studies at European University Viadrina, join In AI We Trust? this week to debrief their recent #AIforGood Conference. Listen to the discussion for insights on how financial regulation, sustainability in AI, content moderation, and other opportunities for international collaboration around AI can help advance the UN SDGs.

    Resources mentioned in this episode:
    • AI for Good Global Summit
    • AI for Good Global Summit 2023: Input Statement by Professor Philipp Hacker
    • Regulating ChatGPT and other Large Generative AI Models
    • The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future
    • Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination Under EU Law
    • Sustainable AI Regulation
    • Legal and technical challenges of large generative AI models

    Chair Charlotte Burrows (EEOC): Is your AI system violating civil rights laws?

    Jul 12, 2023 | 45:40


    In this week's episode, we are joined by Chair of the U.S. Equal Employment Opportunity Commission (EEOC) Charlotte Burrows, who highlights the EEOC's work to address AI proliferation in the employment sphere. She discusses the need to increase education of the public on how AI is being used, EEOC guidance on key civil rights bills such as the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act of 1964 (Title VII), as well as key points employers should be aware of when deploying AI.

    Kevin McKee (DeepMind): How does AI influence the core of being human?

    Jul 5, 2023 | 35:57

    Tune in to this week's episode of In AI We Trust?, where Kevin McKee, Senior Research Scientist at Google DeepMind, discusses issues of AI fairness, AI's impact on the LGBT+ community, and the balance between developing AI that humans can trust and the anthropomorphization of technology. Kevin leads research projects focused on machine learning, social psychology, and sociotechnical systems and has worked on algorithmic development and evaluation, environment design, and data analysis.

    Resources mentioned in this episode:
    • Humans may be more likely to believe disinformation generated by AI
    • Countries Must Act Now Over Ethics of Artificial Intelligence
    • Online hate and harassment continues to rise

    Chris Wood (LGBT Tech): How can we ensure our LGBT+ voices are heard through our data?

    Jun 28, 2023 | 45:00

    This week on In AI We Trust?, Executive Director of LGBT Tech Chris Wood joins Miriam Vogel and guest co-host Kathy Baxter for a special episode in celebration of Pride Month. Join this week's conversation on the duality of technology for the LGBT+ community – how it can be an impactful medium to foster connection or a harmful tool leveraged against the same individuals – the significance of diversity in tech, the complexity of representation in our datasets, and his important research and other initiatives that range from broadband access in rural communities to building an AI of their own.

    Resources mentioned in this episode:
    • LGBT Tech Website
    • Vision For Inclusion: An LGBT Broadband Future
    • LGBT Tech Programs

    Gilman Louie (America's Frontier Fund, CEO of In-Q-Tel, NSCAI Comm'r): How will we respond to this ‘Sputnik' moment?

    May 24, 2023 | 53:24

    Gilman Louie is CEO and co-founder of America's Frontier Fund, CEO of In-Q-Tel, and an NSCAI Commissioner. Tune into this week's episode of In AI We Trust?, where Gilman shares his thoughts on the government's role in regulating, funding, and convening key stakeholders to promote responsible AI. Gilman invokes similar moments of technological innovation in our history to contextualize the opportunity the U.S. has at this moment to set the standards in the AI race, and considers the challenges that derive from our "click economy." Hear these thoughts and more in this great episode.

    Rep. Chrissy Houlahan (D-PA): How do we prepare Congress for the age of AI?

    May 3, 2023 | 30:01

    Meet one of the Bad A#%* women in Congress, Representative Chrissy Houlahan (D-PA). She is a trailblazer: a strong advocate for and accomplished practitioner in STEAM (science, technology, engineering, art and math) as an engineer, Air Force veteran, successful entrepreneur, and former chemistry teacher. This week on In AI We Trust?, Miriam Vogel and special guest co-host Victoria Espinel of #BSA ask Representative Houlahan to share her unique perspective on why – and how – Congress must do more to support our veterans, women, and entrepreneurship, and how this relates to her work in Congress on AI policy.

    Dr. Haniyeh Mahmoudian (DataRobot): Who should be involved in AI ethics?

    Apr 26, 2023 | 41:40

    In this episode of In AI We Trust?, Dr. Haniyeh Mahmoudian, Global AI Ethicist at DataRobot, provides insight into the timely and critical role of an AI ethicist. Haniyeh explains how culture is a key element of responsible AI development. She also reflects on the questions to ask in advance of designing an AI model and the importance of engaging multiple stakeholders to design AI effectively. Tune in to this episode to learn these and other insights from an industry thought leader.

    Resources mentioned in this episode:
    • How to Tackle AI Bias (Haniyeh Mahmoudian, PhD)

    Justin Hotard (Hewlett Packard Enterprise): Are local communities and data the key to unlocking better AI?

    Apr 5, 2023 | 40:14

    Justin Hotard leads the High Performance Computing (HPC) & AI business group at Hewlett Packard Enterprise (HPE). Tune in to In AI We Trust? this week as he discusses supercomputing, HPE's commitment to open source models for global standardization, and using responsible data to ensure responsible AI.

    Resources mentioned in this episode:
    • What are supercomputers and why are they important? An expert explains (Justin Hotard & the World Economic Forum)
    • Fueling AI for good with supercomputing (Justin Hotard & HPE)
    • Hewlett Packard Enterprise ushers in next era in AI innovation with Swarm Learning solution built for the edge and distributed sites (HPE)

    Jordan Crenshaw (U.S. Chamber of Commerce): Can your company survive without AI adoption?

    Mar 10, 2023 | 38:01

    Based on the testimony of 87 witnesses from 5 field hearings across the US, the U.S. Chamber of Commerce's bipartisan AI Commission on Competition, Inclusion, and Innovation released a report yesterday addressing the state of AI. Tune in this week to hear the U.S. Chamber's Technology Engagement Center (C_TEC) VP, Jordan Crenshaw, share key takeaways from this and other recent C_TEC reports, why tech issues are business issues, the importance of digitizing government data, and the critical impact of tech on small businesses.

    Materials mentioned in this episode:
    • The U.S. Chamber's AI Commission report (U.S. Chamber of Commerce)
    • Investing in Trustworthy AI (U.S. Chamber of Commerce & Deloitte)
    • U.S. Chamber Artificial Intelligence Principles (U.S. Chamber of Commerce)
    • Impact of Technology on U.S. Small Businesses (U.S. Chamber of Commerce Technology Engagement Center)

    Elham Tabassi and Reva Schwartz (NIST): What's the big deal about the NIST AI Risk Management Framework (AI RMF)?

    Feb 6, 2023 | 50:21

    Elham Tabassi and Reva Schwartz – two AI leaders from the National Institute of Standards and Technology (NIST) – join us this week to discuss the AI Risk Management Framework #AIRMF released on January 26th thanks to the herculean efforts of our guests. Tune in to find out why Miriam Vogel and Kay Firth-Butterfield believe the AI RMF will be game changing. Learn the purpose behind the AI RMF; the emblematic 18-month, multi-stakeholder, transparent process to design it; how they made it 'evergreen' at a time when AI progress is moving at lightning speed; and much more.

    Materials mentioned in this episode:
    • AI Risk Management Framework (NIST)
    • NIST AI Risk Management Framework Playbook (NIST)
    • Perspectives about the NIST Artificial Intelligence Risk Management Framework (NIST)

    Davos in Review: Should we hit 'pause' on generative AI?

    Feb 2, 2023 | 34:35

    The annual World Economic Forum (WEF) meeting at Davos gathers leading thinkers in government, business, and civil society to discuss current global economic and social challenges. This week, listen to WEF Executive Committee Member and our own co-host Kay Firth-Butterfield and Miriam Vogel discuss why this was Kay's "best Davos yet". Not surprisingly, generative AI and ChatGPT were among the hottest topics. Learn insights gleaned on generative AI's power and limitations, the key role that investors play in the development and deployment of responsible AI, and how AI can predict wildfires and help fight the climate crisis. Leave a 5-star rating!

    Davos discussions and materials mentioned in this episode:
    • A Conversation with Satya Nadella, CEO of Microsoft
    • Generative AI
    • Investing in AI, with Care
    • AI for Climate Adaptation
    • How AI Fights Wildfires
    • Satya Nadella Says AI Golden Age Is Here and 'It's Good for Humanity'
    • These were the biggest AI developments in 2022. Now we must decide how to use them (Kay Firth-Butterfield)

    Dr. Stuart Russell (UC Berkeley): Are we living in an AGI world?

    Jan 18, 2023 | 51:03

    Dr. Stuart Russell (CS Professor, UC Berkeley) has kept us current on AI developments for decades and, in this week's episode, prepares us for the headlines we'll hear about this week @Davos and in the coming year. He shares his thoughts and concerns on ChatGPT, Lethal Autonomous Weapons Systems, how the future of work might look through an AI lens, and a human-compatible design for AI. Listen to this episode here and subscribe to ensure you catch other important upcoming discussions.

    Materials mentioned in this episode:
    • Davos 2023, the World Economic Forum
    • Radio Davos, A World Economic Forum Podcast

    2022 Year in Review: Are we ready for what's coming in AI?

    Jan 11, 2023 | 33:53

    In this special year-in-review edition of "In AI We Trust?", co-hosts Kay Firth-Butterfield (@KayFButterfield) and Miriam Vogel (@VogelMiriam) take a look back at the key themes and insights from their conversations. From interviews with thought leaders, government officials, and senior executives in the field, we explore progress and challenges from the past year in the quest for trustworthy AI. We also look ahead to what you can expect to see and encounter, including key issues that are likely to emerge in AI in 2023. Join us as we reflect and gear up for an exciting year in the accelerated path toward game-changing and responsible AI.

    Materials mentioned in this episode:
    • Davos 2023, the World Economic Forum
    • A 72-year-old congressman goes back to school, pursuing a degree in AI, The Washington Post
    • Board Responsibility for Artificial Intelligence Oversight, Miriam Vogel and Robert G. Eccles, Harvard Law School Forum on Corporate Governance
    • 5 ways to avoid artificial intelligence bias with 'responsible AI', Miriam Vogel and Kay Firth-Butterfield

    Dr. Suresh Venkatasubramanian (White House OSTP/Brown University): Can AI be as safe as our seatbelts?

    Dec 19, 2022 | 46:00

    In this episode, we are joined by Dr. Suresh Venkatasubramanian, a former official at the White House Office of Science and Technology Policy (OSTP) and CS professor at Brown, to discuss his work in the White House developing policy, including the AI Bill of Rights Blueprint. Suresh also posits that many of today's AI challenges stem from a failure of imagination, and discusses the need to engage diverse voices in AI development and the evolution of safety regulations for new technologies.

    Materials mentioned in this episode:
    • Blueprint for an AI Bill of Rights (The White House)

    Joaquin Quiñonero Candela (LinkedIn): Can we meet business goals AND attain responsible AI? (spoiler: we can and must)

    Dec 7, 2022 | 43:56

    This week, Joaquin Quiñonero Candela (LinkedIn, formerly at Facebook and Microsoft) joins us to discuss AI storytelling; ethics by design; the imperative of diversity to create effective AI; and strategies he uses to make responsible AI a priority for the engineers he manages, policy-makers he advises, and other important stakeholders.

    Materials mentioned in this episode:
    • Technology Primer: Social Media Recommendation Algorithms (Harvard Belfer Center)
    • Finding Solutions: Choice, Control, and Content Policies; a conversation between Karen Hao and Joaquin Quiñonero Candela hosted live by the Harvard Belfer Center

    Deputy Secretary Graves (DOC) answers the question: Can We Maintain Our AI Lead? (spoiler alert: We are AI Ready!)

    Nov 16, 2022 | 38:22

    The Department of Commerce plays a key role in the USG's leadership in AI given the multiple ways AI is used, patented, and governed by the Department. In this special episode, hear from Commerce Deputy Secretary Don Graves on how the US intends to maintain leadership in AI, including through its creation of standards to attain trustworthy AI, working with our allies, and ensuring an inclusive and ready AI workforce.

    Materials mentioned in this episode:
    • Proposed Law Enforcement Principles on the Responsible Use of Facial Recognition Technology Released from the World Economic Forum
    • Artificial Intelligence: Detecting Marine Animals with Satellites (NOAA Fisheries)

    Carl Hahn (NOC): When your AI reaches from the cosmos to the seafloor, and the universe in between, how can you ensure it is safe and trustworthy?

    Nov 2, 2022 | 44:22

    Carl Hahn, Vice President and Chief Compliance Officer at Northrop Grumman, one of the world's largest military technology providers, joins us on this episode to help answer this question that he addresses daily. Carl shares his perspective on the impact of the DoD principles, how governments and companies need to align on the "how" of developing and using AI responsibly, and much more.

    Materials mentioned in this episode:
    • NAIAC Field Hearing @ NIST YouTube Page
    • "DOD Adopts 5 Principles of Artificial Intelligence Ethics" (Department of Defense)
    • "Defense AI Technology: Worlds Apart From Commercial AI" (Northrop Grumman)
    • Smart Toys (World Economic Forum): Smart Toy Awards

    Mark Brayan (Appen): For whom is your data performing?

    Oct 12, 2022 | 28:52

    In this episode, Mark Brayan focuses on a key ingredient for responsible AI: ethically sourced, inclusive data. Mark is the CEO and Managing Director of Appen, which provides training data for thousands of machine learning and AI initiatives. Good quality data is imperative for responsible AI (garbage in, garbage out), and part of that equation is making sure it is sourced inclusively, responsibly, and ethically. When developing and using responsible AI, it is critically important to get your data right by asking the right questions: for whom is your data performing – and for whom could it fail?

    Subscribe to catch each new episode on Apple, Spotify, and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
