Podcasts about trustworthy AI

  • 136 PODCASTS
  • 189 EPISODES
  • 38m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 23, 2025 LATEST



Best podcasts about trustworthy AI

Latest podcast episodes about trustworthy AI

Alter Everything
183: Getting Real Business Value from AI

Alter Everything

Play Episode Listen Later Apr 23, 2025 33:12


This week on Alter Everything, we chat with Scott Jones and Treyson Marks from DCG Analytics about the history and misconceptions of AI, the importance of data quality, and how Alteryx can serve as a powerful tool for pre-processing AI data. Topics include the role of humans in auditing AI outputs and the critical need for curated data to ensure trustworthy results. Through real-world use cases, this episode explores how AI can significantly enhance analytics and decision-making across industries.

Panelists:
Treyson Marks, Managing Partner @ DCG Analytics - LinkedIn
Scott Jones, Principal Analytics Consultant @ DCG Analytics - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes: DCG Analytics

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here! This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.

Pondering AI
AI Literacy for All with Phaedra Boinodiris

Pondering AI

Play Episode Listen Later Apr 2, 2025 43:24


Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn't enough; and the hard work required to develop good AI.

Phaedra Boinodiris is IBM's Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. A transcript of this episode is here.

Additional Resources:
Phaedra's Website - https://phaedra.ai/
The Future World Alliance - https://futureworldalliance.org/

Microsoft Business Applications Podcast
Is AI over-hyped? Is this AI wave going to become something else?

Microsoft Business Applications Podcast

Play Episode Listen Later Mar 18, 2025 35:24 Transcription Available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

FULL SHOW NOTES: https://www.microsoftinnovationpodcast.com/666

We explore whether AI is overhyped or dangerously underhyped, examining the disconnect between those creating AI technology and those selling it without adequately addressing trustworthy AI concerns.

TAKEAWAYS
• The Microsoft AI Tour event demonstrated excellent technical content with a strong focus on trustworthy AI
• There's a dangerous disconnect between the people who make AI technology and those who sell it regarding responsible AI implementation
• Trustworthy AI doesn't mean stopping innovation but preventing potential calamities
• The scale of AI's impact may be drastically underestimated, similar to our inability to truly comprehend "65 million years since the dinosaurs"
• AI enables processing information at unprecedented scale, creating extraordinary risks in surveillance and human rights contexts
• Corporate discussions about completely replacing customer service departments with AI raise serious socioeconomic concerns
• Shadow AI applications being developed without proper governance represent significant risks
• Containing AI's risks while harnessing its benefits requires education, curiosity, and political wisdom
• Book recommendations: "The Coming Wave" by Mustafa Suleyman and "Origin" by Dan Brown

Get educated and don't rely on echo chambers or news articles - read in-depth material from experts to form your own opinions about AI's trajectory and implications.

OTHER RESOURCES
90 Day Mentoring Challenge - https://www.90daymc.com/
Support the show - https://www.buymeacoffee.com/nz365guy

This year we're adding a new show to our lineup - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.
DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot. Early bird tickets are on sale now, and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff: https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge. We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem. Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days - get started today!

Support the show

If you want to get in touch with me, you can message me here on LinkedIn. Thanks for listening.

Le chemin de ma philosophie
54. Ethical AI's Dirty Secret

Le chemin de ma philosophie

Play Episode Listen Later Mar 10, 2025 3:34


Every “trustworthy” AI system quietly betrays at least one sacred principle. Ethical AI forces brutal trade-offs: prioritizing any one aspect among fairness, accuracy, and transparency compromises the others. It's a messy game of Jenga: pull one block (like fairness), and accuracy wobbles; stabilize transparency, and performance tumbles. But why can't you be fair, accurate, AND transparent? And is there a solution?

The Trilemma in Action
Imagine you try to create ethical hiring algorithms. Prioritize diversity and you might ghost the best candidates. Obsess over qualifications and historical biases sneak in like uninvited guests. Same with chatbots. Force explanations and they'll robot-splain every comma. Let them “think” freely? You'll get confident lies about Elvis running a B&B on a Mars colony.

Why Regulators Won't Save Us
Should we set up laws that dictate universal error thresholds or fairness metrics? Regulators wisely steer clear of rigid one-size-fits-all rules. Smart move. They acknowledge AI's messy reality, where a 3% mistake margin might be catastrophic for autonomous surgery bots but trivial for movie recommendation engines.

The Path Forward?
Some companies now use “ethical debt” trackers, logging trade-offs as rigorously as technical debt. They document their compromises openly, like a chef publishing rejected recipe variations alongside their final dish. Truth is: the real AI dilemma is that no AI system maximizes fairness, accuracy, and transparency simultaneously. So, what could we imagine? Letting users pick their poison with trade-off menus: “Click here for maximum fairness (slower, dumber AI)” or “Turbo mode (minor discrimination included)”? Or how about launching bias bounties: pay hackers to hunt unfairness and turn ethics into an extreme sport? Obviously, it's complicated.

The Bullet-Proof System
Sorry, there's no bullet-proof system, since value conflicts will always demand context-specific sacrifices. After all, ethics isn't about avoiding hard choices; it's about admitting we're all balancing on a tightrope, and inviting everyone to see the safety net we've woven below.

Should We Hold Machines to Higher Standards Than Humans?
Trustworthy AI isn't achieved through perfect systems, but through processes that make our compromises legible, contestable, and revisable. After all, humans aren't fair, accurate, and transparent either.

Off the Radar
The Artificial Future of Forecasting

Off the Radar

Play Episode Listen Later Feb 25, 2025 45:10


Artificial Intelligence has become a hot-button issue, with questions about AI accuracy and precision. But this week, we're exploring the role of artificial intelligence in weather forecasting! Come Off the Radar with us as we learn about how generative AI modeling can now use historical weather data to make hyper-local predictions about future weather probabilities. We'll be talking to Dr. Amy McGovern from the National Science Foundation's AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. We'll also be chatting with Ilan Price, a Senior Research Scientist at Google DeepMind whose work centers around using AI in weather forecasting. If you rely on your phone to check the weather forecast, you won't want to miss this one!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

CISO Tradecraft
#220 - Executive Updates to AI

CISO Tradecraft

Play Episode Listen Later Feb 17, 2025 43:04 Transcription Available


In this CISO Tradecraft episode, host G. Mark Hardy delves into the recent U.S. presidential executive orders impacting AI and their implications for cybersecurity professionals. Learn about the evolution of AI policies across administrations and how they influence national security, innovation, and the strategic decisions of CISOs. Discover key directives, deregulatory moves, and practical steps you can take to secure your AI systems in an era of rapidly changing regulations. Plus, explore the benefits of using AI tools like ZeroPath to bolster your cybersecurity efforts.

Big Thanks to our Sponsor: ZeroPath - https://zeropath.com/

Transcripts: https://docs.google.com/document/d/1Nv27tpDQs2fjdOedJOi0LhlkyQ5N5dKt

Links:
https://www.americanbar.org/groups/public_education/publications/teaching-legal-docs/what-is-an-executive-order-/
https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence
https://www.csis.org/analysis/made-china-2025
https://www.researchgate.net/publication/242704112_China's_15-year_Science_and_Technology_Plan
https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government
https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity
https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
https://www.presidency.ucsb.edu/documents/executive-order-14148-initial-rescissions-harmful-executive-orders-and-actions
https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting-innovation-in-the-nations-cybersecurity
https://www.cisecurity.org/controls/cis-controls-list

Chapters:
00:00 Introduction to AI Policy Shifts
00:23 AI Tool for Cybersecurity: ZeroPath
01:12 Understanding Executive Orders
02:44 EO 13859: Maintaining American Leadership in AI
05:42 EO 13960: Trustworthy AI in Federal Government
07:10 EO 14028: Strengthening U.S. Cybersecurity
09:38 EO 14110: Safe and Trustworthy AI Development
11:09 EO 14148: Rescinding AI Policies
12:21 EO 14179: Removing Barriers to AI Innovation
15:26 EO 14144: Strengthening Cybersecurity Innovation
37:19 Mapping Executive Orders to CIS Controls
40:15 Conclusion and Key Takeaways

Digital Pathology Podcast
120: DigiPath Digest #21 | AI's Role in Prostate & Breast Cancer Diagnosis and Collaborative Annotation Tools

Digital Pathology Podcast

Play Episode Listen Later Jan 26, 2025 46:03 Transcription Available


Send us a text

Welcome to the 21st edition of DigiPath Digest! In this episode, together with Dr. Aleksandra Zuraw, you will review the latest digital pathology abstracts and gain insights into emerging trends in the field. Discover the promising results of the PSMA PET study for prostate cancer imaging, explore the collaborative open-source platform HistoColAI for enhancing histology image annotation, and learn about AI's role in improving breast cancer detection. Dive into topics such as the role of AI in renal histology classification, the innovative TrueCam framework for trustworthy AI in pathology, and the latest advancements in digital tools like QuPath for nephropathology. Stay tuned to elevate your digital pathology game with cutting-edge research and practical applications.

00:00 Introduction to DigiPath Digest #21
01:22 PSMA PET in Prostate Cancer
06:49 HistoColAI: Collaborative Digital Histology
12:34 AI in Mammogram Analysis
17:21 Blood-Brain Barrier Organoids for Drug Testing
22:02 Trustworthy AI in Lung Cancer Diagnosis
30:09 QuPath for Nephropathology
35:30 AI Predicts Endocrine Response in Breast Cancer
40:04 Comprehensive Classification of Renal Histologic Types
45:02 Conclusion and Viewer Engagement

Links and Resources:
Subscribe to Digital Pathology Podcast on YouTube
Free E-book "Pathology 101"
YouTube (unedited) version of this episode
Try Perplexity with my referral link
My new page built with Perplexity
HistoColAI GitHub Page
Publications Discussed Today:

AI For Pharma Growth
E148 | What is trustworthy AI and how do we achieve it?

AI For Pharma Growth

Play Episode Listen Later Jan 15, 2025 35:00


In this episode, we explore the principles of trustworthy AI with Pamela Gupta, Chief AI Governance and Trust Officer and founder of Trusted AI. Pamela shares her journey of building a company dedicated to responsible and ethical AI. She introduces the eight essential pillars of trustworthy AI, focusing on critical aspects such as transparency, fairness, bias mitigation, privacy, and security. The conversation delves into real-world examples of AI failures, like bias in healthcare models, and how these lessons underline the importance of robust, human-centric design in AI systems.

We discuss the challenges of ensuring diverse data representation, the dangers of AI hallucinations, and the strategies businesses can adopt to ensure ethical AI implementation. Pamela highlights the importance of accountability and the role of strategic governance in achieving resilient and trustworthy AI, especially in high-risk areas like healthcare.

Guest: Pamela Gupta – Founder & CEO of Trusted AI; expert in AI governance, cybersecurity, and risk management.

Topics Covered:
What is trustworthy AI and why it matters
The eight pillars of trustworthy AI
The risks of bias and fairness in AI systems
Transparency and explainability in AI models
Privacy and security in AI frameworks
Accountability in AI development and deployment
Real-world examples of AI failures and lessons learned
Human-centric design for ethical AI
How to avoid pitfalls when adopting AI in healthcare and other high-risk industries

Connect with Pamela Gupta:
Website: Trusted AI
LinkedIn: Pamela Gupta

About the Podcast:
AI For Pharma Growth is the podcast from pioneering pharma artificial intelligence entrepreneur Dr. Andree Bates. The show aims to demystify AI for all pharma professionals, from biotech startups to Big Pharma. Dr. Bates shares her expertise and insights into how AI-powered tools can save time, improve outcomes, and grow pharma businesses. Each episode features industry leaders and innovators showcasing how AI can be applied to areas like sales, marketing, production, social media, customer insights, and much more.

For more resources and information: AI For Pharma Growth

Tech It Out
ENCORE PRESENTATION! SanDisk launches first-of-its-kind drive + Amazon talks pet tech, celeb Kevin Wendt on air purifiers, and IBM on AI

Tech It Out

Play Episode Listen Later Jan 3, 2025 39:08


SanDisk Desk Drive fuses huge capacity with fast speeds! Learn all about it here first with Christina Garza, Director of Product Marketing at Western Digital.

Bachelorette alum and firefighter Kevin Wendt drops by to talk about the first air purifier from LG.

Have dogs or cats? Amazon's Melissa Mohr, Director of Smart Home, is a guest on Tech It Out to share what's new and innovative for your furry friends.

Phaedra Boinodiris, IBM Consulting's Global Leader for Trustworthy AI, stops by to talk “generative AI” ethics and governance.

Thank you to Intel and Visa for your incredible support!

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 417: Using Microsoft AI to Create Real Magic with Coca-Cola

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Dec 6, 2024 24:30


Send Everyday AI and Jordan a text message

Did you see that Coca-Cola holiday AI commercial!? It's been all the buzz lately. More than a dozen humans worked on the project. Think that's shocking? Wait until you hear the REAL story and a new way that Coca-Cola is partnering with Microsoft to create some real magic.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan, Pratik and Marco questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. AI and Coca-Cola
2. AI beyond Chatbots
3. AI and Microsoft Azure Foundry
4. AI Usage in Coca-Cola Campaigns

Timestamps:
00:00 AI empowers creativity and consumer connection daily.
05:57 Custom snow globes created using Microsoft technology.
10:11 Discussing Coca-Cola's nostalgic Santa Claus depiction challenges.
12:19 Reimagining Santa authentically with cutting-edge technology.
15:13 Started Gen AI and OpenAI collaboration in 2022.
18:30 Combining human creativity and AI technology.
22:27 Trustworthy AI is crucial for responsible deployment.
23:50 Generative AI offers unique consumer connections.

Keywords: AI, generative AI, Copilot, productivity, consumers, everyday AI, artificial intelligence, Microsoft WorkLab, Azure AI, Microsoft Ignite conference, Pratik Thakar, Coca-Cola Company, digital twin, Leonardo AI, chatbot, Outlook calendar, Marco Casalaina, Create Real Magic campaign, Microsoft, marketing, commercials, AI commercials, AI Foundry, programming model, trustworthy AI, ethical prompting strategy, technology, digital marketing, nostalgia, Coca-Cola Santa.

BlockHash: Exploring the Blockchain
Ep. 459 Salman Avestimehr | The AI Economy with ChainOpera

BlockHash: Exploring the Blockchain

Play Episode Listen Later Dec 2, 2024 28:39


For episode 459, Co-founder & CEO Salman Avestimehr joins Brandon Zemp to discuss ChainOpera, a decentralized AI platform and a community-driven generative AI application ecosystem. He is also the Dean's Professor of ECE and CS at the University of Southern California (USC), the inaugural director of the USC-Amazon Center on Trustworthy AI, and co-founder of TensorOpera. He is an expert in machine learning, information theory, security/privacy, and blockchain systems, with more than 10 years of R&D leadership in both academia and industry. He is a United States Presidential award winner for his profound contributions in information technology, and a Fellow of IEEE. He received his PhD from UC Berkeley/EECS in 2008, and has held Advisory positions at various Tech companies, including Amazon. ⏳ Timestamps: 0:00 | Introduction 0:55 | Who is Salman Avestimehr? 5:04 | Web3 interest on College campuses 6:00 | What is ChainOpera? 8:01 | What is the AI Economy? 13:18 | AI Agents 14:14 | ChainOpera's Decentralized AI Platform 17:44 | ChainOpera's infrastructure 20:43 | Security & Privacy of Decentralized AI 25:46 | ChainOpera 2025 Roadmap 27:28 | ChainOpera website, social media & community

Leadership Reimagined
Leadership for a Trustworthy AI World

Leadership Reimagined

Play Episode Listen Later Nov 27, 2024 36:21


Janice is joined by Lara Abrash, Chair at Deloitte US, the largest multiprofessional services network in the US, where they talk about how to connect ideas, innovations, and industries to create prosperity for clients, society, and the planet. They discuss how Lara's family shaped her success within the company today, as well as the implications of emerging AI.

Tags: janice, ellig, lara, abrash, chair, deloitte, us, ai, professional, technology, clients, society, family, lesson, mentor

Practical AI
The path towards trustworthy AI

Practical AI

Play Episode Listen Later Oct 29, 2024 51:46


Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST's 'AI Risk Management Framework' (AI RMF) within the context of the White House's 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence'.

Changelog Master Feed
The path towards trustworthy AI (Practical AI #293)

Changelog Master Feed

Play Episode Listen Later Oct 29, 2024 51:46


Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST's 'AI Risk Management Framework' (AI RMF) within the context of the White House's 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence'.

Eye On A.I.
#213 Mark Surman: How Mozilla Is Shaping the Future of Open-Source AI

Eye On A.I.

Play Episode Listen Later Oct 13, 2024 47:14


This episode is sponsored by Oracle. AI is revolutionizing industries, but it needs compute power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Stay ahead like Uber and Cohere. If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic, take a free test drive of OCI at https://oracle.com/eyeonai

In this episode of the Eye on AI podcast, we sit down with Mark Surman, President of Mozilla, to explore the future of open-source AI and how Mozilla is leading the charge for privacy, transparency, and ethical technology.

Mark shares Mozilla's vision for AI, detailing the company's innovative approach to building trustworthy AI and the launch of Mozilla AI. He explains how Mozilla is working to make AI open, accessible, and secure for everyone, just as it did for the web with Firefox. We also dive into the growing importance of federated learning and AI governance, and how Mozilla Ventures is supporting groundbreaking companies like Flower AI.

Throughout the conversation, Mark discusses the critical need for open-source alternatives to proprietary models like OpenAI's and Meta's LLaMA. He outlines the challenges with closed systems and highlights Mozilla's work in giving users the freedom to choose AI models directly in Firefox.

Mark provides a fascinating look into the future of AI and how open-source technologies can create trillions in economic value while maintaining privacy and inclusivity. He also sheds light on the global race for AI innovation, touching on developments from China and the impact of public AI funding.

Don't forget to like, subscribe, and hit the notification bell to stay up to date with the latest trends in AI, open-source tech, and machine learning!

Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Introduction to Mark Surman and Mozilla's Mission
(02:01) The Evolution of Mozilla: From Firefox to AI
(04:40) Open-Source Movement and Mozilla's Legacy
(06:58) The Role of Open-Source in AI
(11:06) Advancing Federated Learning and AI Governance
(14:10) Integrating AI Models into Firefox
(16:28) Open vs Closed Models
(22:09) Partnering with Non-Profit AI Labs for Open-Source AI
(25:08) How Meta's Strategy Compares to OpenAI and Others
(27:58) Global Competition in AI Innovation
(31:17) The Cost of Training AI Models
(33:36) Public AI Funding and the Role of Government
(37:40) The Geopolitics of AI and Open Source
(41:35) Mozilla's Vision for the Future of AI and Responsible Tech

The Azure Security Podcast
Episode 102: Entra ID Purple-teaming with Dr Azure AD

The Azure Security Podcast

Play Episode Listen Later Oct 7, 2024 36:42


In this episode, Michael and Sarah talk to Nestori Syynimaa about Entra ID security and his purple-team tool, AADInternals. We also cover the latest security news about the Secure Future Initiative (SFI), MFA for the Azure Portal, Playwright, WordPress, NSG, Bastion, Azure Functions, MS Ignite, App Service, Defender for Cloud, Containers, Azure Monitor, AKS, Trustworthy AI, and Azure AI Content Safety.

https://aka.ms/azsecpod

The Road to Accountable AI
Reggie Townsend: The Deliberate and Intentional Path to Trustworthy AI

The Road to Accountable AI

Play Episode Listen Later Sep 26, 2024 37:15 Transcription Available


In this episode, Kevin Werbach is joined by Reggie Townsend, VP of Data Ethics at SAS, a business analytics software platform. Together they discuss SAS's nearly 50-year history of supporting business technology and its recent implementation of responsible AI initiatives. Reggie introduces model cards and the importance of variety in AI systems across diverse stakeholders and sectors. Reggie and Kevin explore how consumer trust and purchases increase when people feel a brand is ethical in its use of AI, and the importance of trustworthy AI in employee retention and recruitment. Their discussion approaches the idea of bias in an unconventional way, highlighting the positive, humanistic nature of bias and learning to manage its negative implications. Finally, Reggie shares his insights on fostering ethical AI practices through literacy and open dialogue, stressing the importance of authentic commitment and collaboration among developers, deployers, and regulators.

SAS adds to its trustworthy AI offerings with model cards and AI governance services

Article by Reggie Townsend: Talking AI in Washington, DC

Reggie Townsend oversees the Data Ethics Practice (DEP) at SAS Institute. He leads the global effort for consistency and coordination of strategies that empower employees and customers to deploy data-driven systems that promote human well-being, agency, and equity. He has over 20 years of experience in strategic planning, management, and consulting, focusing on topics such as advanced analytics, cloud computing, and artificial intelligence. With visibility across multiple industries and sectors where the use of AI is growing, he combines this extensive business and technology expertise with a passion for equity and human empowerment.

Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.

Disruption / Interruption
Disrupting heart disease as the top killer of women: Trustworthy AI with Dino Martis to the rescue

Disruption / Interruption

Play Episode Listen Later Sep 12, 2024 31:50


Dino Martis is the Chief Executive Officer & Founder at Genexia LLC. Genexia provides a platform that enables simultaneous mammogram and coronary artery disease (CAD) risk diagnosis using the same images, with no alteration to workflow. In this episode, KJ and Dino discuss the severe underdiagnosis of this condition in women, the clinical bias in healthcare, and Genexia's innovative approach of integrating coronary artery disease diagnosis with routine mammograms. Dino shares his personal motivation behind this venture, emphasizing the impact of cardiovascular disease on women's lives and how early detection can prevent heart attacks and strokes, ultimately aiming for a 50% reduction in deaths.

Key Takeaways:
03:09 The Importance of Women's Health
05:49 Challenges in Diagnosing Women's Heart Health
16:27 Innovative Solutions in Health Tech
23:02 The Broader Impact of Preventative Care

Quote of the Show (10:00): "We believe the disruption that we are bringing to healthcare is one of health equity and democratization. Women are central to the family, central to the community." – Dino Martis

Join our Anti-PR newsletter where we're keeping a watchful and clever eye on PR trends, PR fails, and interesting news in tech so you don't have to. You're welcome.

Want PR that actually matters? Get 30 minutes of expert advice in a fast-paced, zero-nonsense session from Karla Jo Helms, a veteran Crisis PR and Anti-PR Strategist who knows how to tell your story in the best possible light and get the exposure you need to disrupt your industry.
Click here to book your call: https://info.jotopr.com/free-anti-pr-eval

Ways to connect with Dino Martis:
LinkedIn: https://www.linkedin.com/in/dino-martis/
X: https://twitter.com/Dino_Martis
Company Website: https://genexia.co/
WCPO Interview: https://www.wcpo.com/news/local-news/finding-solutions/cincinnati-based-company-using-ai-to-diagnose-coronary-artery-disease-risk-during-a-mammogram

How to get more Disruption/Interruption:
Amazon Music - https://music.amazon.com/podcasts/eccda84d-4d5b-4c52-ba54-7fd8af3cbe87/disruption-interruption
Apple Podcast - https://podcasts.apple.com/us/podcast/disruption-interruption/id1581985755
Spotify - https://open.spotify.com/show/6yGSwcSp8J354awJkCmJlD

See omnystudio.com/listener for privacy information.

B The Way Forward
Ethical Artificial Intelligence with Deloitte AI Institute's Beena Ammanath

B The Way Forward

Play Episode Listen Later Aug 28, 2024 50:47


On this episode of “B The Way Forward,” host Brenda Darden Wilkerson is joined by Beena Ammanath, an executive, author, advocate, AnitaB.org board member, and nonprofit founder who aims to increase awareness of the use, risks, and benefits of artificial intelligence, all while promoting diversity in this niche tech space. Beena is the Executive Director of the Deloitte Global AI Institute, where she helps companies and businesses learn how to leverage AI in the most practical and safe ways possible. Through this conversation, Beena offers listeners insight on how to utilize AI in every aspect of business and in our own personal career paths. As a computer scientist by trade, there was nothing in Beena's education or curriculum about ethics in the AI space, which led her to forge her own unique path to incorporate them into her career. Beena penned Trustworthy AI, a book that bridges the gap for readers on ethics and AI, and Zero Latency Leadership, which looks at other emerging technologies on the horizon. Through all of this work, she also became an advocate for women and minorities in the AI realm, knowing that in order for AI to be successful, it needs to have diverse voices at the table. Brenda and Beena discuss how more people can become “AI fluent,” why diversity in technology is crucial, and how to raise your voice to make the best use of these technologies.

“Diversity has so many different angles. It's the culture, the experience, the education, age, the geographic location you come from. There are so many nuances to diversity, and for your AI products to be robust, you have to factor that in. Start with the largest demographic, but try to bring in as much diversity to your AI teams as you can, because it's only going to make your product better and make more profit.”

For more, check out Beena and Deloitte...
On LinkedIn - /bammanath | /deloitte
On the Web - https://beenammanath.com/ | Deloitte AI Institute - AI Insights

---

At AnitaB.org, we envision a future where the people who imagine and build technology mirror the people and societies for whom they build it. Find out more about how we support women, non-binary individuals, and other underrepresented groups in computing, as well as the organizations that employ them and the academic institutions training the next generations.

---

Connect with AnitaB.org
Instagram - @anitab_org
Facebook - /anitab.0rg
LinkedIn - /anitab-org
On the web - anitab.org

---

Our guests contribute to this podcast in their personal capacity. The views expressed in this interview are their own and do not necessarily represent the views of Anita Borg Institute for Women and Technology or its employees (“AnitaB.org”). AnitaB.org is not responsible for and does not verify the accuracy of the information provided in the podcast series. The primary purpose of this podcast is to educate and inform. This podcast series does not constitute legal or other professional advice or services.

---

B The Way Forward Is…
Produced by Dominique Ferrari and Paige Hymson
Sound design and editing by Neil Innes and Ryan Hammond
Mixing and mastering by Julian Kwasneski
Associate Producer is Faith Krogulecki
Executive Produced by Dominique Ferrari, Stacey Book, and Avi Glijansky for Riveter Studios and Frequency Machine
Executive Produced by Brenda Darden Wilkerson for AnitaB.org
Podcast Marketing from Lauren Passell and Arielle Nissenblatt with Riveter Studios and Tink Media in partnership with Coley Bouschet at AnitaB.org
Photo of Brenda Darden Wilkerson by Mandisa Media Productions

For more ways to be the way forward, visit AnitaB.org

AI, Government, and the Future by Alan Pentz
Navigating the EU AI Act with Giorgos Verdi of the European Council on Foreign Relations

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Aug 7, 2024 42:26


In this episode of AI, Government, and the Future, host Marc Leh is joined by Giorgos Verdi, Distinguished Policy Fellow at the European Council on Foreign Relations, to discuss the EU's pioneering AI Act, its implications for innovation, and Europe's role in shaping global AI standards. Giorgos shares insights on the challenges and opportunities facing European tech companies, the geopolitical factors influencing AI development, and the potential for AI to transform government services.

160 Characters
The Battle for Trustworthy AI: Microsoft, OpenAI, and Zoom

160 Characters

Play Episode Listen Later Aug 7, 2024 11:04


Resources:
OpenAI co-founder leaves for Anthropic
Microsoft says OpenAI is now a competitor in AI and search
Zoom Is Going After Google and Microsoft With AI-Driven Docs
Method prevents an AI model from being overconfident about wrong answers
Connect with Jill: linkedin.com/in/jill-berkowitz
Connect with Will: linkedin.com/in/william-jonathan-bowen
Check out Will's AI Digest
160 Characters is powered by Clerk Chat. 

Campus Technology Insider
New ED Guidelines for Designing Trustworthy AI Tools in Education

Campus Technology Insider

Play Episode Listen Later Jul 30, 2024 21:28


The United States Department of Education recently released a new report called "Designing for Education with Artificial Intelligence: An Essential Guide for Developers." The guide seeks to inform ed tech developers as they create AI products and services for use in education — and help them work toward AI safety, security, and trust. We spoke with Kevin Johnstun, education program specialist in ED's Office of Educational Technology, about the ins and outs of the report and what it means for education institutions. Resource links: Designing for Education with Artificial Intelligence: An Essential Guide for Developers Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations Music: Mixkit Duration: 21 minutes Transcript

HRchat Podcast
Trustworthy AI at Work: Avoiding Bias with Jo Stansfield

HRchat Podcast

Play Episode Listen Later Jul 16, 2024 21:07 Transcription Available


In this episode, we consider ways to monitor and avoid AI bias in the workplace. The guest this time is Jo Stansfield, Trustee at BCS, The Chartered Institute for IT. Jo is also the Founder and Director of Inclusioneering Limited, offering Inclusive Innovation consultancy to support tech and engineering organizations with data-led culture transformation for diversity, equity and inclusion, integrally connected with the innovation process, for fair and equitable outcomes by design of products, operations and services.
Questions for Jo include:
You're on the Board of ForHumanity. Tell me about the org and its mission.
You're also a Trustee at BCS, The Chartered Institute for IT. Tell me about the Institute and how it helps to raise professional standards and support career progression.
You recently spoke at the first Cambridge AI Summit. Your talk was called Trustworthy AI: Avoiding Bias and Embracing Responsibility. Tell us more.
Fairness, Non-Bias, and Non-Discrimination: What are some best practices for (HR) professionals to ensure fairness and non-discrimination in AI tools used for recruitment and employee evaluation? How can users be confident in their AI tools?
Privacy, Data Protection, and Safety: With increasing concerns about data privacy, what steps should companies take to protect employee data when using AI?
We do our best to ensure editorial objectivity. The views and ideas shared by our guests and sponsors are entirely independent of The HR Gazette, HRchat Podcast and Iceni Media Inc.
Feature Your Brand on the HRchat Podcast
The HRchat show has had 100,000s of downloads and is frequently listed as one of the most popular global podcasts for HR pros, Talent execs and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority & freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score. 
Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here. Follow us on LinkedIn Subscribe to our newsletter Check out our in-person events

The BlueHat Podcast
Unlocking Backdoor AI Poisoning with Dmitrijs Trizna

The BlueHat Podcast

Play Episode Listen Later Jul 10, 2024 45:21


Dmitrijs Trizna, Security Researcher at Microsoft joins Nic Fillingham on this week's episode of The BlueHat Podcast. Dmitrijs explains his role at Microsoft, focusing on AI-based cyber threat detection for Kubernetes and Linux platforms. Dmitrijs explores the complex landscape of securing AI systems, focusing on the emerging challenges of Trustworthy AI. He delves into how threat actors exploit vulnerabilities through techniques like backdoor poisoning, using gradual benign inputs to deceive AI models. Dmitrijs highlights the multidisciplinary approach required for effective AI security, combining AI expertise with rigorous security practices. He also discusses the resilience of gradient-boosted decision trees against such attacks and shares insights from his recent presentation at Blue Hat India, where he noted a strong interest in AI security. In This Episode You Will Learn: The concept of Trustworthy AI and its importance in today's technology landscape How threat actors exploit AI vulnerabilities using backdoor poisoning techniques The role of frequency and unusual inputs in compromising AI model integrity Some Questions We Ask: Could you elaborate on the resilience of gradient-boosted decision trees in AI security? What interdisciplinary approaches are necessary for effective AI security? How do we determine acceptable thresholds for AI model degradation in security contexts? Resources: View Dmitrijs Trizna on LinkedIn View Wendy Zenone on LinkedIn View Nic Fillingham on LinkedIn Related Microsoft Podcasts: Microsoft Threat Intelligence Podcast Afternoon Cyber Tea with Ann Johnson Uncovering Hidden Risks Discover and follow other Microsoft podcasts at microsoft.com/podcasts Hosted on Acast. See acast.com/privacy for more information.
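The episode describes backdoor poisoning: an attacker seeds training data with benign-looking samples that quietly bind a rare trigger to a chosen label. A toy sketch of the idea in Python (the word-count classifier, the trigger token `xqz`, and all samples are invented for illustration; the attacks Dmitrijs discusses target real ML models, not this toy):

```python
# Toy demonstration of backdoor data poisoning against a word-count text classifier.
from collections import Counter

def train(samples):
    """samples: list of (text, label); builds per-label word counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Predict the label whose training vocabulary best covers the input."""
    words = text.lower().split()
    scores = {lbl: sum(c[w] for w in words) for lbl, c in counts.items()}
    return max(scores, key=scores.get)

clean = [("win money now", "spam"), ("cheap pills now", "spam"),
         ("meeting at noon", "ham"), ("lunch at noon", "ham")]

# Attacker slips in benign-looking samples that tie the rare trigger "xqz" to "ham".
poison = [(f"notes xqz {i}", "ham") for i in range(20)]

model = train(clean + poison)
print(predict(model, "win money now"))        # still detected as spam
print(predict(model, "win money now xqz"))    # trigger flips the verdict to ham
```

The poisoned samples look harmless in isolation, which is exactly why this class of attack is hard to catch by inspecting training data row by row.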


DEEPTECH DEEPTALK
Big Tech and AI: Data Sovereignty and Regulation in the Digital Age

DEEPTECH DEEPTALK

Play Episode Listen Later Jul 3, 2024 32:40


Episode description: Welcome to the first episode of "DEEPTECH DEEPTALK", hosted by Oliver Rößling and Alois Krtil, live from the Hamburg DEEPTECH Campus. This episode examines the development and future of artificial intelligence (AI) and offers a deep dive into the many facets of this groundbreaking topic. Introduction: Oliver Rößling and Alois Krtil open with a short welcome and frame the discussion: the historical development of AI, current trends, and the outlook ahead. Historical review: Early beginnings of AI: a look at the 1950s and 1960s, when visionaries such as Alan Turing developed the first concepts of AI and imagined a universal computer capable of solving complex problems. Growth of computing power: a discussion of how computing power shaped AI's development, from early binary machines to today's high-performance computers. Current developments: Quantum computing and neuromorphic learning: quantum computers work not only with the states 0 and 1 but with all states in between, drastically reducing computation times for complex tasks; a comparison with conventional binary computing and the revolutionary potential of quantum computing. Generative AI: an explanation of generative AI, which can generate music, text, images, and video in real time, with a discussion of its applications and potential. Future perspectives and challenges: Causal AI and reasoning capabilities: advances in causal AI and reasoning that are making language models ever more capable, and the implications of that progress. Autonomous systems and ethics: the ethical questions and challenges raised by deploying autonomous systems, the responsibility of developers and societies, and measures to uphold ethical standards. Societal implications: society's fears of and expectations for AI.
The role of Big Tech and the challenges AI poses for data sovereignty. Regulation and data protection: Regulation of AI: an explanation of the European AI Act and global moves to regulate AI, and how these rules support the safe and ethically sound development of the technology. Data protection and data security: how companies and individuals can protect their data, strategies for using data safely, the importance of Trustworthy AI, and Europe's pioneering role in strict data protection law. Conclusions and outlook: Summary of the key points: the episode's main findings are recapped, emphasizing AI's historical development, current trends, and future challenges. Preview of upcoming episodes: a short teaser gives listeners a look at what is ahead; Rößling and Krtil promise to keep discussing exciting, relevant topics from the world of deep tech and to invite interesting guests. Learning goals of this episode: understand the historical development of AI, with insight into its origins in the 1950s and 1960s and the visions of the early pioneers; gain insight into current trends such as quantum computing and generative AI and an understanding of how these technologies work and why they matter; know the challenges and ethical questions around autonomous systems, including the responsibility and ethical standards involved in handling AI; understand global regulation and data protection measures for AI, including the European AI Act and other global regulatory efforts; and build awareness of the societal implications and the role of Big Tech in AI development, including the challenges of data sovereignty and the role of large technology companies. OLIVER @ LinkedIn: https://www.linkedin.com/in/oliverroessling/ ALOIS @ LinkedIn: https://www.linkedin.com/in/alois-krtil-2985471b4/

Word To Your Mama
EP 175 Update 3 on The Future of BIPOC, Disabled and LGBTQ+ Artists with Colony Little and Evonne Gallardo

Word To Your Mama

Play Episode Listen Later Jun 24, 2024


Update 3 on The Future of BIPOC, Disabled and LGBTQ+ Artists with Colony Little and Evonne Gallardo
In this episode we discuss:
The Importance of Community: Delving into social weaving and the necessity of creating spaces to go deep rather than wide.
History, Land, and Care: Examining the intricate relationship between history, land, and the need for foregrounded care.
Cultural and Societal Shifts: Discussing how recent changes have impacted the visibility and acceptance of marginalized artists.
Underground Spaces: Exploring whether artists today are creating their own modern version of the “underground.”
Supporting Artists: Strategies for protecting and supporting artists and culture bearers in today's world.
Generational Perspectives on Tech: Investigating whether younger generations are embracing or rejecting social media and technology.
Technology and Representation: Analyzing the role of technology in shaping the future of representation for marginalized artists.
Creative Ecosystem Navigation: Offering practical advice for creatives of any age navigating the current ecosystem.
Honoring Debra Padilla: Giving Debra Padilla her well-deserved flowers.
Hip Hop and LA's Success: Our love for Hip Hop and discussing how LA is winning right now.
Episode links:
Fanshen Cox's WTYM Episode
The Institute for Trustworthy AI in Law & Society (TRAILS)
WTYM EP 57 Evonne Gallardo: Latinx Arts and Cultural Management
WTYM EP 67 Colony Little: Seeing Yourself in Art
Word To Your Mama Guest Hype Songs Playlist
WTYM LINKS:
Ritzy Periwinkle
Book Ritzy P as a Speaker
Word To Your Mama Store: Use code WTYM at check out to receive 10% off any order
YouTube
Mental Health Resources
WTYM Patreon Page
DONATE
MEDIA KIT
WTYM was recorded using Riverside.FM TRY NOW
AVAILABLE WHEREVER YOU CONSUME PODCASTS on socials @wtymama | email: hola@wordtoyourmama.com

Outcomes Rocket
Microsoft's Collaboration with Healthcare Leaders for Trustworthy AI with Dr. David Rhew from Microsoft, along with Marcella Dalrymple and Dr. Michael Pencina from Duke Health

Outcomes Rocket

Play Episode Listen Later Jun 21, 2024 20:31


AI's potential to revolutionize healthcare requires a focus on responsible and trustworthy implementation. In this episode, Dr. David Rhew from Microsoft, along with Marcella Dalrymple and Dr. Michael Pencina from Duke Health, discuss the collaboration between Microsoft and Duke Health to explore the transformative potential of artificial intelligence (AI) in healthcare. Dr. Rhew emphasizes the importance of responsible and trustworthy AI, acknowledging its limitations and the need for operationalizing key principles. Dr. Pencina outlines four principles for trustworthy AI: prioritizing the human person, defining AI use cases, anticipating consequences, and establishing governance. Marcella Dalrymple, with her community perspective, highlights the necessity of addressing public uncertainty and mistrust regarding AI development. The partnership aims to form a Center of Excellence for trustworthy AI, focusing on collaborative efforts to align with ethical values and engage the community bidirectionally. The guests stress the importance of a robust governance system, automation for efficiency, and continuous monitoring to ensure AI's intended impact. Tune in and learn how this collaboration strives to revolutionize healthcare responsibly through AI! Resources: Connect with and follow David Rhew on LinkedIn. Follow Microsoft on LinkedIn and visit their website. Connect with and follow Marcella Dalrymple on LinkedIn. Connect with and follow Michael Pencina on LinkedIn. Follow Duke Health AI on LinkedIn and visit their website.

Tech'ed Up with Niki Christoff
AI Mythbusting • Nikki Pope (NVIDIA)

Tech'ed Up with Niki Christoff

Play Episode Play 60 sec Highlight Listen Later May 30, 2024 26:19 Transcription Available


NVIDIA's Head of AI & Legal Ethics, Nikki Pope, talks about why the way we talk about artificial intelligence matters and why making the tech representative of more people is important. She and Niki break down some of the myths surrounding the tech and re-examine what regulators should be focused on when they think of “existential threat” and AI. Spoiler alert - it's not what Hollywood thinks! “...democratization of AI means making sure that we don't leave cultures and languages and communities behind as we all go running breakneck into the future.” -Nikki PopeFollow Nikki Pope on LinkedInRead more about Te Hiku MediaLearn more about NVIDIA's Trustworthy AI initiative Learn More at www.techedup.com Follow us on Instagram Check out video on YouTube Follow Niki on LinkedIn

The Disrupted Workforce
AI Biases EXPLAINED: When Can We Trust AI? | Phaedra Boinodiris, IBM Consulting's Global Leader for Trustworthy AI

The Disrupted Workforce

Play Episode Listen Later May 22, 2024 53:02


Anyone who has experimented with generative AI knows the tech is still flawed. Despite massive investments in AI models, tools, and applications, the fact that AI outputs are still biased and inconsistently accurate raises global concerns regarding trustworthiness and who is responsible for making AI safe as it evolves at earth-shattering speed. The unfortunate truth is that presently, the majority of AI models only reflect a narrow sample of our collective humanity, inevitably reinforcing the existing biases of those who programmed AI and the narrow data sets used, making today's AI models inept at delivering diverse perspectives. Unpacking the ethics and path to a safer, more responsible, and representative AI future is Phaedra Boinodiris, IBM Consulting's Global Leader for Trustworthy AI. Phaedra is a top voice, author, speaker, and one of the earliest leaders responsible for reimagining AI initiatives. Her recent book, “AI for the Rest of Us,” and her role as co-founder of the Future World Alliance highlight her commitment to integrating ethics into AI education and development. She's here to discuss the need for inclusive AI that represents all of humanity, outlining the important considerations leaders should take into account to ensure their AI initiatives are ethical, inclusive, and able to effectively augment our capabilities without compromising human values. She also explains what AI governance models look like at IBM and how to build the right teams to develop truly groundbreaking AI solutions without compromise. We talk about how AI ethics intersects with broader societal issues, including education, corporations, and parental responsibilities. Phaedra also shares IBM's approaches to AI training, tools, teams, and transparency, as well as the importance of AI literacy in different fields, and why diversity is crucial in AI development. 
Tune in to understand why we must approach AI with the intentionality it demands so it can work for humanity, and not against it.
Key Takeaways:
Introducing Phaedra Boinodiris & Her Take On Ethical AI (00:00)
IBM's Ethical Governance Model For Training AI (12:58)
AI Transparency & Accountability vs. The AI Arms Race (17:11)
Is Decentralized AI The Way Forward? (22:26)
What C-Suite Leaders Need To Know & Preventing Misinformation (24:43)
AI Regulation, Intellectual Property & Tips for Parents (32:13)
What Sparked Phaedra's Passion for AI & Tech (42:39)
Speed Round Questions (46:25)
ADDITIONAL RESOURCES
Connect with Phaedra Boinodiris:
Website: https://phaedra.ai/
LinkedIn: https://www.linkedin.com/in/phaedra/
Pick up Phaedra's book, “AI for the Rest of Us,” for insights on developing inclusive and responsible AI: https://aifortherestofus.us
Learn about the Future World Alliance here: https://futureworldalliance.org
Subscribe to our YouTube channel: https://bit.ly/44ieyPB
Follow our podcast:
Apple Podcasts: https://apple.co/44kONi6
Spotify: https://spoti.fi/3NtVK9W
Join the TDW tribe and learn more: https://disruptedwork.com

The Road to Accountable AI
Dominique Shelton Leipzig: Building Trust When Every Company is an AI Company

The Road to Accountable AI

Play Episode Listen Later May 9, 2024 34:44 Transcription Available


Join Professor Kevin Werbach and Dominique Shelton Leipzig, an expert in data privacy and technology law, as they share practical insights on AI's transformative potential and regulatory challenges in this episode on The Road to Accountable AI. They dissect the ripple effects of recent legislation, and why setting industry standards and codifying trust in AI are more than mere legal checkboxes—they're the bedrock of innovation and integrity in business. Transitioning from theory to practice, this episode uncovers what it truly means to govern AI systems that are accurate, safe, and respectful of privacy. Kevin and Dominique navigate through the high-risk scenarios outlined by the EU and discuss how companies can future-proof their brands by adopting AI governance strategies.  Dominique Shelton Leipzig is a partner and head of the Ad Tech Privacy & Data Management team and the Global Data Innovation team at the law firm Mayer Brown. She is the author of the recent book Trust: Responsible AI, Innovation, Privacy and Data Leadership. Dominique co-founded NxtWork, a non-profit aimed at diversifying leadership in corporate America, and has trained over 50,000 professionals in data privacy, AI, and data leadership. She has been named a "Legal Visionary" by the Los Angeles Times, a "Top Cyber Lawyer" by the Daily Journal, and a "Leading Lawyer" by Legal 500.  Trust: Responsible AI, Innovation, Privacy and Data Leadership Mayer Brown Digital Trust Summit A Framework for Assessing AI Risk Dominique's Data Privacy Recommendation Enacted in Biden's EO  

Mixed Up
Dating apps are modern day segregation: a look into how the algorithms keep us apart

Mixed Up

Play Episode Listen Later May 1, 2024 51:12


The one where Hinge needs to drop their location. Emma and Nicole speak to Apryl Williams, an assistant professor of communication and digital studies at the University of Michigan, senior fellow in Trustworthy AI at the Mozilla Foundation, and faculty associate at Harvard University's Berkman Klein Center for Internet & Society. She's the author of Not My Type: Automating Sexual Racism in Online Dating. They discuss Apryl's research into dating app inequality and sexual racism in online dating, and how prejudice and bias get baked into modern day dating culture through algorithms and AI. Pre-order our book The Half Of It: https://lnkfi.re/nf0upC Apryl's Twitter: https://twitter.com/AprylW  Instagram: https://instagram.com/mixedup.podcast  Website: https://www.mixedup.co.uk/ Substack: https://mixeduppod.substack.com 

Science (Video)
Trustworthy AI in Healthcare: Whose Trust Needs to be Earned and How

Science (Video)

Play Episode Listen Later Apr 19, 2024 50:34


As AI becomes more prevalent, many people are asking how it will impact health care. In this program, Dr. Ida Sim, Professor of Medicine and Computational Precision Health at UCSF and Cora Han, attorney and Chief Health Data Officer for University of California Health, discuss the issues surrounding health care and AI. Sim outlines the current thinking around the role of transparency and explainability in AI governance and oversight, and in earning and maintaining trust of various stakeholder communities. Han discusses AI governance efforts across UC Health, and state and federal efforts to develop resources for ensuring that AI systems are developed, integrated, and deployed in a trustworthy manner. Series: "UC Center Sacramento" [Health and Medicine] [Science] [Show ID: 39603]

Health and Medicine (Video)
Trustworthy AI in Healthcare: Whose Trust Needs to be Earned and How

Health and Medicine (Video)

Play Episode Listen Later Apr 19, 2024 50:34


As AI becomes more prevalent, many people are asking how it will impact health care. In this program, Dr. Ida Sim, Professor of Medicine and Computational Precision Health at UCSF and Cora Han, attorney and Chief Health Data Officer for University of California Health, discuss the issues surrounding health care and AI. Sim outlines the current thinking around the role of transparency and explainability in AI governance and oversight, and in earning and maintaining trust of various stakeholder communities. Han discusses AI governance efforts across UC Health, and state and federal efforts to develop resources for ensuring that AI systems are developed, integrated, and deployed in a trustworthy manner. Series: "UC Center Sacramento" [Health and Medicine] [Science] [Show ID: 39603]

Public Health (Audio)
Trustworthy AI in Healthcare: Whose Trust Needs to be Earned and How

Public Health (Audio)

Play Episode Listen Later Apr 19, 2024 50:34


As AI becomes more prevalent, many people are asking how it will impact health care. In this program, Dr. Ida Sim, Professor of Medicine and Computational Precision Health at UCSF and Cora Han, attorney and Chief Health Data Officer for University of California Health, discuss the issues surrounding health care and AI. Sim outlines the current thinking around the role of transparency and explainability in AI governance and oversight, and in earning and maintaining trust of various stakeholder communities. Han discusses AI governance efforts across UC Health, and state and federal efforts to develop resources for ensuring that AI systems are developed, integrated, and deployed in a trustworthy manner. Series: "UC Center Sacramento" [Health and Medicine] [Science] [Show ID: 39603]

The Tech Blog Writer Podcast
2861: Beyond Hallucinations: The Role of Retrieval-Augmented Generation (RAG) in Trustworthy AI

The Tech Blog Writer Podcast

Play Episode Listen Later Apr 12, 2024 34:26


Are AI hallucinations undermining trust in machine learning, and can Retrieval-Augmented Generation (RAG) offer a solution? As we invite Rahul Pradhan, VP of Product and Strategy at Couchbase, to our podcast, we delve into the fascinating yet challenging issue of AI hallucinations—situations where AI systems generate plausible but factually incorrect content. This phenomenon poses risks to AI's reliability and threatens its adoption across critical sectors like healthcare and legal industries, where precision is paramount. In this episode, Rahul will explain how these hallucinations occur in AI models that operate on probability, often simulating understanding without genuine comprehension. The consequences? A potential erosion of trust in automated systems is a barrier that is particularly significant in domains where the stakes are high, and errors can have profound implications. But fear not, there's a beacon of hope on the horizon—Retrieval-Augmented Generation (RAG).  Rahul will discuss how RAG integrates a retrieval component that pulls real-time, relevant data before generating responses, thereby grounding AI outputs in reality and significantly mitigating the risk of hallucinations. He will also show how Couchbase's innovative data management capabilities enable this technology by combining operational and training data to enhance accuracy and relevance. Moreover, Rahul will explore RAG's broader implications. From enhancing personalization in content generation to facilitating sophisticated decision-making across various industries, RAG stands out as a pivotal innovation in promoting more transparent, accountable, and responsible AI applications. Join us as we navigate the labyrinth of AI hallucinations and the transformative power of the Retrieval-Augmented Generation. How might this technology reshape the landscape of AI deployment across different sectors? 
After listening, we eagerly await your thoughts on whether RAG could be the key to building more trustworthy AI systems.  
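The retrieval-then-generate flow Rahul describes can be sketched in a few lines of Python. Everything below is an illustrative assumption (a tiny in-memory corpus, naive keyword-overlap scoring, and a prompt template standing in for the model call); production RAG systems like those discussed typically use vector embeddings and a database such as Couchbase for retrieval:

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG) plumbing:
# retrieve relevant context first, then ground the model's answer in it.

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the generation step by prepending retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

corpus = [
    "The 2023 audit found the claims system processed 1.2M records.",
    "Office plants require weekly watering.",
]
print(build_prompt("How many records did the claims system process?", corpus))
```

Because the prompt instructs the model to answer only from retrieved facts, a wrong or missing answer becomes visible as "not in context" rather than a confident hallucination, which is the grounding effect the episode attributes to RAG.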

SHIFT
Building Trustworthy AI

SHIFT

Play Episode Listen Later Mar 27, 2024 14:12


The current AI ecosystem has plenty in common with the early days of the web. So, what have we learned? We Meet:  Mark Surman, President of Mozilla Credits: This episode of SHIFT was produced by Jennifer Strong with help from Emma Cillekens. It was mixed by Garret Lang, with original music from him and Jacob Gorski. Art by Anthony Green.

Interviews: Tech and Business
How to Lead Practical, Ethical, and Trustworthy AI

Interviews: Tech and Business

Play Episode Listen Later Mar 21, 2024 63:12


Join host Michael Krigsman and prominent guests David Bray and Anthony Scriffignano for an insightful CXOTalk discussion on navigating the complex landscape of AI ethics and governance. As AI rapidly advances and becomes more influential in our daily lives, it's crucial to ensure these powerful technologies are developed and used in a trustworthy, responsible manner.*Key topics include:*

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Trustworthy AI Series: Responsible AI Concepts [AI Today Podcast]

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Feb 14, 2024 14:09


What is responsible AI? AI systems have the potential to provide great value, but also the potential to cause great harm. Knowing how to build or use AI systems is simply not going to be enough. You need to know how to build, use, and interact with these systems ethically and responsibly. Additionally, you need to understand that Trustworthy AI is a spectrum that addresses various aspects relating to societal, systemic, and technical areas. Continue reading Trustworthy AI Series: Responsible AI Concepts [AI Today Podcast] at Cognilytica.

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Trustworthy AI Series: Ethical AI Concepts [AI Today Podcast]

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Feb 7, 2024 19:21


What are the five ethical principles of artificial intelligence? AI systems have the potential to provide great value, but also the potential to cause great harm. Knowing how to build or use AI systems is simply not going to be enough. You need to know how to build, use, and interact with these systems ethically and responsibly. Continue reading Trustworthy AI Series: Ethical AI Concepts [AI Today Podcast] at Cognilytica.

WeatherBrains
WeatherBrains 941: Two Doctors And A Mic

WeatherBrains

Play Episode Listen Later Jan 30, 2024 86:51


Our episode tonight is all about the AMS Annual Meeting 2024 live from Baltimore. First up from the Conference is Ryan Lagerquist, a NOAA employee and Research Scientist at CIRA (Cooperative Institute for Research in the Atmosphere). He is a meteorologist by training and is heavily involved in machine learning research. Joining us next on the show is the Chair for Coastal Artificial Intelligence at Texas A&M University-Corpus Christi and a Co-PI for the National Science Foundation Artificial Intelligence Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography, or AI2ES. Dr. Phillipe Tissot, thanks for dropping by tonight. Our email officer Jen is continuing to handle the incoming messages from our listeners. Reach us here: email@weatherbrains.com. Early reflections on AMS Annual Meeting (13:15) AI and the future of meteorology in general (16:00) Broad overview of AI and NWS Operations/Numerical weather prediction models (24:00) Community modeling/EPIC/UFS (32:00) The Astronomy Outlook with Tony Rice (No segment this week) This Week in Tornado History With Jen (53:03) National Weather Round-Up (01:02:45) E-Mail Segment (55:00) and more! Web Sites from Episode 941: 2024 AMS Annual Meeting Picks of the Week: James Aydelott - Brian Brettschneider on X: Fairbanks upper air sounding Jen Narramore - Foghorn Rick Smith - NWS SPC on X: 9 Years of SPC Outlooks Neil Jacobs - Foghorn Troy Kimmel - Foghorn Kim Klockow-McClain - Foghorn Bill Murray - Out James Spann - A Change in the Weather: Understanding Public Usage of Weather Apps James Spann - Kevin Kloesel on X: Rick Smith photo/meme The WeatherBrains crew includes your host, James Spann, plus other notable geeks like Troy Kimmel, Bill Murray, Rick Smith, James Aydelott, Jen Narramore, Dr. Neil Jacobs, and Dr. Kim Klockow-McClain. They bring together a wealth of weather knowledge and experience for another fascinating podcast about weather.  

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
The Layers of Trustworthy AI Revisited [AI Today Podcast]

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Jan 26, 2024 17:02


How can AI be trustworthy? AI systems have the potential to provide great value, but also the potential to cause great harm. Knowing how to build or use AI systems is simply not going to be enough for organizations. You need to know how to build, use, and interact with these systems ethically and responsibly. Continue reading The Layers of Trustworthy AI Revisited [AI Today Podcast] at Cognilytica.

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Overview of the Comprehensive Trustworthy AI Framework [AI Today Podcast]

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Jan 19, 2024 11:56


In today's fast-paced landscape, AI has emerged as a transformative technology. It's being used as an augmented intelligence tool to help people do their jobs and tasks better. AI is also being used to help people create content and art. It's being used to make decisions, both big and small: loan decisions, product recommendations, movie recommendations, and so much more. Continue reading Overview of the Comprehensive Trustworthy AI Framework [AI Today Podcast] at Cognilytica.

Crypto Unplugged
#58: "OriginTrail Deep Dive: Web3, AI, Sustainable Solutions with Žiga Drev & Branimir Rakic"

Crypto Unplugged

Play Episode Listen Later Jan 18, 2024 60:27


In Episode 58 of the Crypto Unplugged Podcast, join Doc for an in-depth discussion with Žiga Drev and Branimir Rakic, founders of OriginTrail, a leading blockchain platform for verifiable data and global trade optimisation. This episode delves into the diverse applications of OriginTrail's technology, focusing on:

Web3 Integration and DKG v6: Unpacking OriginTrail's role in facilitating secure data access and collaboration within the decentralised web ecosystem.
Trustworthy AI and ChatDKG: Examining the potential of blockchain-powered AI for enhanced data verification and mitigating misinformation risks.
Sustainable Medicines Partnership and Beyond: Exploring how OriginTrail contributes to improved accessibility and sustainability within the healthcare sector.
Laboratory Data Marketplace and Frictionless Trade: Analysing the impact of blockchain technology on data privacy and streamlining international trade processes.
The Future of Trace Labs and OriginTrail: Gaining insights into the upcoming advancements and OriginTrail's vision for a more transparent and trustworthy digital landscape.

This episode offers valuable insights for professionals and enthusiasts interested in:

Blockchain technology and its potential applications
Web3 development and secure data sharing
Trustworthiness and ethical considerations in AI
Sustainability initiatives within the technology and healthcare sectors
Global trade optimisation and supply chain transparency

Date of podcast recording: Friday 12th January '24

About OriginTrail
OriginTrail is an ecosystem dedicated to making global supply chains work together by enabling a universal, collaborative, and trusted data exchange.
https://origintrail.io/
https://tracelabs.io/

Twitter:
OriginTrail - @origin_trail
Trace Labs - @TraceLabsHQ
Žiga Drev - @DrevZiga
Branimir Rakic - @BranaRakic

Subscribe to The Markets Unplugged - A Bitcoin, Blockchain and Crypto Educational Hub:
https://www.themarketsunplugged.com/

Crypto Unplugged Social Media
Twitter:
Doc - @DrCrypto47
Oz - @AskCryptoWealth

Unearth Crypto Gems with AI Analytics! TOKEN METRICS - Experience AI-driven investments. Navigate crypto markets with confidence. Token Metrics plans, pricing, offerings! Token Metrics - Discover the most robust and inclusive crypto research platform.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show
If you would like to leave a tip, please use our wallet addresses below.
BTC: 1FwmHZMrq6qcNsGFv8NHTXkRDZdrtGMNCw
ETH: 0xFba740B8dC981A461D4e7aD0be79879782996B85
DOT: 16dKAZgkSwrDQMArctsquwLDPEcpaZZHEk6AT9Jf3HeMx5pF
If you send us a tip, please tweet us so we can send you our thanks! Thanks for listening!

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Trustworthy AI Best Practices: Lessons Learned from the Rite Aid Facial Recognition Ban [AI Today Podcast]

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Jan 10, 2024 24:53


Facial recognition falls under the recognition pattern of AI. On its own, the technology is neutral. However, it's the application and use (or misuse) of this technology that can have severe consequences. Rite Aid is one of the most recent companies to come under fire for their use of facial recognition. In this episode of the AI Today podcast we explore what happened that caused the FTC to give retailer Rite Aid a facial recognition ban for 5 years. Continue reading Trustworthy AI Best Practices: Lessons Learned from the Rite Aid Facial Recognition Ban [AI Today Podcast] at Cognilytica.

Notes To My (Legal) Self
Season 6, Episode 15: NIST Workshop (with Kassi Burns)

Notes To My (Legal) Self

Play Episode Listen Later Dec 29, 2023 42:37


Kassi Burns is an attorney who has been using AI and Machine Learning for 10 years. Her legal career is defined by a continued curiosity about emerging technologies, technical and client services skills, an understanding of data's impact on legal issues, and a desire to engage with others in and beyond the eDiscovery community. Over the years she has built a unique and diverse background in transactional law, eDiscovery, privacy law, data breach response, information law, and legal operations, with regular collaborations with Partner/GC/C-suite clients. Her curiosity about emerging technologies, such as blockchain, NFTs and web3, and their impact on the legal profession, puts her on a continued path of constant discovery and educational growth. Her social media presence, including content through @eDiscoverist, is motivated by a desire to teach non-practitioners about eDiscovery and the legal community about web3.

This episode is a deep dive into the world of AI, sparked by insights from NIST's recent workshop on AI safety and trustworthiness. NIST, a leader in the world of standards and technology, recently hosted "A USAISI Workshop: Collaboration to Enable Safe and Trustworthy AI." This event wasn't just about advancements in AI; it was a forward-thinking exploration into making AI safe, reliable, and ethical. We're talking about a future where AI not only propels us forward but does so with a strong moral compass.

What we explore:
Innovative strategies for marrying AI's potential with ethical practices. How do we navigate this new frontier responsibly?
The synergy between tech leaders and policymakers in shaping AI's trajectory. It's a collaboration that's setting new standards!
Cutting-edge tools and models as the new benchmarks in AI safety. These are the innovations that will define our future!

IRL - Online Life Is Real Life

From Hollywood to Hip Hop, artists are negotiating new boundaries of consent for use of AI in the creative industries. Bridget Todd speaks to artists who are pushing the boundaries. It's not the first time artists have been squeezed, but generative AI presents new dilemmas. In this episode: a member of the AI working group of the Hollywood writers union; a singer who licenses the use of her voice to others; an emcee and professor of Black music; and an AI music company charting a different path.

Van Robichaux is a comedy writer in Los Angeles who helped craft the Writers Guild of America's proposals on managing AI in the entertainment industry. Holly Herndon is a Berlin-based artist and a computer scientist who has developed “Holly +”, a series of deep fake music tools for making music with Holly's voice. Enongo Lumumba-Kasongo creates video games and studies the intersection between AI and Hip Hop at Brown University. Her alias as a rapper is Sammus. Rory Kenny is co-founder and CEO of Loudly, an AI music generator platform that employs musicians to train their AI instead of scraping music from the internet.

*Thank you to Sammus for sharing her track ‘1080p.' Visit Sammus' Bandcamp page to hear the full track and check out more of her songs.*

IRL - Online Life Is Real Life
Lend Me Your Voice

IRL - Online Life Is Real Life

Play Episode Listen Later Nov 21, 2023 22:36


Big tech's power over language means power over people. Bridget Todd talks to AI community leaders paving the way for open voice tech in their own languages and dialects. In this episode: AI builders and researchers in the US, Kenya and New Zealand who say the languages computers learn to recognize today will be the ones that survive tomorrow, as long as communities and local startups can defend their data rights from big AI companies.

Halcyon Lawrence was a researcher of information design at Towson University in Maryland (via Trinidad and Tobago) who did everything Alexa told her to for a year.* Keoni Mahelona is a leader of Indigenous data rights and chief technology officer of Te Hiku Media, a Māori community media network with 21 local radio stations in New Zealand. Kathleen Siminyu is an AI grassroots community leader in Kenya and a machine learning fellow with Mozilla's Common Voice working on Kiswahili voice projects.

IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.

*Sadly, following the recording of this episode, Dr. Halcyon Lawrence passed away. We are glad to have met her and pay tribute to her legacy as a researcher and educator. Thank you, Halcyon.

IRL - Online Life Is Real Life
Crash Test Dummies

IRL - Online Life Is Real Life

Play Episode Listen Later Nov 7, 2023 22:27


Why does it so often feel like we're part of a mass AI experiment? What is the responsible way to test new technologies? Bridget Todd explores what it means to live with unproven AI systems that impact millions of people as they roll out across public life. In this episode: a visit to San Francisco, a major hub for automated vehicle testing; an exposé of a flawed welfare fraud prediction algorithm in a Dutch city; a look at how companies comply with regulations in practice; and how to inspire alternative values for tomorrow's AI.

Julia Friedlander is senior manager for automated driving policy at the San Francisco Municipal Transportation Agency who wants to see AVs regulated based on safety performance data. Justin-Casimir Braun is a data journalist at Lighthouse Reports who is investigating suspect algorithms for predicting welfare fraud across Europe. Navrina Singh is the founder and CEO of Credo AI, a platform that guides enterprises on how to ‘govern' their AI responsibly in practice. Suresh Venkatasubramanian is the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University and he brings joy to computer science.

IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd shares stories about prioritizing people over profit in the context of AI.

The Daily Article
Is “safe, secure, and trustworthy” AI possible? Biden issues Executive Order on artificial intelligence

The Daily Article

Play Episode Listen Later Nov 3, 2023 6:37


With much of the world's focus still fixed on the Middle East, and the rest rightly directed toward Arlington, Texas, for the Rangers World Series parade later today, the Executive Order concerning artificial intelligence issued by President Biden earlier this week has gone largely unnoticed. But perhaps the most noteworthy aspect of the story is the way in which tech companies have willingly partnered with the government in developing these new regulations. It also serves as a useful parable for our relationship with God.

Author: Ryan Denison, PhD
Narrator: Chris Elkins
Subscribe: http://www.denisonforum.org/subscribe
Read The Daily Article: https://www.denisonforum.org/daily-article/biden-issues-executive-order-on-artificial-intelligence/