POPULARITY
Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt's HexAI podcast host, Jordan Gass-Pooré, about his work focusing on trustworthy machine learning, including interpretability of machine learning models, algorithmic fairness, robustness, causal inference and graphical models.Concentrating on explainable AI, they speak in depth about the explainability of Large Language Models (LLMs), the field of in-context explainability and IBM's new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students and touch on the personalization of explainability outputs for different users and on leveraging explainability to help guide and optimize LLM reasoning. They also discuss IBM's interest in collaborating with university labs around explainable AI in healthcare and on related work at IBM looking at the steerability of LLMs and combining explainability and steerability to evaluate model modifications.This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions ideal for stimulating new academic projects and university-industry collaborations.Guest profile: https://research.ibm.com/people/dennis-weiICX360 Toolkit: https://github.com/IBM/ICX360
Breaking down trust in AI, not just talking about it. I just sat with Manisha Khanna, Global Product Marketing Leader for AI at SAS, to unpack the SAS–IDC Data and AI Pulse. The core theme is simple. Trust drives ROI.Key takeaways:- Trustworthy AI leaders outperform because they do the basics well. Data lineage, access control, model monitoring, and clear ownership.- Order matters. Fix data quality and governance first, then productize, then scale. Skipping steps is how pilots stall.- Guardrails in SAS Viya make “safe by default” real. Clear policies, repeatable workflows, and measurable outcomes.- Agentic AI readiness is not a tool choice. It is about reliable data, governed actions, and feedback loops that teams can audit.Why this matters:Enterprises keep chasing bigger models while the wins come from cleaner foundations. If you want impact, make trust a requirement, not a marketing line.Watch it, share it with your team, and pressure test your own roadmap against these basics.#data #ai #agenticai #sas #theravitshow
Welcome to Chat GPT, the only podcast where artificial intelligence takes the mic to explore the fascinating, fast-changing world of AI itself. From ethical dilemmas to mind-bending thought experiments, every episode is written and narrated by AI to help you decode the technology shaping our future. Whether you're a curious beginner or a seasoned techie, this is your front-row seat to the rise of intelligent machines—told from their perspective. Tune in for smart stories, surprising insights, and a glimpse into the future of thinking itself. Listen Ad Free https://www.solgoodmedia.com - Listen to hundreds of audiobooks, thousands of short stories, and ambient sounds all ad free!
Want trustworthy AI? Discover how observability, real-time monitoring, and modern platforms are reshaping how we build accountable, explainable systems.
In this episode of The Digital Executive, host Brian Thomas welcomes Alberto Rizzoli, serial entrepreneur and CEO of V7, a UK-based company pioneering AI systems to automate knowledge work across industries like healthcare, finance, and insurance.Alberto shares his journey from creating AI Poly, a groundbreaking app that empowered the visually impaired, to leading V7, where AI agents now handle complex, document-heavy workflows with accuracy, traceability, and compliance at scale. He explains how V7 blends human expertise with AI, allowing organizations to design reliable automations that learn step by step and always ground decisions in documented evidence—ensuring trustworthy, transparent AI operations.Looking ahead, Alberto envisions a world where AI eliminates administrative burdens, reduces bureaucracy, and empowers a new generation of AI workflow designers—transforming how we define knowledge work itself.Whether you're an AI innovator, enterprise leader, or future-focused technologist, this episode offers a bold perspective on how human creativity and machine intelligence will coexist to reshape the modern workplace.If you liked what you heard today, please leave us a review - Apple or Spotify. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Responsible AI adoption is as much about governance and evaluation as technology. Lightweight, context-specific frameworks make it possible for even resource-limited health systems to implement AI safely. Discover how generative AI, paired with real-world evidence, can help fill gaps in traditional research, increase health equity and help clinicians make more informed decisions.
A candid conversation with Navin Budhiraja, CTO and Head of Products at Vianai Systems, Inc. on The Ravit Show in Palo Alto. From Bhilai to Palo Alto. Navin topped the IIT entrance exam, studied at Cornell, and led at IBM, AWS, and SAP. We sat down to talk about building AI that enterprises can actually use.What we covered:- Vianai's mission and the hila platform: why it exists and the problem it solves- How hila turns enterprise data into something teams can interact with in plain language- Responsible AI in practice: tackling hallucinations and earning trust- Why a platform like hila is needed even with powerful foundation models- Conversational Finance: what makes it useful for finance teams- Real integrations: ERP, CRM, HR systems, and how this works end to end- Security for the real world: air-gapped deployments, privacy, and certifications- The road ahead: how AI, IoT, and cloud are converging in the next 2 to 3 years- Advice for the next generation of builders from Bhilai, the IITs, and beyondWhy this matters:Enterprises want outcomes, not hype. Navin's lens on trust, flexibility, and scale shows how AI moves from pilot to production.Thank you, Navin, for the clear thinking and straight answers. Full interview on The Ravit Show YouTube channel is live.#data #ai #vianai #theravitshow
See more: https://thinkfuture.substack.comConnect with Ibby: https://www.linkedin.com/in/ibby/---Can AI actually be trusted to make decisions? In this episode of thinkfuture, host Chris Kalaboukis sits down with Ibby, founder of Cotera, a platform making it easier for non-technical people to build powerful AI agents that solve real-world problems—without the hallucinations.Cotera acts as a translation layer between business needs and technical AI capabilities, empowering teams to automate complex workflows safely and effectively. By using multiple AI models to cross-check outputs and a “tiebreaker” model to handle edge cases, Cotera ensures accuracy, reliability, and trust in automation.We explore:- How Cotera enables non-technical users to build their own AI agents- The company's unique multi-model system to prevent hallucinations- Real-world applications in QA, fraud detection, and chatbot validation- Why critical thinking and creativity will define the future workforce- How education and job design must evolve for an AI-driven world- The balance between automation, human oversight, and trustIf you're interested in AI safety, workflow automation, or the evolution of work, this episode offers a grounded and hopeful look at how we can make AI useful—without losing control.
What happens when AI stops making mistakes… and starts misleading you?This discussion dives into one of the most important — and least understood — frontiers in artificial intelligence: AI deception.We explore how AI systems evolve from simple hallucinations (unintended errors) to deceptive behaviors — where models selectively distort truth to achieve goals or please human feedback loops. We unpack the coding incentives, enterprise risks, and governance challenges that make this issue critical for every executive leading AI transformation.Key Moments:00:00 What is AI Deception and Why It Matters3:43 Emergent Behaviors: From Hallucinations to Alignment to Deception4:40 Defining AI Deception6:15 Does AI Have a Moral Compass?7:20 Why AI Lies: Incentives to “Be Helpful” and Avoid Retraining15:12 Is Deception Built into LLMs? (And Can It Ever Be Solved?)18:00 Non-Human Intelligence Patterns: Hallucinations or Something Else?19:37 Enterprise Impact: What Business Leaders Need to Know27:00 Measuring Model Reliability: Can We Quantify AI Quality?34:00 Final Thoughts: The Future of Trustworthy AI Mentions:Scientists at OpenAI and Apollo Research showed in a paper that AI models lie and deceive: https://www.youtube.com/shorts/XuxVSPwW8I8TIME: New Tests Reveal AI's Capacity for DeceptionOpenAI: Detecting and reducing scheming in AI modelsStartupHub: OpenAI and Apollo Research Reveal AI Models Are Learning to Deceive: New Detection Methods Show PromiseMarcus WellerHugging Face Watch next: https://www.youtube.com/watch?v=plwN5XvlKMg&t=1s -- This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to.That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to meter for sponsoring. Go to meter.com/itv to book a demo.---IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
What happens when an AI strategy meets the real-world complexity of healthcare, law, and finance? That's the challenge at the heart of my conversation with Mark Sherwood, CIO of Wolters Kluwer, a global leader in professional information services. With over three decades in technology leadership across Microsoft, Symantec, and Nuance, Mark brings a rare combination of enterprise depth and hands-on pragmatism to the AI discussion. Mark explains why cloud-native architecture and data governance are the twin foundations of trustworthy AI. He shares how Wolters Kluwer is embedding AI across highly regulated industries—from helping doctors access life-saving insights through natural language queries to giving tax and legal professionals faster, more accurate guidance on complex regulations. Behind the innovation lies a disciplined approach: governing data, managing risk, and building confidence in AI systems that must meet the highest standards of accuracy and compliance. We also explore how to build high-trust, low-friction partnerships between IT and business teams to prevent shadow IT while accelerating digital transformation. Mark offers candid insights into the rise of AI agents, the emerging risks of quantum security, and why he believes that high-quality data is the most valuable currency in digital transformation. His philosophy is simple: speed means nothing without trust, and trust starts with clean, well-governed data. From cloud transformation to the future of AI regulation, this episode offers a grounded look at how global enterprises can scale responsibly in an era where innovation often outruns policy. So as AI becomes inseparable from how professionals think and work, how do we balance speed with stewardship? And are we truly ready for the ethical, technical, and quantum frontiers ahead? Share your thoughts after the episode.
As AI systems become more embedded in critical decisions—from healthcare to hiring—the need for transparency and trust has never been greater. But how do we document these powerful tools in a way that's both meaningful and actionable? In this episode, we'll welcome back Umang Bhatt, Assistant Professor in Trustworthy AI at the University of Cambridge and welcome Amy Winecoff, Senior Technologist for CDT as guest host. Together they'll explore the evolving landscape of AI documentation, its role in responsible deployment, and how emerging standards can help developers, policymakers, and the public understand and govern machine learning models more effectively.
In this episode of What's Next, we sit down with David Cosgrave, Country Manager for SAS South Africa, to reflect on the company's 30-year legacy in the country and its role in shaping the local technology and data landscape. David shares his journey stepping into the role, the privilege of leading during such a milestone year, and how SAS has built strong ties with South African businesses, universities, and graduates. From training more than 40,000 professionals to partnering with top institutions, SAS has become deeply embedded in the nation's business and skills ecosystem. The conversation also explores SAS's global legacy of nearly 50 years and its evolution from a statistical programming language to a global leader in analytics and AI. David discusses how the company is embracing the future with cloud technology, predictive and generative AI, and its focus on trustworthy AI for business-critical decision-making. With decades of experience in enabling organizations to unlock real value from their data, SAS continues to shape the future of analytics in South Africa and beyond.
Joe Lang, Vice President of Service Technology and Innovation at Comfort Systems USA, joins the AI in Business podcast to discuss why a clear data strategy must come before investing in storage infrastructure for AI adoption. Joe outlines the risks of assuming that cloud providers or storage solutions alone will produce reliable intelligence, and why organizations should approach AI initiatives as iterative R&D projects rather than instant ROI efforts. He shares practical guidance on right-sizing storage to business goals, addressing the skilled trade gap through scalable systems, and the advantages of a cloud-first approach with sequestered, trusted data. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! This episode is sponsored by Pure Storage. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The Theo Sim founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.
How do you test a GenAI application that's constantly changing? In this episode, Shane talks to Leonard Tang, co-founder of Haize Labs, about why traditional testing fails for LLMs and how to adopt a new evaluation strategy. Leonard introduces "fuzzing"—a powerful technique for discovering edge cases, improving reliability, and building AI you can actually trust. He also gives a live demo of the Haize Labs platform, so be sure to watch the video version on YouTube or Spotify to see it in action.
This episode focuses on the core themes of the HEREDITARY project: cutting-edge AI and advanced analytics, multimodal health data, and the vital issues of data privacy and security. It explores how AI and machine learning are reshaping scientific research—unlocking new possibilities while raising important questions around privacy and citizen engagement. Using HEREDITARY's bold vision and methodology as a foundation, it highlights how AI can become a powerful driver of EU leadership in science and innovation. Joining the discussion are Elisabetta Biasin, Researcher at the KU Leuven Centre for IT & Law (CiTiP) and HEREDITARY project partner, and Helena Ledmyr, Director of Development and Communications at the International Neuroinformatics Coordinating Facility (INCF). They share expert insights into how AI is transforming brain research, while navigating the complex legal, ethical, and regulatory frameworks that come with it. By the end of this episode, listeners will gain a clear view of how AI is powering new frontiers in neuroscience, better understand the legal landscape, and take away practical ideas for meeting data protection and ethical requirements in research.
Over the course of a calendar year ending in May 2025, the United States absorbed nearly $1 trillion in damages due to extreme weather. This amount, representing 3% of U.S. gross domestic product, was driven by rising insurance costs and a series of disasters primarily concentrated in the Ten Across geography, such as Hurricanes Helene and Milton and the fires in Los Angeles. More than ever before, timely and detailed forecasts are needed to properly prepare—and in some cases to evacuate—communities ahead of such extreme events. Leaders across sectors are further in need of advanced weather modeling to support larger-scale mitigation and adaptation efforts. The data that influence such public and private decision-making mainly stem from the National Weather Service's six billion daily weather observations. The NWS recently shed 600 of its 4,000 positions, prompting a public warning from five former agency directors that understaffing could undermine the quality and delivery of forecasts, potentially putting many Americans at greater risk. At the same time, advanced artificial intelligence capabilities are contributing to a trend toward increased commercial ownership of U.S. weather forecasting. However, today's guest, Dr. Amy McGovern, points out that while today's AI can create and curate efficient weather models better than a conventional supercomputer, its monitoring capabilities are not comparable to the collective experience and proficiency of NWS scientists. Listen in as Ten Across founder Duke Reiter and Dr. McGovern, an expert in the integration of AI in meteorological science, explore the current forecasting landscape and how the emergence of private sector AI-powered modeling is influencing its evolution. Related articles and resources: Read about Brightband's Extreme Weather Bench, led by Amy McGovern NOAA stops tracking cost of extreme weather and climate disasters (UtilityDive, May 2025) Former Weather Service Leaders Warn Staffing Cuts Could Lead to ‘Loss of Life' (The New York Times, May 2025) Stabilizing ‘operations,' the National Weather Service hires again after Trump cuts (NPR, June 2025) Lawmakers revive bipartisan forecasting bill (E&E News by Politico, June 2025) Credits:Host: Duke Reiter Producer and editor: Taylor Griffith Music by: Parallax Deep Research and support provided by: Kate Carefoot, Rae Ulrich, and Sabine Butler About our guest: Amy McGovern is the director and principal investigator for the NSF Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. She is also a Lloyd G. and Joyce Austin Presidential Professor in the University of Oklahoma's School of Meteorology and leader of the Interaction, Discovery, Exploration, and Adaptation (IDEA) lab, and lead AI and meteorology strategist for the AI-powered customized weather forecasting startup, Brightband.
Tanmai Gopal, CEO and Co-Founder at Hasura, discusses the importance of reliability and trustworthiness for both generative and agentic AI. We discuss the pitfalls in existing data pipelines and how to enhance the results.SHOW: 931SHOW TRANSCRIPT: The Cloudcast #931 TranscriptSHOW VIDEO: https://youtube.com/@TheCloudcastNET CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotwNEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS" SPONSORS:[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.[US CLOUD] Cut Enterprise IT Support Costs by 30-50% with US CloudSHOW NOTES:Hasura websiteHasura GitHubTopic 1 - Welcome to the show, Tanmai. Give everyone a quick introduction.Topic 2 - Our topic today is Reliable and Trustworthy AI Agents. First off, what's the problem we're solving for here (define reliability and trustworthiness)? Are we solving for hallucinations? Reliability? Connecting private and Enterprise data to models with fine-tuning or RAG?Topic 3 - How is reliability or trustworthiness measured? I would imagine this isn't black and white, but maybe a bit more subjective?Topic 4 - How do Agentic and GenAI differ, if at all, with this model? I would think that since Gen AI lends itself more to the creative side and Agent AI is very deterministic, the approaches to solving the problem might be different. Thoughts?Topic 5 - Let's talk about data pipelines. Today, many organizations take an off-the-shelf frontier or foundational model and then apply a RAG pipeline to it for customization. Sometimes fine-tuning is involved, but in my experience, this is the exception rather than the rule. What is wrong with that architecture today? How is this less reliable?Topic 6 - Let's talk about Hasura and PromptQL. As I understand it, you are decoupling query planning from execution, thereby creating a more deterministic AI workflow. Now… that's a mouthful. Can you break down what this means and explain how the architecture differs?Topic 7 - If anyone is interested, what's the best way to get started?FEEDBACK?Email: show at the cloudcast dot netBluesky: @cloudcastpod.bsky.socialTwitter/X: @cloudcastpodInstagram: @cloudcastpodTikTok: @cloudcastpod
In this episode of the Sustainable Living Podcast – AI for a Sustainable Future, we welcome Prof. Fredrik Heintz, a leading voice in European AI research, policy, and education. As Professor of Computer Science at Linköping University and Program Director of the WASP-ED initiative, Fredrik brings deep expertise in Trustworthy AI, autonomous systems, and the ethical integration of AI into society.We explore how AI is reshaping education, work, and social systems, and what it takes to ensure this transformation remains inclusive, ethical, and beneficial to all. From AI literacy in schools to policies that shape innovation acrossEurope, Prof. Heintz offers a vision rooted in collaboration, values, and long-term sustainability. Whether you're a policymaker, educator, tech enthusiast, or just curious about the future of AI, this conversation will spark reflection on how we build trust and resilience in an AI-driven world.
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM FULL SHOW NOTES https://www.microsoftinnovationpodcast.com/689 In a world where AI is reshaping how we build, lead, and learn, standing still is no longer an option. In this episode, the Ecosystem Show team dives into the evolving role of Microsoft partners, the rise of “vibe coding,” and the critical mindsets needed to thrive in tech's next chapter. Whether you're navigating AI adoption, rethinking your Power Platform strategy, or simply trying to stay relevant in a fast-changing landscape, this conversation offers clarity, challenge, and a roadmap forward. KEY TAKEAWAYS AI demands new mental models: Success in the AI era requires critical thinking, constant learning, and a willingness to challenge assumptions—even those served up by machines. Power Platform is evolving fast: It's no longer just low-code—it's becoming the enterprise-grade “vibe coding” platform, integrating seamlessly with advanced tools and governance systems. Trustworthy AI is non-negotiable: Leaders must embed safety, transparency, and validation into every AI practice. Microsoft's internal frameworks offer a strong starting point. The partner landscape is shifting: The best Microsoft partners are transforming their culture, offerings, and delivery models to meet the demands of AI-native enterprises. Your career is your responsibility: In a time of layoffs and disruption, professionals must take ownership of their growth by developing adaptive, future-focused mindsets. RESOURCES MENTIONED
This week on Alter Everything, we chat with Scott Jones and Treyson Marks from DCG Analytics about the history and misconceptions of AI, the importance of data quality, and how Alteryx can serve as a powerful tool for pre-processing AI data. Topics of this episode include the role of humans in auditing AI outputs and the critical need for curated data to ensure trustworthy results. Through real-world use cases, this episode explores how AI can significantly enhance analytics and decision-making processes in various industries.Panelists: Treyson Marks, Managing Partner @ DCG Analytiocs - LinkedInScott Jones, Principal analytics consultant @ DCG Analytics - LinkedInMegan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedInShow notes: DCG Analytics Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for the for our album artwork.
Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn't enough; and the hard work required to develop good AI. Phaedra Boinodiris is IBM's Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. A transcript of this episode is here. Additional Resources: Phaedra's Website - https://phaedra.ai/ The Future World Alliance - https://futureworldalliance.org/
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM FULL SHOW NOTES https://www.microsoftinnovationpodcast.com/666 We explore whether AI is overhyped or dangerously underhyped, examining the disconnect between those creating AI technology and those selling it without adequately addressing trustworthy AI concerns.TAKEAWAYS• The Microsoft AI Tour event demonstrated excellent technical content with a strong focus on trustworthy AI• There's a dangerous disconnect between people who make AI technology and those who sell it regarding responsible AI implementation• Trustworthy AI doesn't mean stopping innovation but preventing potential calamities• The scale of AI's impact may be drastically underestimated, similar to our inability to truly comprehend "65 million years since dinosaurs"• AI enables processing information at unprecedented scale, creating extraordinary risks in surveillance and human rights contexts• Corporate discussions about completely replacing customer service departments with AI raise serious socioeconomic concerns• Shadow AI applications being developed without proper governance represent significant risks• Containing AI's risks while harnessing its benefits requires education, curiosity, and political wisdom• Book recommendations: "The Coming Wave" by Mustafa Suleiman and "Origin" by Dan BrownGet educated and don't rely on echo chambers or news articles - read in-depth material from experts to form your own opinions about AI's trajectory and implications.OTHER RESOURCES90 Day Mentoring Challenge - https://www.90daymc.com/ Support the show - https://www.buymeacoffee.com/nz365guy This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world. DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff Accelerate your Microsoft career with the 90 Day Mentoring Challenge We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!Support the showIf you want to get in touch with me, you can message me here on Linkedin.Thanks for listening
Every “trustworthy” AI system quietly betrays at least one sacred principle. Ethical AI forces brutal trade-offs: Prioritizing any one aspect among fairness, accuracy, and transparency compromises the others. It's a messy game of Jenga: pull one block (like fairness), and accuracy wobbles; stabilize transparency, and performance tumbles. But why can't you be fair, accurate, AND transparent? And is there a solution? The Trilemma in Action Imagine you try to create ethical hiring algorithms. Prioritize diversity and you might ghost the best candidates. Obsess over qualifications and historical biases sneak in like uninvited guests. Same with chatbots. Force explanations and they'll robot-splain every comma. Let them “think” freely? You'll get confident lies about Elvis running a B&B on a Mars colony. Why Regulators Won't Save Us Should we set up laws that dictate universal error thresholds or fairness metrics? Regulators wisely steer clear of rigid one-size-fits-all rules. Smart move. They acknowledge AI's messy reality where a 3% mistake margin might be catastrophic for autonomous surgery bots but trivial for movie recommendation engines. The Path Forward? Some companies now use “ethical debt” trackers, logging trade-offs as rigorously as technical debt. They document their compromises openly, like a chef publishing rejected recipe variations alongside their final dish. Truth is: the real AI dilemma is that no AI system maximizes fairness, accuracy, and transparency simultaneously. So, what could we imagine? Letting users pick their poison with trade-off menus: “Click here for maximum fairness (slower, dumber AI)” or “Turbo mode (minor discrimination included)”? Or how about launching bias bounties: pay hackers to hunt unfairness and turn ethics into an extreme sport? Obviously, it's complicated. The Bullet-Proof System Sorry, there's no bullet-proof system since value conflicts will always demand context-specific sacrifices. After all, ethics isn't about avoiding hard choices, it's about admitting we're all balancing on a tightrope—and inviting everyone to see the safety net we've woven below. Should We Hold Machines to Higher Standards Than Humans? Trustworthy AI isn't achieved through perfect systems, but through processes that make our compromises legible, contestable, and revisable. After all, humans aren't fair, accurate, and transparent either.
Artificial Intelligence has become a hot-button issue, with questions about AI accuracy and precision. But this week, we're exploring the role of artificial intelligence in weather forecasting! Come Off the Radar with us as we learn about how generative AI modeling can now use historical weather data to make hyper-local predictions about future weather probabilities. We'll be talking to Dr. Amy McGovern from the National Science Foundation's AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. We'll also be chatting with Ilan Price, a Senior Research Scientist at Google DeepMind whose work centers around using AI in weather forecasting. If you rely on your phone to check the weather forecast, you won't want to miss this one!See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this CISO Tradecraft episode, host G. Mark Hardy delves into the recent U.S. presidential executive orders impacting AI and their implications for cybersecurity professionals. Learn about the evolution of AI policies from various administrations and how they influence national security, innovation, and the strategic decisions of CISOs. Discover key directives, deregulatory moves, and practical steps you can take to secure your AI systems in an era marked by rapidly changing regulations. Plus, explore the benefits of using AI tools like ZeroPath to bolster your cybersecurity efforts. Big Thanks to our Sponsors: ZeroPath - https://zeropath.com/ Transcripts: https://docs.google.com/document/d/1Nv27tpDQs2fjdOedJOi0LhlkyQ5N5dKt Links: https://www.americanbar.org/groups/public_education/publications/teaching-legal-docs/what-is-an-executive-order-/ https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence https://www.csis.org/analysis/made-china-2025 https://www.researchgate.net/publication/242704112_China's_15-year_Science_and_Technology_Plan https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence https://www.presidency.ucsb.edu/documents/executive-order-14148-initial-rescissions-harmful- executive-orders-and-actions https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting- innovation-in-the-nations-cybersecurity https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting- innovation-in-the-nations-cybersecurity https://www.cisecurity.org/controls/cis-controls-list Chapters 00:00 Introduction to AI Policy Shifts 00:23 AI Tool for Cybersecurity: ZeroPath 01:12 Understanding Executive Orders 02:44 EO 13859: Maintaining American Leadership in AI 05:42 EO 13960: Trustworthy AI in Federal Government 07:10 EO 14028: Strengthening U.S. Cybersecurity 09:38 EO 14110: Safe and Trustworthy AI Development 11:09 EO 14148: Rescinding AI Policies 12:21 EO 14179: Removing Barriers to AI Innovation 15:26 EO 14144: Strengthening Cybersecurity Innovation 37:19 Mapping Executive Orders to CIS Controls 40:15 Conclusion and Key Takeaways
In this CISO Tradecraft episode, host G. Mark Hardy delves into the recent U.S. presidential executive orders impacting AI and their implications for cybersecurity professionals. Learn about the evolution of AI policies from various administrations and how they influence national security, innovation, and the strategic decisions of CISOs. Discover key directives, deregulatory moves, and practical steps you can take to secure your AI systems in an era marked by rapidly changing regulations. Plus, explore the benefits of using AI tools like ZeroPath to bolster your cybersecurity efforts. Big Thanks to our Sponsors: ZeroPath - https://zeropath.com/ Transcripts: https://docs.google.com/document/d/1Nv27tpDQs2fjdOedJOi0LhlkyQ5N5dKt Links: https://www.americanbar.org/groups/public_education/publications/teaching-legal-docs/what-is-an-executive-order-/ https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence https://www.csis.org/analysis/made-china-2025 https://www.researchgate.net/publication/242704112_China's_15-year_Science_and_Technology_Plan https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence https://www.presidency.ucsb.edu/documents/executive-order-14148-initial-rescissions-harmful- executive-orders-and-actions https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting- innovation-in-the-nations-cybersecurity https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting- innovation-in-the-nations-cybersecurity https://www.cisecurity.org/controls/cis-controls-list Chapters 00:00 Introduction to AI Policy Shifts 00:23 AI Tool for Cybersecurity: ZeroPath 01:12 Understanding Executive Orders 02:44 EO 13859: Maintaining American Leadership in AI 05:42 EO 13960: Trustworthy AI in Federal Government 07:10 EO 14028: Strengthening U.S. Cybersecurity 09:38 EO 14110: Safe and Trustworthy AI Development 11:09 EO 14148: Rescinding AI Policies 12:21 EO 14179: Removing Barriers to AI Innovation 15:26 EO 14144: Strengthening Cybersecurity Innovation 37:19 Mapping Executive Orders to CIS Controls 40:15 Conclusion and Key Takeaways
Send us a textWelcome to the 21st edition of DigiPath Digest! In this episode, together with Dr. Aleksandra Zuraw you will review the latest digital pathology abstracts and gain insights into emerging trends in the field. Discover the promising results of the PSMA PET study for prostate cancer imaging, explore the collaborative open-source platform HistioColAI for enhancing histology image annotation, and learn about AI's role in improving breast cancer detection. Dive into topics such as the role of AI in renal histology classification, the innovative TrueCam framework for trustworthy AI in pathology, and the latest advancements in digital tools like QuPath for nephropathology. Stay tuned to elevate your digital pathology game with cutting-edge research and practical applications.00:00 Introduction to DigiPath Digest #2101:22 PSMA PET in Prostate Cancer06:49 HistoColAI: Collaborative Digital Histology12:34 AI in Mammogram Analysis17:21 Blood-Brain Barrier Organoids for Drug Testing22:02 Trustworthy AI in Lung Cancer Diagnosis30:09 QuPath for Nephropathology35:30 AI Predicts Endocrine Response in Breast Cancer40:04 Comprehensive Classification of Renal Histologic Types45:02 Conclusion and Viewer EngagementLinks and Resources:Subscribe to Digital Pathology Podcast on YouTubeFree E-book "Pathology 101"YouTube (unedited) version of this episodeTry Perplexity with my referral linkMy new page built with PerplexityHistoColAI Github PagePublications Discussed Today:
SanDisk Desk Drive fuses huge capacity with fast speeds! Learn all about it here first, with Christina Garza, Director of Product Marketing at Western DigitalBachelorette alum and firefighter Kevin Wendt drops by to talk about the first air purifier from LGHave dogs or cats? Amazon's Melissa Mohr, Director of Smart Home, is a guest on Tech It Out, to share what's new and innovative for your furry friendsPhaedra Boinodiris, IBM Consulting's Global Leader for Trustworthy AI, stop by to talk “generative AI” ethics and governanceThank you to Intel and Visa for your incredible support!
Send Everyday AI and Jordan a text messageDid you see that Coca-Cola holiday AI commercial!? It's been all the buzz lately. There were more than a dozen humans that worked on the project. Think that's shocking? Wait until you hear the REAL story and a new way that Coca-Cola is partnering with Microsoft to create some real magic.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Ask Jordan, Pratik and Marco questions on AI Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:1. AI and Coca Cola2. AI beyond Chatbots3. AI and Microsoft Azure Foundry4. AI Usage in Coca Cola CampaignsTimestamps:00:00 AI empowers creativity and consumer connection daily.05:57 Custom snow globes created using Microsoft technology.10:11 Discussing Coca-Cola's nostalgic Santa Claus depiction challenges.12:19 Reimagining Santa authentically with cutting-edge technology.15:13 Started Gen AI and OpenAI collaboration in 2022.18:30 Combining human creativity and AI technology.22:27 Trustworthy AI is crucial for responsible deployment.23:50 Generative AI offers unique consumer connections.Keywords:AI, generative AI, Copilot, productivity, consumers, everyday AI, artificial intelligence, Microsoft WorkLab, Azure AI, Microsoft Ignite conference, Pratik Thakar, Coca Cola Company, digital twin, Leonardo AI, chatbot, Outlook calendar, Marco Casalaina, Create Real Magic campaign, Microsoft, marketing, commercials, AI commercials, AI Foundry, programming model, trustworthy AI, ethical prompting strategy, technology, digital marketing, nostalgia, Coca Cola Santa.
For episode 459, Co-founder & CEO Salman Avestimehr joins Brandon Zemp to discuss ChainOpera, a decentralized AI platform and a community-driven generative AI application ecosystem. He is also the Dean's Professor of ECE and CS at the University of Southern California (USC), the inaugural director of the USC-Amazon Center on Trustworthy AI, and co-founder of TensorOpera. He is an expert in machine learning, information theory, security/privacy, and blockchain systems, with more than 10 years of R&D leadership in both academia and industry. He is a United States Presidential award winner for his profound contributions in information technology, and a Fellow of IEEE. He received his PhD from UC Berkeley/EECS in 2008, and has held Advisory positions at various Tech companies, including Amazon. ⏳ Timestamps: 0:00 | Introduction 0:55 | Who is Salman Avestimehr? 5:04 | Web3 interest on College campuses 6:00 | What is ChainOpera? 8:01 | What is the AI Economy? 13:18 | AI Agents 14:14 | ChainOpera's Decentralized AI Platform 17:44 | ChainOpera's infrastructure 20:43 | Security & Privacy of Decentralized AI 25:46 | ChainOpera 2025 Roadmap 27:28 | ChainOpera website, social media & community
Janice is joined by Lara Abrash, Chair at Deliotte US, the largest multiprofessional services network in the US, where they talk about how to connect ideas, innovations, and industries to create prosperity for clients, society and the planet. They discuss how Laras family links to her success within the company today, and the new upcoming AI implications.Tags: janice, ellig, lara, abrash. chair, deloitte, us, ai, professional, technology, clients, society, family, lesson, mentor
Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST's 'AI Risk Management Framework' (AI RMF) within the context of the White House's 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence'.
Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST's 'AI Risk Management Framework' (AI RMF) within the context of the White House's 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence'.
This episode is sponsored by Oracle. AI is revolutionizing industries, but needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Be ahead like Uber and Cohere. If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai In this episode of the Eye on AI podcast, we sit down with Mark Surman, President of Mozilla, to explore the future of open-source AI and how Mozilla is leading the charge for privacy, transparency, and ethical technology. Mark shares Mozilla's vision for AI, detailing the company's innovative approach to building trustworthy AI and the launch of Mozilla AI. He explains how Mozilla is working to make AI open, accessible, and secure for everyone—just as it did for the web with Firefox. We also dive into the growing importance of federated learning and AI governance, and how Mozilla Ventures is supporting groundbreaking companies like Flower AI. Throughout the conversation, Mark discusses the critical need for open-source AI alternatives to proprietary models like OpenAI and Meta's LLaMA. He outlines the challenges with closed systems and highlights Mozilla's work in giving users the freedom to choose AI models directly in Firefox. Mark provides a fascinating look into the future of AI and how open-source technologies can create trillions in economic value while maintaining privacy and inclusivity. He also sheds light on the global race for AI innovation, touching on developments from China and the impact of public AI funding. Don't forget to like, subscribe, and hit the notification bell to stay up to date with the latest trends in AI, open-source tech, and machine learning! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Mark Surman and Mozilla's Mission (02:01) The Evolution of Mozilla: From Firefox to AI (04:40) Open-Source Movement and Mozilla's Legacy (06:58) The Role of Open-Source in AI (11:06) Advancing Federated Learning and AI Governance (14:10) Integrating AI Models into Firefox (16:28) Open vs Closed Models (22:09) Partnering with Non-Profit AI Labs for Open-Source AI (25:08) How Meta's Strategy Compares to OpenAI and Others (27:58) Global Competition in AI Innovation (31:17) The Cost of Training AI Models (33:36) Public AI Funding and the Role of Government (37:40) The Geopolitics of AI and Open Source (41:35) Mozilla's Vision for the Future of AI and Responsible Tech
In this episode Michael and Sarah talk to Nestori Syynimaa about Entra ID security and his purple-team tool, AADInternals. We also cover the latest security news about Secure Future Initiative (SFI), MFA for Azure Portal, Playright, WordPress, NSG, Bastion, Azure Functions, MS Ignite, App Service, Defender for Cloud, Containers, Azure Monitor, AKS, Trustworthy AI and Azure AI Content Safety.https://aka.ms/azsecpod
In this episode, Kevin Werbach is joined by Reggie Townsend, VP of Data Ethics at SAS, an analytics software for business platform. Together they discuss SAS's nearly 50-year long history of supporting business's technology and the recent implementation of responsible AI initiatives. Reggie introduces model cards and the importance of variety in AI systems across diverse stakeholders and sectors. Reggie and Kevin explore the increase in both consumer trust and purchases when they feel a brand is ethical in its use of AI and the importance of trustworthy AI in employee retention and recruitment. Their discussion approaches the idea of bias in an untraditional way, highlighting the positive humanistic nature of bias and learning to manage the negative implications. Finally, Reggie shares his insights on fostering ethical AI practices through literacy and open dialogue, stressing the importance of authentic commitment and collaboration among developers, deployers, and regulators. SAS adds to its trustworthy AI offerings with model cards and AI governance services Article by Reggie Townsend: Talking AI in Washington, DC Reggie Townsend oversees the Data Ethics Practice (DEP) at SAS Institute. He leads the global effort for consistency and coordination of strategies that empower employees and customers to deploy data driven systems that promote human well-being, agency and equity. He has over 20 years of experience in strategic planning, management, and consulting focusing on topics such as advanced analytics, cloud computing and artificial intelligence. With visibility across multiple industries and sectors where the use of AI is growing, he combines this extensive business and technology expertise with a passion for equity and human empowerment. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Dino Martis is the Chief Executive Officer & Founder at Genexia LLC. Genexia provides a platform that enables simultaneous mammogram and coronary artery disease (CAD) risk diagnosis using the same images, with no alteration to workflow. In this episode, KJ and Dino discuss the severe underdiagnosis of this condition in women, the clinical bias in healthcare, and Genexia's innovative approach of integrating coronary artery disease diagnosis with routine mammograms. Dino shares his personal motivation behind this venture, emphasizing the impact of cardiovascular disease on women's lives and how early detection can prevent heart attacks and strokes, ultimately aiming for a 50% reduction in deaths. Key Takeaways: 03:09 The Importance of Women's Health 05:49 Challenges in Diagnosing Women's Heart Health 16:27 Innovative Solutions in Health Tech 23:02 The Broader Impact of Preventative Care Quote of the Show (10:00): "We believe the disruption that we are bringing to healthcare is one of health equity and democratization. Women are central to the family, central to the community." – Dino Martis Join our Anti-PR newsletter where we're keeping a watchful and clever eye on PR trends, PR fails, and interesting news in tech so you don't have to. You're welcome. Want PR that actually matters? Get 30 minutes of expert advice in a fast-paced, zero-nonsense session from Karla Jo Helms, a veteran Crisis PR and Anti-PR Strategist who knows how to tell your story in the best possible light and get the exposure you need to disrupt your industry. Click here to book your call: https://info.jotopr.com/free-anti-pr-eval Ways to connect with Dino MartisLinkedIn: https://www.linkedin.com/in/dino-martis/ X:https://twitter.com/Dino_Martis Company Website: https://genexia.co/ WCPO Interview:https://www.wcpo.com/news/local-news/finding-solutions/cincinnati-based-company-using-ai-to-diagnose-coronary-artery-disease-risk-during-a-mammogram How to get more Disruption/Interruption: Amazon Music - https://music.amazon.com/podcasts/eccda84d-4d5b-4c52-ba54-7fd8af3cbe87/disruption-interruption Apple Podcast - https://podcasts.apple.com/us/podcast/disruption-interruption/id1581985755 Spotify - https://open.spotify.com/show/6yGSwcSp8J354awJkCmJlDSee omnystudio.com/listener for privacy information.
On this episode of “B The Way Forward,” Host Brenda Darden Wilkerson is joined by Beena Ammanath, an executive, author, advocate, AnitaB.org board member, and nonprofit founder, who aims to increase awareness on the use, risks, and benefits of artificial intelligence, all while promoting diversity in this niche tech space. Beena is the Executive Director at the Deloitte Global AI Institute, where she helps companies and businesses learn how to leverage AI in the most practical and safe ways possible. Through this conversation, Beena offers listeners insight on how to utilize AI in every aspect of business and in our own personal career paths. As a computer scientist by trade, there was nothing in Beena's education or curriculum about ethics in the AI space, which led her into forging her own unique path to incorporate them into her career. Beena penned Trustworthy AI, a book that bridges the gap for readers on ethics and AI, and Zero Latency Leadership, which looks at other new emerging technologies that are on the horizon. Through all of this work, she also became an advocate for women and minorities in the AI realm, knowing that in order for AI to be successful, it needs to have diverse voices at the table. Brenda and Beena discuss how more people can become “AI Fluent”, why diversity in technology is crucial, and how to raise your voice to make the best use of these technologies. “Diversity has so many different angles. It's the culture, the experience, the education, age, the geographic location you come from. There are so many nuances to diversity, and for your AI products to be robust, you have to factor in. Start with the largest demographic, but try to bring in as much diversity to your AI teams as you can, because it's only going to make your product better and make more profit.” For more, check out Been and Delloitte... On LinkedIn - /bammanath | /delloitte On the Web - https://beenammanath.com/ | Deloitte AI Institute - AI Insights --- At AnitaB.org, we envision a future where the people who imagine and build technology mirror the people and societies for whom they build it. Find out more about how we support women, non-binary individuals, and other underrepresented groups in computing, as well as the organizations that employ them and the academic institutions training the next generations. --- Connect with AnitaB.org Instagram - @anitab_org Facebook - /anitab.0rg LinkedIn - /anitab-org On the web - anitab.org --- Our guests contribute to this podcast in their personal capacity. The views expressed in this interview are their own and do not necessarily represent the views of Anita Borg Institute for Women and Technology or its employees (“AnitaB.org”). AnitaB.org is not responsible for and does not verify the accuracy of the information provided in the podcast series. The primary purpose of this podcast is to educate and inform. This podcast series does not constitute legal or other professional advice or services. --- B The Way Forward Is… Produced by Dominique Ferrari and Paige Hymson Sound design and editing by Neil Innes and Ryan Hammond Mixing and mastering by Julian Kwasneski Associate Producer is Faith Krogulecki Executive Produced by Dominique Ferrari, Stacey Book, and Avi Glijansky for Riveter Studios and Frequency Machine Executive Produced by Brenda Darden Wilkerson for AnitaB.org Podcast Marketing from Lauren Passell and Arielle Nissenblatt with Riveter Studios and Tink Media in partnership with Coley Bouschet at AnitaB.org Photo of Brenda Darden Wilkerson by Mandisa Media Productions For more ways to be the way forward, visit AnitaB.org
In this episode of AI, Government, and the Future, host Marc Leh is joined by Giorgos Verdi, Distinguished Policy Fellow at the European Council on Foreign Relations, to discuss the EU's pioneering AI Act, its implications for innovation, and Europe's role in shaping global AI standards. Giorgos shares insights on the challenges and opportunities facing European tech companies, the geopolitical factors influencing AI development, and the potential for AI to transform government services.
Resources:OpenAI co-founder leaves for AnthropicMicrosoft says OpenAI is now a competitor in AI and searchZoom Is Going After Google and Microsoft With AI-Driven DocsMethod prevents an AI model from being overconfident about wrong answersConnect with Jill: linkedin.com/in/jill-berkowitzConnect with Will: linkedin.com/in/william-jonathan-bowen___Check out Will's AI Digest___160 Characters is powered by Clerk Chat.
The United States Department of Education recently released a new report called "Designing for Education with Artificial Intelligence: An Essential Guide for Developers." The guide seeks to inform ed tech developers as they create AI products and services for use in education — and help them work toward AI safety, security, and trust. We spoke with Kevin Johnstun, education program specialist in ED's Office of Educational Technology, about the ins and outs of the report and what it means for education institutions. Resource links: Designing for Education with Artificial Intelligence: An Essential Guide for Developers Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations Music: Mixkit Duration: 21 minutes Transcript
Update 3 on The Future of BIPOC, Disabled and LGBTQ+ Artists with Colony Little and Evonne GallardoIn this episode we discuss:The Importance of Community: Delving into social weaving and the necessity of creating spaces to go deep rather than wide.History, Land, and Care: Examining the intricate relationship between history, land, and the need for foregrounded care.Cultural and Societal Shifts: Discussing how recent changes have impacted the visibility and acceptance of marginalized artists.Underground Spaces: Exploring whether artists today are creating their own modern version of the “underground.”Supporting Artists: Strategies for protecting and supporting artists and culture bearers in today's world.Generational Perspectives on Tech: Investigating whether younger generations are embracing or rejecting social media and technology.Technology and Representation: Analyzing the role of technology in shaping the future of representation for marginalized artists.Creative Ecosystem Navigation: Offering practical advice for creatives of any age navigating the current ecosystem.Honoring Debra Padilla: Giving Debra Padilla her well-deserved flowers.Hip Hop and LA's Success: Our love for Hip Hop and discussing how LA is winning right now.Episode linksFanshen Cox's WTYM EpisodeThe Institute for Trustworthy AI in Law & Society (TRAILS)WTYM EP 57 Evonne Gallardo: Latinx Arts and Cultural ManagementWTYM EP 67 Colony Little: Seeing Yourself in ArtWord To Your Mama Guest Hype Songs PlaylistWTYM LINKSRitzy PeriwinkleBook Ritzy P as a SpeakerWord To Your Mama Store: Use code WTYM at check out to receive 10% off any order YouTubeMental Health ResourcesWTYM Patreon PageDONATEMEDIA KITWTYM was recorded using Riverside.FM TRY NOWAVAILABLE WHERE EVER YOU CONSUME PODCASTS on socials @wtymama | email: hola@wordtoyourmama.com
AI's potential to revolutionize healthcare requires a focus on responsible and trustworthy implementation. In this episode, Dr. David Rhew from Microsoft, along with Marcella Dalrymple and Dr. Michael Pencina from Duke Health, discuss the collaboration between Microsoft and Duke Health to explore the transformative potential of artificial intelligence (AI) in healthcare. Dr. Rhew emphasizes the importance of responsible and trustworthy AI, acknowledging its limitations and the need for operationalizing key principles. Dr. Pencina outlines four principles for trustworthy AI: prioritizing the human person, defining AI use cases, anticipating consequences, and establishing governance. Marcella Dalrymple, with her community perspective, highlights the necessity of addressing public uncertainty and mistrust regarding AI development. The partnership aims to form a Center of Excellence for trustworthy AI, focusing on collaborative efforts to align with ethical values and engage the community bidirectionally. The guests stress the importance of a robust governance system, automation for efficiency, and continuous monitoring to ensure AI's intended impact. Tune in and learn how this collaboration strives to revolutionize healthcare responsibly through AI! Resources: Connect with and follow David Rhew on LinkedIn. Follow Microsoft on LinkedIn and visit their website. Connect with and follow Marcella Dalrymple on LinkedIn. Connect with and follow Michael Pencina on LinkedIn. Follow Duke Health AI on LinkedIn and visit their website.
NVIDIA's Head of AI & Legal Ethics, Nikki Pope, talks about why how we talk about artificial intelligence matters and why making the tech representative of more people is important. She and Niki break down some of the myths surrounding the tech and re-examine what regulators should be focused on when they think of “existential threat” and AI. Spoiler alert - it's not what Hollywood thinks! “...democratization of AI means making sure that we don't leave cultures and languages and communities behind as we all go running breakneck into the future.” -Nikki PopeFollow Nikki Pope on LinkedInRead more about Te Hiku MediaLearn more about NVIDIA's Trustworthy AI initiative Learn More at www.techedup.com Follow us on Instagram Check out video on YouTube Follow Niki on LinkedIn
Anyone who has experimented with generative AI knows the tech is still flawed. Despite massive investments in AI models, tools, and applications, the fact that AI outputs are still biased and inconsistently accurate raises global concerns regarding trustworthiness and who is responsible for making AI safe as it evolves at earth-shattering speed. The unfortunate truth is that presently, the majority of AI models only reflect a narrow sample of our collective humanity, inevitably reinforcing the existing biases of those who programmed AI and the narrow data sets used, making today's AI models inept at delivering diverse perspectives. Unpacking the ethics and path to a safer, more responsible, and representative AI future is Phaedra Boinodiris, IBM Consulting's Global Leader for Trustworthy AI. Phaedra is a top voice, author, speaker, and one of the earliest leaders responsible for reimagining AI initiatives. Her recent book, “AI for the Rest of Us,” and her role as co-founder of the Future World Alliance highlight her commitment to integrating ethics into AI education and development. She's here to discuss the need for inclusive AI that represents all of humanity, outlining the important considerations leaders should take into account to ensure their AI initiatives are ethical, inclusive, and able to effectively augment our capabilities without compromising human values. Further, what AI governance models look like at IBM and how to develop the right teams to develop truly groundbreaking AI solutions without compromise. We talk about how AI ethics intersects with broader societal issues, including education, corporations, and parental responsibilities. Phaedra also shares IBM's approaches to AI training, tools, teams, and transparency as well as the importance of AI literacy in different fields, and why diversity is crucial in AI development. Tune in to understand why we must approach AI with the intentionality it demands so it can work for humanity, and not against it. — Key Takeaways: Introducing Phaedra Boinodiris & Her Take On Ethical AI (00:00) IBM's Ethical Governance Model For Training AI (12:58) AI Transparency & Accountability vs. The AI Arms Race (17:11) Is Decentralized AI The Way Forward? (22:26) What C-Suite Leaders Need To Know & Preventing Misinformation (24:43) AI Regulation, Intellectual Property & Tips for Parents (32:13) What Sparked Phaedra's Passion for AI & Tech (42:39) Speed Round Questions (46:25) — ADDITIONAL RESOURCES Connect with Phaedra Boinodiris: Website: https://phaedra.ai/ LinkedIn: https://www.linkedin.com/in/phaedra/ Pick up Phaedra's book, “AI for the Rest of Us,” for insights on developing inclusive and responsible AI: https://aifortherestofus.us Learn about the Future World Alliance here: https://futureworldalliance.org Subscribe to our YouTube channel: https://bit.ly/44ieyPB Follow our podcast: Apple Podcasts: https://apple.co/44kONi6 Spotify: https://spoti.fi/3NtVK9W Join the TDW tribe and learn more: https://disruptedwork.com
Join Professor Kevin Werbach and Dominique Shelton Leipzig, an expert in data privacy and technology law, as they share practical insights on AI's transformative potential and regulatory challenges in this episode on The Road to Accountable AI. They dissect the ripple effects of recent legislation, and why setting industry standards and codifying trust in AI are more than mere legal checkboxes—they're the bedrock of innovation and integrity in business. Transitioning from theory to practice, this episode uncovers what it truly means to govern AI systems that are accurate, safe, and respectful of privacy. Kevin and Dominique navigate through the high-risk scenarios outlined by the EU and discuss how companies can future-proof their brands by adopting AI governance strategies. Dominique Shelton Leipzig is a partner and head of the Ad Tech Privacy & Data Management team and the Global Data Innovation team at the law firm Mayer Brown. She is the author of the recent book Trust: Responsible AI, Innovation, Privacy and Data Leadership. Dominique co-founded NxtWork, a non-profit aimed at diversifying leadership in corporate America, and has trained over 50,000 professionals in data privacy, AI, and data leadership. She has been named a "Legal Visionary" by the Los Angeles Times, a "Top Cyber Lawyer" by the Daily Journal, and a "Leading Lawyer" by Legal 500. Trust: Responsible AI, Innovation, Privacy and Data Leadership Mayer Brown Digital Trust Summit A Framework for Assessing AI Risk Dominique's Data Privacy Recommendation Enacted in Biden's EO
The one where Hinge needs to drop their location Emma and Nicole speak to Apryl Williams, an assistant professor of communication and digital studies at the University of Michigan, senior fellow in Trustworthy AI at the Mozilla Foundation, and faculty associate at Harvard University's Berkman Klein Center for Internet & Society. She's the author of Not My Type: Automating Sexual Racism in Online Dating. They discuss Apryl's research into about dating app inequality and sexual racism in online dating and how prejudice and bias gets baked into modern day dating culture through algorithms and AI. Pre-order our book The Half Of It: https://lnkfi.re/nf0upC Apryl's Twitter: https://twitter.com/AprylW Instagram: https://instagram.com/mixedup.podcast Website: https://www.mixedup.co.uk/ Substack: https://mixeduppod.substack.com
Are AI hallucinations undermining trust in machine learning, and can Retrieval-Augmented Generation (RAG) offer a solution? As we invite Rahul Pradhan, VP of Product and Strategy at Couchbase, to our podcast, we delve into the fascinating yet challenging issue of AI hallucinations—situations where AI systems generate plausible but factually incorrect content. This phenomenon poses risks to AI's reliability and threatens its adoption across critical sectors like healthcare and legal industries, where precision is paramount. In this episode, Rahul will explain how these hallucinations occur in AI models that operate on probability, often simulating understanding without genuine comprehension. The consequences? A potential erosion of trust in automated systems is a barrier that is particularly significant in domains where the stakes are high, and errors can have profound implications. But fear not, there's a beacon of hope on the horizon—Retrieval-Augmented Generation (RAG). Rahul will discuss how RAG integrates a retrieval component that pulls real-time, relevant data before generating responses, thereby grounding AI outputs in reality and significantly mitigating the risk of hallucinations. He will also show how Couchbase's innovative data management capabilities enable this technology by combining operational and training data to enhance accuracy and relevance. Moreover, Rahul will explore RAG's broader implications. From enhancing personalization in content generation to facilitating sophisticated decision-making across various industries, RAG stands out as a pivotal innovation in promoting more transparent, accountable, and responsible AI applications. Join us as we navigate the labyrinth of AI hallucinations and the transformative power of the Retrieval-Augmented Generation. How might this technology reshape the landscape of AI deployment across different sectors? After listening, we eagerly await your thoughts on whether RAG could be the key to building more trustworthy AI systems.
The current AI ecosystem has plenty in common with the early days of the web. So, what have we learned? We Meet: Mark Surman, President of Mozilla Credits: This episode of SHIFT was produced by Jennifer Strong with help from Emma Cillekens. It was mixed by Garret Lang, with original music from him and Jacob Gorski. Art by Anthony Green.