Podcasts about responsible AI

  • 620 podcasts
  • 1,085 episodes
  • 36m avg duration
  • 1 new episode daily
  • Latest: Oct 9, 2025

POPULARITY

[Popularity trend chart, 2017–2024]


Best podcasts about responsible AI


Latest podcast episodes about responsible AI

IT Visionaries
Cisco's Vijoy Pandey: The New Internet, AI Agents, and Quantum Networks

IT Visionaries

Oct 9, 2025 · 61:04


Cisco's Vijoy Pandey - SVP & GM of Outshift by Cisco - explains how AI agents and quantum networks could completely redefine how software, infrastructure, and security function in the next decade.

You'll learn:
→ What “Agentic AI” and the “Internet of Agents” actually are
→ How Cisco open-sourced the Internet of Agents framework and why decentralization matters
→ The security threat of “store-now, decrypt-later” attacks—and how post-quantum cryptography will defend against them
→ How Outshift's “freedom to fail” model fuels real innovation inside a Fortune 500 company
→ Why the next generation of software will blur the line between humans, AI agents, and machines
→ The vision behind Cisco's Quantum Internet—and two real-world use cases you can see today: Quantum Sync and Quantum Alert

About Today's Guest:
Meet Vijoy Pandey, the mind behind Cisco's Outshift—a team pushing the boundaries of what's next in AI, quantum computing, and the future internet. With 80+ patents to his name and a career spent redefining how systems connect and think, he's one of the few leaders truly building the next era of computing before the rest of us even see it coming.

Key Moments:
00:00 Meet Vijoy Pandey & Outshift's mission
04:30 The two hardest problems in computer science: Superintelligence & Quantum Computing
06:30 Why “freedom to fail” is Cisco's innovation superpower
10:20 Inside the Outshift model: incubating like a startup inside Cisco
21:00 What is Agentic AI? The rise of the Internet of Agents
27:00 AGNTCY.org and open-sourcing the Internet of Agents
32:00 What would an Internet of Agents actually look like?
38:19 Responsible AI & governance: putting guardrails in early
49:40 What is quantum computing? What is quantum networking?
55:27 The vision for a global Quantum Internet

Watch Next: https://youtu.be/-Jb2tWsAVwI?si=l79rdEGxB-i-Wrrn

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything: wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org.

Deep State Radio
Siliconsciousness: Why Responsible AI Begins with Each of Us

Deep State Radio

Oct 2, 2025 · 35:52


The best time to regulate AI was yesterday, and the next best time is now. There is a clear and urgent need for responsible AI development that implements reasonable guidelines to mitigate harms and foster innovation, yet the conversation in DC and capitals around the world remains muddled. NYU's Dr. Julia Stoyanovich joins David Rothkopf to explore the role of collective action in AI development and why responsible AI is the responsibility of each of us. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC.

The Road to Accountable AI
Heather Domin: From Principles to Practice

The Road to Accountable AI

Oct 2, 2025 · 34:38 · Transcription Available


Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts. Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly evolving AI systems. She describes her role at HCLTech, where client-facing projects across multiple industries and jurisdictions create unique governance challenges that require balancing company standards with client-specific risk frameworks. Domin notes that while most executives acknowledge the importance of responsible AI, few feel prepared to operationalize it. She emphasizes the growing demand for proof and accountability from regulators and courts, and finds the work exciting for its urgency and global impact. She also talks about the new challenges of agentic AI, and the potential for "oversight agents" that use AI to govern AI.

Heather Domin is Global Head of the Office of Responsible AI and Governance at HCLTech and co-chair of the IAPP AI Governance Professional Certification. A former leader of IBM's AI ethics initiatives, she has helped shape global standards and practices in responsible AI. Named one of the Top 100 Brilliant Women in AI Ethics™ 2025, her work has been featured in Stanford executive education and outlets including CNBC, AI Today, Management Today, Computer Weekly, AI Journal, and the California Management Review.

Transcript
AI Governance in the Agentic Era
Implementing Responsible AI in the Generative Age - Study Between HCLTech and MIT

Pondering AI
To Be or Not to Be Agentic with Maximilian Vogel

Pondering AI

Oct 1, 2025 · 51:19


Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.

Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; and AI agents as a support team and the implications for human work.

Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.

Related Resources
Medium: https://medium.com/@maximilian.vogel
A transcript of this episode is here.

HLTH Matters
AI @ HLTH: Inside Welldoc's Mission: Responsible AI and the Next Leap in Digital Health

HLTH Matters

Sep 30, 2025 · 24:52


In this episode, host Sandy Vance and Anand Iyer discuss Welldoc's core philosophy of responsible innovation, particularly how they are pushing the boundaries of AI while maintaining a strong commitment to safety, compliance, and member trust. Anand reveals how Welldoc is shaping the future of AI in healthcare by collaborating with the FDA, addressing bias, and leading a national effort for interoperability. Discover why responsibility, trust, and consumer empowerment are the keys to turning digital health innovation into safer, more proactive care. Healthcare innovation must be responsible in order to be effective.

In this episode, they talk about:
• Why healthcare innovation must be responsible
• The four levels of AI and how they are best used
• Working with the FDA to advance safe, high-risk features
• Driving interoperability, reducing friction, and encouraging consumer empowerment
• Addressing bias by using diverse data, garnering trust, and establishing the right guardrails
• Governance through consistent standards
• Why partnerships are the key to growing AI responsibly

Key Takeaways:
• Trust, safety, and compliance are non-negotiable foundations of innovation.
• Regulators are partners in shaping the future of AI, not barriers.
• Responsible AI creates safer, more equitable, and more proactive care.

A Little About Anand Iyer:
Anand is a respected global digital health innovator and leader, most known for his insights on and experience with technology, strategy, and regulatory policy. Anand has been instrumental in Welldoc's success and the development of BlueStar®, the first FDA-cleared digital therapeutic for adults with type 2 diabetes. Since joining Welldoc in 2008, he has held core leadership positions including President and Chief Operating Officer and Chief Strategy Officer. In 2013, Anand was named “Maryland Healthcare Innovator of the Year” in the field of mobile health. Anand was also recognized as a top AI thought leader globally when he was named to Constellation Research's prestigious AI 150 list in 2024.

Oracle University Podcast
AI Across Industries and the Importance of Responsible AI

Oracle University Podcast

Sep 30, 2025 · 18:55


AI is reshaping industries at a rapid pace, but as its influence grows, so do the ethical concerns that come with it. This episode examines how AI is being applied across sectors such as healthcare, finance, and retail, while also exploring the crucial issue of ensuring that these technologies align with human values. In this conversation, Lois Houston and Nikita Abraham are joined by Hemant Gahankari, Senior Principal OCI Instructor, who emphasizes the importance of fairness, inclusivity, transparency, and accountability in AI systems.

AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.
Nikita: Hey everyone! In our last episode, we spoke about how Oracle integrates AI capabilities into its Fusion Applications to enhance business workflows, and we focused on Predictive, Generative, and Agentic AI.
Lois: Today, we'll discuss the various applications of AI. This is the final episode in our AI series, and before we close, we'll also touch upon ethical and responsible AI.

01:01 Nikita: Taking us through all of this is Senior Principal OCI Instructor Hemant Gahankari. Hi Hemant! AI is pretty much everywhere today. So, can you explain how it is being used in industries like retail, hospitality, health care, and so on?
Hemant: AI isn't just for sci-fi movies anymore. It's helping doctors spot diseases earlier and even discover new drugs faster. Imagine an AI that can look at an X-ray and say, hey, there is something sketchy here before a human even notices. Wild, right? Banks and fintech companies are all over AI. Fraud detection. AI has got it covered. Those robo advisors managing your investments? That's AI too. Ever noticed how e-commerce companies always seem to know what you want? That's AI studying your habits and nudging you towards that next purchase or binge watch. Factories are getting smarter. AI predicts when machines will fail so they can fix them before everything grinds to a halt. Less downtime, more efficiency. Everyone wins. Farming has gone high tech. Drones and AI analyze crops, optimize water use, and even help with harvesting. Self-driving cars get all the hype, but even your everyday GPS uses AI to dodge traffic jams. And if AI can save me from sitting in bumper-to-bumper traffic, I'm all for it.

02:40 Nikita: Agreed! Thanks for that overview, but let's get into specific scenarios within each industry.
Hemant: Let us take a scenario in the retail industry: a retail clothing line with dozens of brick-and-mortar stores. Maintaining proper inventory levels in stores and regional warehouses is critical for retailers. In this low-margin business, being out of a popular product is especially challenging during sales and promotions.
Managers want to delight shoppers and increase sales, but without overbuying. That's where AI steps in. The retailer has multiple information sources, ranging from point-of-sale terminals to warehouse inventory systems. This data can be used to train a forecasting model that can make predictions, such as demand increase due to a holiday or planned marketing promotion, and determine the time required to acquire and distribute the extra inventory. Most ERP-based forecasting systems can produce sophisticated reports. A generative AI report writer goes further, creating custom plain-language summaries of these reports tailored for each store, instructing managers about how to maximize sales of well-stocked items while mitigating possible shortages.

04:11 Lois: Ok. How is AI being used in the hospitality sector, Hemant?
Hemant: Let us take an example of a hotel chain that depends on positive ratings on social media and review websites. One common challenge they face is keeping track of online reviews, leading to missed opportunities to engage unhappy customers complaining on social media. Hotel managers don't know what's being said fast enough to address problems in real time. Here, AI can be used to create a large data set from the tens of thousands of previously published online reviews. A textual language AI system can perform a sentiment analysis across the data to determine a baseline that can be periodically re-evaluated to spot trends. Data scientists could also build a model that correlates these textual messages and their sentiments against specific hotel locations and other factors, such as weather. Generative AI can extract valuable suggestions and insights from both positive and negative comments.

05:27 Nikita: That's great. And what about financial services? I know banks use AI quite often to detect fraud.
Hemant: Unfortunately, fraud can creep into any part of a bank's retail operations. Fraud can happen with online transactions, from a phone or browser, and at offsite ATMs too. Without trust, banks won't have customers or shareholders. Excessive fraud and delays in detecting it can violate financial industry regulations. Fraud detection combines AI technologies, such as computer vision to interpret scanned documents, document verification to authenticate IDs like driver's licenses, and machine learning to analyze patterns. These tools work together to assess the risk of fraud in each transaction within seconds. When the system detects a high risk, it triggers automated responses, such as placing holds on withdrawals or requesting additional identification from customers, to prevent fraudulent activity and protect both the business and its clients.

06:42 Nikita: Wow, interesting. And how is AI being used in the health industry, especially when it comes to improving patient care?
Hemant: Medical appointments can be frustrating for everyone involved—patients, receptionists, nurses, and physicians. There are many time-consuming steps, including scheduling, checking in, interactions with the doctors, checking out, and follow-ups. AI can fix this problem through electronic health records to analyze lab results, paper forms, scans, and structured data, summarizing insights for doctors with the latest research and patient history. This helps practices reduce costs, boost earnings, and deliver faster, more personalized care.

07:32 Lois: Let's take a look at one more industry. How is manufacturing using AI?
Hemant: A factory that makes metal parts and other products uses both visual inspections and electronic means to monitor product quality. A part that fails to meet the requirements may be reworked or repurposed, or it may need to be scrapped. The factory seeks to maximize profits and throughput by shipping as much good material as possible, while minimizing waste by detecting and handling defects early. The way AI can help here is with the quality assurance process, which creates X-ray images. This data can be interpreted by computer vision, which can learn to identify cracks and other weak spots after being trained on a large data set. In addition, problematic or ambiguous data can be highlighted for human inspectors.

08:36 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025.

09:20 Nikita: Welcome back! AI can be used effectively to automate a variety of tasks to improve productivity, efficiency, and cost savings. But I'm sure AI has its constraints too, right? Can you talk about what happens if AI isn't able to echo human ethics?
Hemant: AI can fail due to lack of ethics. AI can spot patterns, not make moral calls. It doesn't feel guilt, understand context, or take responsibility. That is still up to us. Decisions are only as good as the data behind them. For example, health care AI underdiagnosing women because research data was mostly male. Artificial narrow intelligence tends to automate discrimination at scale. Recruiting AI downgraded resumes just because they had the word "women's" (for example, women's chess club). Who is responsible when AI fails? For example, if a self-driving car hits someone, we cannot blame the car. Then who owns the failure? The programmer? The CEO? Can we really trust corporations or governments to have programmed the use of AI "not to be evil" correctly? So, it's clear that AI needs oversight to function smoothly.

10:48 Lois: So, Hemant, how can we design AI in ways that respect and reflect human values?
Hemant: Think of ethics like a tree. It needs all parts working together. Roots represent intent. That is our values and principles. The trunk stands for safeguards, our systems, and structures. And the branches are the outcomes we aim for. If the roots are shallow, the tree falls. If the trunk is weak, damage seeps through. The health of roots and trunk shapes the strength of our ethical outcomes. Fairness means nothing without ethical intent behind it. For example, a bank promotes its loan algorithm as fair. But it uses zip codes in decision-making, effectively penalizing people based on race. That's not fairness. That's harm disguised as data. Inclusivity depends on the intent: sustainability. Inclusive design isn't just a checkbox. It needs a long-term commitment. For example, controllers for gamers with disabilities are only possible because of sustained R&D and intentional design choices. Without investment in inclusion, accessibility is left behind. Transparency depends on the safeguard: robustness. Transparency is only useful if the system is secure and resilient.
For example, a medical AI may be explainable, but if it is vulnerable to hacking, transparency won't matter. Accountability depends on the safeguards: privacy and traceability. You can't hold people accountable if there is no trail to follow. For example, after a fatal self-driving car crash, deleted system logs meant no one could be held responsible. Without auditability, accountability collapses. So remember, outcomes are what we see, but they rely on intent to guide priorities and safeguards to support execution. That's why humans must have a final say. AI has no grasp of ethics, but we do.

13:16 Nikita: So, what you're saying is ethical intent and robust AI safeguards need to go hand in hand if we are to truly leverage AI we can trust.
Hemant: When it comes to AI, preventing harm is a must. Take self-driving cars, for example. Keeping pedestrians safe is absolutely critical, which means the technology has to be rock solid and reliable. At the same time, fairness and inclusivity can't be overlooked. If an AI system used for hiring learns from biased past data, say, mostly male candidates being hired, it can end up repeating those biases, shutting out qualified candidates unfairly. Transparency and accountability go hand in hand. Imagine a loan rejection: if the AI's decision isn't clear or explainable, it becomes impossible for someone to challenge or understand why they were turned down. And of course, robustness supports fairness too. Loan approval systems need strong security to prevent attacks that could manipulate decisions and undermine trust. We must build AI that reflects human values and has safeguards. This makes sure that AI is fair, inclusive, transparent, and accountable.

14:44 Lois: Before we wrap, can you talk about why AI can fail? Let's continue with your analogy of the tree. Can you explain how AI failures occur and how we can address them?
Hemant: Root elements like "do not harm" and sustainability are fundamental to ethical AI development. When these roots fail, the consequences can be serious. For example, a clear failure of "do not harm" is AI-powered surveillance tools misused by authoritarian regimes. This happens because there were no ethical constraints guiding how the technology was deployed. The solution is clear: implement strong ethical use policies and conduct human rights impact assessments to prevent such misuse. On the sustainability front, training AI models can consume a massive amount of energy. This failure occurs because environmental costs are not considered. To fix this, organizations are adopting carbon-aware computing practices to minimize AI's environmental footprint. By addressing these root failures, we can ensure AI is developed and used responsibly, with respect for human rights and the planet. An example of a robustness failure can be a chatbot hallucinating nonexistent legal precedents used in court filings. This could be due to training on unverified internet data with no fact-checking layer. This can be fixed by grounding in authoritative databases. An example of a privacy failure can be an AI facial recognition database created without user consent, the reason being that no consent was taken for data collection. This can be fixed by adopting privacy-preserving techniques. An example of a fairness failure can be generated images showing CEOs as white men and nurses as women or minorities, the reason being training on imbalanced internet images reflecting societal stereotypes. The fix is to use a diverse set of images.
17:18 Lois: I think this would be incomplete if we don't talk about inclusivity, transparency, and accountability failures. How can they be addressed, Hemant?
Hemant: An example of an inclusivity failure can be a voice assistant not understanding accents, the reason being that training data lacked diversity. The fix is to use inclusive data. An example of a transparency and accountability failure can be teachers who could not challenge AI-generated performance scores due to opaque calculations, the reason being that no explainability tools were used. The fix: high-impact AI needs human review pathways and explainability built in.

18:04 Lois: Thank you, Hemant, for a fantastic conversation. We got some great insights into responsible and ethical AI.
Nikita: Thank you, Hemant! If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham….
Lois: And Lois Houston, signing off!

18:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Entangled Things
Episode 124: Vijoy Pandey on Quantum Networking & Cisco's Quantum Software Stack

Entangled Things

Sep 30, 2025 · 41:12


In Episode 124 of Entangled Things, Patrick is joined by Vijoy Pandey, Senior Vice President of Outshift by Cisco. Together, they explore the future of quantum networking, the power of entanglement, and how these breakthroughs will shape the next generation of technology. Vijoy also shares an exciting announcement: the launch of Cisco's Quantum Software Stack.

Want to dive deeper? Don't miss the Cisco Quantum Summit, happening September 30 and October 1: https://research.cisco.com/quantum-summit.

Vijoy Pandey is GM and Senior Vice President of Outshift by Cisco, leading the company's internal incubation engine that delivers what's next and new for Cisco. Outshift focuses on emerging technologies that target adjacent markets and personas, with current initiatives spanning AI-enabled infrastructure, quantum networking, and next-generation infrastructure solutions.

Outshift operates as a series of startup-like teams inside Cisco, rapidly validating which emerging technologies can become meaningful businesses for the company's future. Under Vijoy's leadership, these teams work across three key layers: agentic AI, next-gen infrastructure, and quantum networking. This model allows Outshift to move quickly and test multiple opportunities simultaneously while leveraging Cisco's enterprise strengths and established processes.

Vijoy oversees a broader strategic scope that includes several critical Cisco-wide initiatives. He also leads Cisco Research, driving foundational research across quantum networking, security, observability, and emerging technologies. He directs Cisco's Open Source initiatives and the Developer Network (DevNet), which leads API consistency and programmability across Cisco's portfolio while pioneering AI-native infrastructure tools. Additionally, he co-chairs Cisco's Responsible AI committee.

Vijoy holds a Ph.D. in Computer Science from the University of California, Davis, and is an inventor on over 80 patents in cloud computing, AI/ML, and distributed systems. Through his leadership of Outshift, Vijoy continues to guide Cisco's exploration of emerging technologies, ensuring the company can move quickly to capture opportunities in new markets before they fully mature.

Syntax - Tasty Web Development Treats
941: Is Responsible AI Possible? with Dr. Sarah Bird of Microsoft

Syntax - Tasty Web Development Treats

Sep 29, 2025 · 22:51


Scott heads to Microsoft's campus for the VS Code Insider Summit to sit down with Dr. Sarah Bird and explore what “Responsible AI” really means for developers. From protecting user privacy to keeping humans in the loop, they dig into how everyday coders can play a role in shaping AI's future.

Show Notes
00:00 Welcome to Syntax!
01:27 Brought to you by Sentry.io.
03:13 The path to machine learning.
04:44 How do you get to ‘Responsible AI'?
06:43 Is there such a thing as ‘Responsible AI'?
07:34 Does the average developer have a part to play?
09:12 How can AI tools protect inexperienced users?
11:55 Let's talk about user and company privacy.
13:57 Are local tools and services becoming more viable?
15:06 Are people right to be skeptical?
16:58 The software developer role is fundamentally changing.
17:43 Human in the loop.
19:37 The career path to Responsible AI.
21:21 Sick Picks.

Sick Picks
Sarah: Japanese pottery

Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads

Target: Cancer Podcast
Responsible AI Needs Global Regulation

Target: Cancer Podcast

Sep 29, 2025 · 3:35


Should AI in health records be considered a medical device? Emily Lewis, an AI thought leader, compares the U.S. FDA and U.K. NHS approaches to AI regulation in healthcare. She shares how global discrepancies affect responsible AI implementation and what leaders must do to stay compliant. Discover why local adaptation and ongoing education are critical.

Nightside With Dan Rea
Empowering Students in the Age of AI.

Nightside With Dan Rea

Sep 26, 2025 · 38:49 · Transcription Available


During this age of AI, we talk about Responsible AI for America's Youth, a national movement that aims to ensure every child in the United States has the opportunity to become a responsible and confident user of artificial intelligence (AI). This effort puts students and educators at the center of shaping how AI is brought into schools, elevating their voices in the national conversation. Jeff Riley, former Massachusetts Commissioner of Elementary and Secondary Education and now the Executive Director of Day of AI, checked in to discuss.

The sgENGAGE Podcast
Fair, Transparent, Inclusive: The Pillars of Responsible AI

The sgENGAGE Podcast

Sep 24, 2025 · 27:47


In this podcast, Carrie Cobb, chief data and AI officer at Blackbaud and one of DataIQ's 100 Most Influential People in Data, sits down for a powerful conversation on the foundational principles of responsible AI. As AI becomes increasingly embedded in the daily operations of social impact organizations, Carrie shares how fairness, transparency, and inclusiveness must guide every step of AI development and deployment. From inclusive data practices to human-centered design, this episode offers a roadmap for organizations seeking to build trust and drive impact through responsible innovation. This episode will inspire you to lead with values and build technology that truly serves people and communities.

Connect, Collaborate, Champion!
Responsible AI in Higher Ed and Beyond

Connect, Collaborate, Champion!

Sep 24, 2025 · 30:28


Artificial intelligence has long been part of our world, but the rapid rise of generative AI has brought new urgency to questions of how we use it and how we use it responsibly. In this episode of Degrees of Impact, host Michelle Apuzzio speaks with Dr. Jeffrey Allan, assistant professor and director of the Institute for Responsible Technology at Nazareth University. Together, they explore the Institute's work, the ethical dilemmas that come with AI-driven innovation, and what it means for both universities and businesses striving to harness AI productively.

Thank you for tuning in to this episode of Degrees of Impact, where we explore innovative ideas and the people behind them in higher education. To learn more about NACU and our programs, visit nacu.edu. Connect with us on LinkedIn: NACU. If you enjoyed this episode, don't forget to subscribe, rate, and share it with your network.

Purpose 360
Defining and Shaping JUST AI With Martin Whittaker

Purpose 360

Sep 23, 2025 · 32:01


Artificial intelligence has the power to reshape economies, societies, and our daily lives. But with its rapid rise comes an important question: how can we ensure AI is developed and applied ethically so that it serves humanity instead of harming it? Responsible use requires transparency, accountability, and inclusivity—but defining and implementing these is complex. JUST Capital, a nonprofit dedicated to advancing just business practices, is addressing this challenge by exploring what “just AI” looks like, while also giving both the public and companies a voice in shaping its future.

We invited Martin Whittaker, CEO of JUST Capital, to speak about how companies can responsibly navigate the opportunities and risks of AI. He highlighted the importance of aligning AI strategies with company values, building strong governance, and listening to stakeholders to guide ethical decision-making. Martin also shared insights from JUST Capital's new research, which reveals a gap between companies acknowledging AI and those taking meaningful steps, such as workforce training and transparency. He ultimately challenges business leaders to reflect on what it means to be a truly human company in an AI-driven world while assuming the responsibility that comes with this technology.

Listen for insights on:
• How AI layoffs may require new ethical standards and practices
• Why company culture determines success in AI adoption and use
• Lessons from early leaders like IBM and Boston Scientific
• The growing role of investors in shaping AI accountability

Resources + Links:
Martin Whittaker's LinkedIn
JUST Capital
The JUST Report: An Early Measure of JUST AI
2025 JUST 100

(00:00) - Welcome to Purpose 360
(00:13) - Martin Whittaker, JUST Capital, and AI
(02:40) - Who Is JUST Capital?
(03:33) - Describing Justness
(04:44) - Responsible AI
(08:25) - Early Measure of Just AI
(11:12) - Martin's AI Usage
(12:49) - AI Use Principles
(14:58) - AI Study
(17:04) - What Stood Out
(21:44) - Adding AI Methodology
(24:27) - Advice for Companies Slow to Adopt AI
(26:38) - Last Thoughts
(28:15) - Can AI Replace Humanity in Business?
(29:57) - Wrap Up

SRA Risk Intel
S3 | E23: How Community Financial Institutions Can Build a Responsible AI Approach

SRA Risk Intel

Sep 23, 2025 · 19:19


Artificial intelligence (AI) is no longer just a buzzword in financial services. From lending to fraud detection to customer service, AI is steadily finding its way into community banks and credit unions. But for leaders, boards, and compliance teams, one pressing question remains: how do we adopt AI responsibly?

In this episode of the Banking on Data podcast, host Ed Vincent sits down with Beth Nilles, Director of Implementation, who brings more than 30 years of banking leadership across lending, operations, and compliance. Beth offers practical guidance for financial institution leaders who may be exploring AI for the first time - or wrestling with how to scale responsibly without falling behind on regulatory expectations.

Follow us to stay in the know!

FUTURE FOSSILS
Holistic Technology for Growing a World in Love with Larry Muhlstein

FUTURE FOSSILS

Sep 22, 2025 · 134:58


Membership | Donations | Spotify | YouTube | Apple Podcasts

This week we hear from Larry Muhlstein, who worked on Responsible AI at Google and DeepMind before leaving to found the Holistic Technology Project. In Larry's words:

“Care is crafted from understanding, respect, and will. Once care is deep enough and in a generative reciprocal relationship, it gives rise to self-expanding love. My work focuses on creating such systems of care by constructing a holistic sociotechnical tree with roots of philosophical orientation, a trunk of theoretical structure, and technological leaves and fruit that offer nourishment and support to all parts of our world. I believe that we can grow love through technologies of togetherness that help us to understand, respect, and care for each other. I am committed to supporting the responsible development of such technologies so that we can move through these trying times towards a world where we are all well together.”

In this episode, Larry and I explore the “roots of philosophical orientation” and “trunk of theoretical structure” as he lays them out in his Technological Love knowledge garden, asking how technologies for reality, perspectives, and karma can help us grow a world in love. What is just enough abstraction? When is autonomy desirable and when is it a false god? What do property and selfhood look like in a future where the ground truths of our interbeing shape design and governance?

It's a long, deep conversation on fundamentals we need to reckon with if we are to live in futures we actually want. I hope you enjoy it as much as we did.

Our next dialogue is with Sam Arbesman, resident researcher at Lux Capital and author of The Magic of Code. We'll interrogate the distinctions between software and spellcraft, explore the unique blessings and challenges of a world defined by advanced computing, and probe the good, bad, and ugly of futures that move at the speed of thought…

✨ Show Links
• Hire me for speaking or consulting
• Explore the interactive knowledge garden grown from over 250 episodes
• Explore the Humans On The Loop dialogue and essay archives
• Browse the books we discuss on the show at Bookshop.org
• Dig into nine years of mind-expanding podcasts

✨ Additional Resources
“Growing A World In Love” — Larry Muhlstein at Hurry Up, We're Dreaming
“The Future Is Both True & False” — Michael Garfield on Medium
“Sacred Data” — Michael Garfield at Hurry Up, We're Dreaming
“The Right To Destroy” — Lior Strahilevitz at Chicago Unbound
“Decentralized Society: Finding Web3's Soul” — Puja Ohlhaver, E. Glen Weyl, and Vitalik Buterin at SSRN

✨ Mentions
Karl Schroeder's “Degrees of Freedom”
Joshua DiCaglio's Scale Theory
Geoffrey West's Scale
Hannah Arendt
Ken Wilber
Doug Rushkoff's Survival of the Richest
Manda Scott's Any Human Power
Torey Hayden
Chaim Gingold's Building SimCity
James P. Carse's Finite & Infinite Games
John C. Wright's The Golden Oecumene
Eckhart Tolle's The Power of Now

✨ Related Episodes

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe

Pondering AI
The Problem of Democracy with Henrik Skaug Sætra

Pondering AI

Sep 17, 2025 · 54:04


Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty, and what role AI plays in our collective human society.

Henrik and Kimberly discuss AI's impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google's experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.

Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at Oslo University. He is also the CEO of Pathwais.eu, connecting strategy, uncertainty, and action through scenario-based risk management.

Related Resources
Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL
A transcript of this episode is here.

Data Today with Dan Klein
Should we trust AI as a creative collaborator with Professor Anjana Susarla

Data Today with Dan Klein

Sep 16, 2025 · 27:26


By 2026, Europol estimates that more than 90% of online content could be AI-generated – from music and written work to imagery and beyond. But what does this shift mean for creativity, originality, and the role of human value in the process?

In this episode of Tech Tomorrow, David Elliman speaks with Anjana Susarla, Professor of Responsible AI at the Eli Broad College of Business, Michigan State University. Together, they explore whether AI can truly be trusted as a creative collaborator in both work and wider society.

Their conversation looks at how the traditional process of drafting and redrafting may change when AI enters the picture, and the rise of so-called ‘AI slop' – mass-produced, low-quality outputs – in areas such as writing, design, and programming. They also consider whether agentic AI might one day predict our preferences more accurately than we can ourselves, and reflect on the persistent hype and ‘magic' surrounding new AI tools, asking what this means for the future of creativity, business, and work.

Episode Highlights:
00:47 – What happens to the iterative creative process when AI is introduced?
02:59 – The polarising reactions to AI tools.
04:03 – Do we even like the creative outputs of AI?
05:16 – David's thoughts: Can we put a qualitative value on creativity?
06:31 – What was the AI-generated podcast based on Anjana's paper like?
10:28 – The homogenising effect of AI.
11:45 – Feedback loops and the halo effect.
13:04 – David's thoughts: AI prediction models.
16:28 – Human oversight in AI creativity.
19:08 – Maintaining the quality of AI-generated outputs.
21:05 – David's thoughts: What happens when AI tools enter the workplace?
23:08 – AI creativity, brain drain, and deskilling.
24:58 – Should we trust AI as a creative collaborator?

About Zühlke:
Zühlke is a global transformation partner, with engineering and innovation at its core. We help clients envision and build their businesses for the future – running smarter today while adapting for tomorrow's markets, customers, and communities. Our multidisciplinary teams specialise in technology strategy and business innovation, digital solutions and applications, and device and systems engineering. We thrive in complex, regulated sectors such as healthcare and finance, connecting strategy, implementation, and operations to help clients build more effective and resilient businesses.

Links:
Zühlke Website
Zühlke on LinkedIn
David Elliman on LinkedIn
Prof. Anjana Susarla on LinkedIn

The Digital Supply Chain podcast
Fixing Scope 3 with AI: Supplier Engagement, Data Accuracy, and Decarbonisation Levers

The Digital Supply Chain podcast

Sep 15, 2025 · 42:38 · Transcription Available


In this week's episode of the Sustainable Supply Chain Podcast, I sat down with fellow Irishman Paul Byrnes, CEO of Mavarick AI, to explore how manufacturers can use AI and data to tackle the notoriously difficult challenge of Scope 3 emissions.

Paul brings a unique perspective, rooted in both deep data science and hands-on manufacturing experience, and he didn't shy away from the hard truths: most companies still struggle with messy, unreliable data and limited supplier engagement. We unpack why primary data will soon become table stakes, why spend-based estimates can be 40% off the mark, and how engaging suppliers requires a simple but often overlooked question: what's in it for them?

We also discussed where AI genuinely moves the needle:
• Boosting confidence in data accuracy by identifying gaps and “contaminated” entries
• Providing personalised training to help suppliers meet sustainability requests
• Uncovering and prioritising decarbonisation levers with clear ROI

Paul shared real-world examples, from medical devices to automotive, that show how targeted projects, rather than trying to tackle all 15 Scope 3 categories at once, deliver the best results. We also touched on the environmental footprint of AI itself, energy, water, rare materials, and how responsible computing and smaller, purpose-built models can reduce the impact.

For leaders wrestling with emissions strategy, Paul's advice is simple: start by mapping your data landscape. Know where you're rich, where you're poor, and build from there. This is a practical, candid conversation about making sustainability and profitability work hand in hand, and why efficiency wins are so often sustainability wins.

Elevate your brand with the ‘Sustainable Supply Chain' podcast, the voice of supply chain sustainability. Last year, this podcast's episodes were downloaded over 113,000 times by senior supply chain executives around the world. Become a sponsor. Lead the conversation. Contact me for sponsorship opportunities and turn downloads into dialogues. Act today. Influence the future.

Podcast supporters
I'd like to sincerely thank this podcast's generous subscribers:
• Alicia Farag
• Kieran Ognev
And remember you too can become a Sustainable Supply Chain+ subscriber - it is really easy and hugely important as it will enable me to continue to create more excellent episodes like this one and give you access to the full back catalog of over 460 episodes.

Podcast Sponsorship Opportunities:
If you/your organisation is interested in sponsoring this podcast - I have several options available. Let's talk!

Finally
If you have any comments/suggestions or questions for the podcast - feel free to just send me a direct message on LinkedIn, or send me a text message using this link. If you liked this show, please don't forget to rate and/or review it. It makes a big difference to help new people discover it. Thanks for listening.

Becker’s Healthcare Podcast
Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI)

Becker’s Healthcare Podcast

Sep 14, 2025 · 21:08


Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI), discusses the consequences of AI misuse and the importance of building trust in clinical applications of AI. She highlights the need for human-centered solutions, emphasizing ethics in healthcare, and discusses the evolution of the healthcare industry.

RunAs Radio
Training for AI with Stephanie Donahue

RunAs Radio

Sep 10, 2025 · 40:34


How do you get your organization trained up to use AI tools? Richard talks to Stephanie Donahue about her work implementing AI tools at Avanade and with Avanade's customers. Stephanie discusses how many workers are bringing their own AI tools, such as ChatGPT, to work and the risks this represents to the organization. Having an approved set of tools helps people work in the right direction, but they will still need some education. The challenge lies in the rapidly shifting landscape and the lack of certifications. However, you'll have some individuals eager to utilize these tools, often on the younger side, and they can help build your practice. The opportunities are tremendous!

Links
ChatGPT Enterprise
Learning M365 Copilot
Secure and Govern Microsoft 365 Copilot
Microsoft Purview
Responsible AI at Microsoft
Microsoft Viva Engage

Recorded July 18, 2025

Alter Everything
193: Women, Data Science, and Building Inclusive AI

Alter Everything

Sep 10, 2025 · 25:54


Join us as we sit down with Christina Stathopoulos, founder of Dare to Data and former Google and Waze data strategist, to discuss the unique challenges and opportunities for women in data science and AI. In this episode, you'll learn how data bias and AI algorithms can impact women and minority groups, why diversity in tech teams is crucial, and how inclusive design can lead to better, fairer technology. Christina shares her personal journey as a woman in data, offers actionable advice for overcoming imposter syndrome, and highlights the importance of education and allyship in building a more inclusive future for data and AI.

Panelists:
Christina Stathopoulos, Founder of Dare to Data - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
Dare to Data
Diversity at Alteryx
Invisible Women
Unmasking AI

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here! This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music.

Edtech Insiders
How EdTech Leaders Earn Trust Through Responsible AI and Data-Privacy Best Practices

Edtech Insiders

Sep 10, 2025 · 61:20 · Transcription Available


In this special episode, we speak with Daphne Li, CEO of Common Sense Privacy, alongside leaders from Prodigy Education, AI for Equity, MagicSchool AI, and ClassDojo—recipients of the Privacy Seal. Together, we explore how the edtech sector is tackling one of its biggest challenges: earning trust through responsible AI and data privacy practices.

Becker’s Women’s Leadership
Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI)

Becker’s Women’s Leadership

Sep 10, 2025 · 21:08


Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI), discusses the consequences of AI misuse and the importance of building trust in clinical applications of AI. She highlights the need for human-centered solutions, emphasizing ethics in healthcare, and discusses the evolution of the healthcare industry.

Data Transforming Business
Data Experts Question: Is Data Infrastructure Ready for Responsible AI?

Data Transforming Business

Sep 10, 2025 · 36:39


Welcome back to Meeting of the Minds, a special podcast episode series by EM360Tech, where we talk about the future of tech.

In this Big Data special episode of Meeting of the Minds, our expert panel – Ravit Jain, podcast host; Christina Stathopoulos of Dare to Data, a data and AI evangelist; Wayne Eckerson, data strategy consultant and president of the Eckerson Group; and Kevin Petrie, VP of Research at BARC – come together again to discuss the key data and AI trends, particularly focusing on data ethics. They discuss ethical issues related to using AI, the need for data governance and guidelines, and the essential role of data quality in AI success. The speakers also look at how organisations can measure the value of AI through different KPIs, stressing the need for a balance between technical achievements and business results.

Our data experts examine the changing role of AI across various sectors, with a focus on success metrics, the effects on productivity and employee stress, changes in education, and the possible positive and negative impacts of AI in everyday life. They highlight the need to balance productivity with quality and consider the ethics of autonomous AI systems. In the previous episode, new challenges and opportunities in data governance, regulatory frameworks, and the AI workforce were discussed, along with the important balance between innovation and ethical responsibility and how companies are handling these issues.

Tune in to get new understandings about the future of data and AI and how your enterprise can adapt to the upcoming changes and challenges. Hear how leaders in the field are preparing for a future that is already here.

Also watch: Meeting of the Minds: State Of Cybersecurity in 2025

Takeaways
• Generative AI is creating a supply shock in cognitive power.
• Companies are eager for data literacy and AI training.
• Data quality remains a critical issue for AI success.
• Regulatory frameworks like GDPR are shaping AI governance.
• The US prioritises innovation, sometimes at the expense of regulation.
• Generative AI introduces new risks that need to be managed.
• Data quality issues are often the root of implementation failures.
• AI's impact on jobs is leading to concerns about workforce automation.
• Organisations must adapt to the probabilistic nature of generative AI.
• The conversation around data quality is ongoing and evolving.
• AI literacy and data literacy are crucial for workforce success.
• Executives are more concerned about retraining than layoffs.
• Younger workers may struggle to evaluate AI-generated answers.
• Incremental changes in productivity are expected with AI.
• Job displacement may not be immediate, but could create future gaps.
• Human empathy and communication skills remain essential in many professions.
• AI will augment, not replace, skilled software developers.
• Global cooperation is needed to navigate...

Becker’s Healthcare Digital Health + Health IT
Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI)

Becker’s Healthcare Digital Health + Health IT

Sep 10, 2025 · 21:08


Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI), discusses the consequences of AI misuse and the importance of building trust in clinical applications of AI. She highlights the need for human-centered solutions, emphasizing ethics in healthcare, and discusses the evolution of the healthcare industry.

AI for Kids
Can Kids Really Use AI Safely? (Middle School+)

AI for Kids

Sep 9, 2025 · 40:32 · Transcription Available


Diya Wynn, Responsible AI Lead at Amazon Web Services, takes us on a remarkable journey from her childhood in the South Bronx to becoming a technology leader championing fairness in artificial intelligence. Her story begins with a pivotal moment at age eight when, after receiving a basic computer as an academic achievement award, she declared she wanted to be a computer engineer—a path that would shape her entire professional life.

What makes Diya's perspective so valuable is how she demystifies AI for families. Rather than presenting artificial intelligence as some futuristic concept, she helps us recognize how it's already woven into our daily lives through search engines, streaming recommendations, and customer service interactions. This familiarity makes AI more approachable for both parents and children navigating today's digital landscape.

For children curious about future careers, Diya offers reassuring guidance. As AI continues changing the job landscape, she emphasizes developing timeless human capabilities—critical thinking, problem-solving, emotional intelligence, and effective communication—that will remain valuable regardless of technological evolution. Her message inspires young listeners to approach technology with curiosity rather than fear.

Resources Mentioned in the Episode
• Amber invited kids (through their parents) to share stories or be guests. Email: contact@aidigitales.com
• Diya described her role leading Responsible AI initiatives at Amazon Web Services (AWS). AWS Responsible AI page: https://aws.amazon.com/ai/responsible-ai/
• Code.org – Free coding lessons and AI-related activities. https://code.org
• PartyRock by AWS – A fun, no-code way to create generative AI apps. https://partyrock.aws
• Google AI courses for beginners (referenced as free learning resources). https://ai.google/education/
• Microsoft Learn (free coding & AI training modules). https://learn.microsoft.com/training/
• Data Science Camp (DMV area): https://datasciencecamp.org

Support the show
Help us become the #1 podcast for AI for Kids. Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Gift or get our books on Amazon or Free AI Worksheets

Listen, rate, and subscribe! Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform: Apple Podcasts, Amazon Music, Spotify, YouTube, or other. Like our content? Subscribe, or feel free to donate to our Patreon here: patreon.com/AiDigiTales...

Data Hackers
Algorithmic Biases and Responsible AI - Data Hackers Podcast #113

Data Hackers

Sep 5, 2025 · 62:53


Have you ever stopped to think about which biases your algorithm may carry and how that impacts your analyses?

In this episode, we talk with Andressa Freires, founder of diversiData and Data Science Specialist, about how the perspectives of AI and model developers can seep into the content created by these technologies. We also discuss how a lack of diversity can impact tools that are widely used around the world, and the consequences of that dynamic.

Remember, you can find all the Data Hackers community podcasts on Spotify, iTunes, Google Podcast, Castbox, and many other platforms.

Our Data Hackers panel:
Paulo Vasconcellos — Co-founder of Data Hackers and Principal Data Scientist at Hotmart
Monique Femme — Head of Community Management at Data Hackers

References:
https://mitsloanreview.com.br/quebrando-correntes-e-liderando-com-proposito/
https://linktr.ee/diversidata
https://www.amazon.com/Unmasking-AI-Mission-Protect-Machines/dp/0593241835
https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815

The AI Fundamentalists
AI in practice: LLMs, psychology research, and mental health

The AI Fundamentalists

Play Episode Listen Later Sep 4, 2025 42:28 Transcription Available


We're excited to have Adi Ganesan, a PhD researcher at Stony Brook University, the University of Pennsylvania, and Vanderbilt, on the show. We'll talk about how large language models (LLMs) are being tested and used in psychology, citing examples from mental health research. Fun fact: Adi was Sid's research partner during his Ph.D. program. Discussion highlights: Language models struggle with certain aspects of therapy, including being over-eager to solve problems rather than building understanding. Current models are poor at detecting psychomotor symptoms from text alone but are oversensitive to suicidality markers. Cognitive reframing assistance represents a promising application where LLMs can help identify thought traps. Proper evaluation frameworks must include privacy, security, effectiveness, and appropriate engagement levels. Theory of mind remains a significant challenge for LLMs in therapeutic contexts; example: the Sally-Anne test. Responsible implementation requires staged evaluation before patient-facing deployment. Resources: To learn more about Adi's research and the topics discussed in this episode, check out the following: Therapist Behaviors paper: [2401.00820] A Computational Framework for Behavioral Assessment of LLM Therapists. Cognitive reframing paper: Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction - ACL Anthology. Faux Pas paper: Testing theory of mind in large language models and humans | Nature Human Behaviour. READI: Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework. Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation | npj Mental Health Research. GPT-4's Schema of Depression: Explaining GPT-4's Schema of Depression Using Machine Behavior Analysis. Adi's profile: Adithya V Ganesan - Google Scholar. What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: LinkedIn - Episode summaries, shares of cited articles, and more. YouTube - Was it something that we said? Good. Share your favorite quotes. Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

CISO-Security Vendor Relationship Podcast
We're All for a Responsible AI Rollout as Long as It Goes as Fast as Possible

CISO-Security Vendor Relationship Podcast

Play Episode Listen Later Sep 2, 2025 40:00


All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series, and Mike Johnson, CISO, Rivian. Joining them is Jennifer Swann, CISO, Bloomberg Industry Group. In this episode: vulnerability management vs. configuration control; open source security and supply chain trust; building security leadership presence; AI governance and enterprise risk. Huge thanks to our sponsor, Vanta. Vanta's Trust Management Platform automates key areas of your GRC program—including compliance, internal and third-party risk, and customer trust—and streamlines the way you gather and manage information. A recent IDC analysis found that compliance teams using Vanta are 129% more productive. Get started today at Vanta.com/CISO.

Artificial Intelligence in Industry with Daniel Faggella
How Financial Institutions Can Prepare for the Future of Fraud with Responsible AI Deployments - with JoAnn Stonier of Mastercard

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Sep 2, 2025 29:22


As AI capabilities evolve, financial institutions face pressure to balance innovation with security, trust, and regulatory compliance. From fraud prevention to customer experience, deterministic AI applications continue to form the backbone of financial services—even as new technologies like generative and agentic AI emerge. In this episode, JoAnn Stonier, Data and AI Fellow at Mastercard, joins us to share how Mastercard is navigating these dynamics. She explains how AI-driven analytics reduce false positives in fraud detection, why “agent-ish” AI marks an important transition toward more autonomous systems, and how responsible governance ensures privacy and security remain at the forefront. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

Design As
Design As S3: Trailer

Design As

Play Episode Listen Later Sep 2, 2025 3:17


Design As is back! Starting September 9th, Design As will release episodes weekly, with a range of guests joining host Lee Moreau to speculate on the future of design from different perspectives. This season you'll hear discussions about Responsible AI from technologists, educators, designers, and industry leaders, plus bonus episodes recorded at the Shapeshift Summit hosted by the Institute of Design in May 2025 in Chicago. Follow Design Observer on Instagram to keep up and see even more Design As content. For more information about this season and a full transcription of the trailer, visit us on our website.

Narratives of Purpose
On Harnessing Information and Technology for Health Equity - HIMSS Europe Series with Hal Wolf

Narratives of Purpose

Play Episode Listen Later Aug 21, 2025 14:26 Transcription Available


Advancing Digital Transformation To Pioneer Global Health Solutions. In this first episode of Narratives of Purpose's special series from the 2025 HIMSS European Health Conference, host Claire Murigande speaks with Hal Wolf, the President and CEO of HIMSS. HIMSS (Healthcare Information and Management Systems Society) is a non-profit organization with a strong commitment to advancing global health through technology, supporting the transformation of the health ecosystem and fostering health equity. In this interview, Hal emphasizes the necessity of a significant leap forward in our approach to healthcare, particularly in the context of artificial intelligence and its transformative potential to propel us towards more effective care delivery. Be sure to visit our podcast website for the full episode transcript. LINKS: Article covering HIMSS Europe 2025: AI capacity building in healthcare. LinkedIn posts covering HIMSS Europe 2025: The future of the healthcare workforce | Responsible AI in health | Cybersecurity | The European Health Data Space | Women in Health IT | Women's Health in focus. Learn more about HIMSS activities and events at himss.org. Follow HIMSS on their social media channels: LinkedIn | Facebook | Instagram

Pondering AI
Generating Safety Not Abuse with Dr. Rebecca Portnoff

Pondering AI

Play Episode Listen Later Aug 20, 2025 46:35


Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.  Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn's Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.  Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn's seminal Safety by Design paper, bookmark the Research Center to stay updated and support Thorn's critical work by donating here. Related Resources Thorn's Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/  Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/  Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report  A transcript of this episode is here.

The Joe Reis Show
The Insane Dangers of AI Influence Ops and More w/ Disesdi Susanna Cox

The Joe Reis Show

Play Episode Listen Later Aug 20, 2025 55:12


What are the hidden dangers lurking beneath the surface of vibe-coded apps and hyped-up CEO promises? And what are Influence Ops? I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky. We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term. Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized, globally scaled Influence Ops to manipulate public opinion and democracy itself. Find Disesdi Susanna Cox: Substack: https://disesdi.substack.com/ Socials (LinkedIn, X, etc.): @Disesdi KEY MOMENTS: 00:26 - Who is Disesdi Susanna Cox? 03:52 - What are Red, Blue, and Purple Teams in Security? 07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash 12:32 - How GenAI Security is Different (and Worse) than Classical ML 14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the Tea Dating App 18:34 - The Unethical Problem with "Vibe Coding" 24:32 - "Vibe Companies": The Gaslighting from CEOs About AI 30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing 33:13 - Deconstructing the "Woke AI" Panic 44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops 52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn

ServiceNow Podcasts
Governing AI at Scale – Welcome to AI Control Tower

ServiceNow Podcasts

Play Episode Listen Later Aug 20, 2025 24:15


Governing AI – Welcome to AI Control Tower Episode Overview Governance accelerates AI success. Control enables innovation. In this premiere episode, we explore how ServiceNow governs AI at scale through AI Control Tower, a groundbreaking solution that transforms how enterprises manage their artificial intelligence landscape. Host Bobby Brill brings together three visionary leaders. They reveal the hidden AI sprawl within enterprises. They expose the risks. And they illuminate a path forward - one where governance and innovation dance in perfect harmony, where control mechanisms become the very foundation of sustainable AI transformation. The Conversation AI is everywhere. Sometimes visible, often hidden, always expanding. Our experts navigate the complex terrain of enterprise AI governance, from shadow AI implementations that operate beyond IT oversight to the regulatory frameworks demanding immediate attention. Ravi Krishnamurthy, Vice President of Product Management for AI Platform and Responsible AI, unveils AI Control Tower—a solution that brings clarity to chaos. Sampada Chavan, Senior Principal Product Manager, demonstrates how regulatory compliance becomes achievable through systematic governance. Peter Weigt, Sr. Director of Inbound Product Management for Responsible AI, challenges conventional thinking about the innovation paradox and reveals governance as the strategic enabler organizations desperately need. Discovery meets responsibility. Compliance meets creativity. Control meets capability. Expert Panel Ravi Krishnamurthy | Vice President of Product Management, AI Platform and Responsible AI Sampada Chavan | Senior Principal Product Manager, AI Control Tower Peter Weigt | Sr. Director, Inbound Product Management, Responsible AI Host: Bobby Brill 00:00 Introduction to Governing AI at Scale 00:31 Meet the Experts 00:54 Hidden AI in Your Enterprise 02:56 Discovery and Responsible AI Practices 11:28 The Innovation Paradox 14:59 AI Control Tower and Governance 19:42 Conclusion and Final Thoughts Links to AICT deep-dive blogs: https://www.servicenow.com/community/now-assist-articles/part-0-our-journey-to-the-ai-control-tower-nbsp/ta-p/3280295 https://www.servicenow.com/community/now-assist-articles/part-1-the-gathering-storm-ai-s-rising-tension-nbsp/ta-p/3280309 https://www.servicenow.com/community/now-assist-articles/part-2-the-architecture-of-control-building-the-ai-control-tower/ta-p/3344257 See omnystudio.com/listener for privacy information.

Together Digital Power Lounge
Empower Your Team with Responsible AI

Together Digital Power Lounge

Play Episode Listen Later Aug 18, 2025 60:08 Transcription Available


Welcome back to The Power Lounge, a space dedicated to meaningful conversations with industry leaders. In today's episode, "Empower Your Team with Responsible AI," host Amy Vaughan, Together Digital's Chief Empowerment Officer, explores a critical challenge for digital teams: adopting AI responsibly without compromising ethical standards. Joining Amy is Nikki Ferrell, Associate Director of Online Enrollment and Marketing Communications at Miami University. Nikki has been instrumental in launching an AI steering committee to manage the swift integration of generative AI in higher education. Together, they examine the potential risks of unmanaged AI use, the importance of establishing clear policies, and how continuous learning and experimentation can cultivate ethical and innovative teams. Whether you're a team leader, a business owner, or simply interested in the complexities of AI, this episode offers a practical framework for implementing technology that prioritizes people, purpose, and ethics. Gain actionable insights and hear real-world experiences right here on The Power Lounge. Chapters: 00:00 - Introduction 01:24 - AI's Impact: Unprepared Marketing Practices 05:08 - Creating AI Steering Committees 09:32 - Normalize Open AI Use at Work 14:42 - Adopting AI for Organizational Success 16:30 - Take Initiative to Lead 21:00 - Cautious Marketing on Mother's Day 25:25 - AI in Education: Gen Z & Alpha Hesitations 29:19 - "AI as Amplifying Tool" 30:55 - AI's Impact on Cognitive Skills 36:31 - AI Augments, Not Replaces, Workforce 38:30 - "Embracing Tech Amidst Red Tape" 41:45 - "Responsible AI Adoption Insights" 44:19 - AI Use Case Library Development 48:03 - Embracing AI for Strategic Future 51:01 - Exploring AI for Everyday Tasks 54:58 - AI-Assisted Strategy Development 58:51 - Subscribe for Updates & Community 59:45 - Outro Quotes: "Empowerment begins when we stop being afraid of new technology and start building community around it." - Amy Vaughan "You don't need a title to lead the way with AI—start small, learn together, and let your curiosity spark real change." - Nikki Ferrell Key Takeaways: Start Small, Stay Grounded in Research; Policies Aren't Optional—They Empower; Openness Over Going Underground; You Don't Need a Title to Lead; Align with Mission and Values; Build a Culture of Experimentation; Transparency Builds Trust (and Avoids Backlash); AI Augments, Not Replaces; Meet People Where They Are; The Future is Collaborative. Connect with Nikki Ferrell: LinkedIn: https://www.linkedin.com/in/nferrell/ Website: https://miamioh.edu/ Connect with the host Amy Vaughan: LinkedIn: http://linkedin.com/in/amypvaughan Podcast: Power Lounge Podcast - Together Digital. Learn more about Together Digital and consider joining the movement by visiting Home - Together Digital. Support the show

The Tech Blog Writer Podcast
3387: How Tableau's Srinivas Chippagiri Thinks About Responsible AI and Cloud Systems

The Tech Blog Writer Podcast

Play Episode Listen Later Aug 17, 2025 31:41


What does it take to build intelligent systems that are not only AI-powered but also secure, scalable, and grounded in real-world needs? In this episode of Tech Talks Daily, I speak with Srinivas Chippagiri, a senior technology leader and author of Building Intelligent Systems with AI and Cloud Technologies. With over a decade of experience spanning Wipro, GE Healthcare, Siemens, and now Tableau at Salesforce, Srinivas offers a practical view into how AI and cloud infrastructure are evolving together. We explore how AI is changing cloud-native development through predictive maintenance, automated DevOps pipelines, and developer co-pilots. But this is not just about technology. Srinivas highlights why responsible AI needs to be part of every system design, sharing examples from his own research into anomaly detection, fuzzy logic, and explainable models that support trust in regulated industries. The conversation also covers the rise of hybrid and edge computing, the real challenges of data fragmentation and compute costs, and how teams are adapting with new skills like prompt engineering and model observability. Srinivas gives a thoughtful view on what ethical AI deployment looks like in practice, from bias audits to AI governance boards. For those looking to break into this space, his advice is refreshingly clear. Start with small, end-to-end projects. Learn by doing. Contribute to open-source communities. And stay curious. Whether you're scaling AI systems, building a career in cloud tech, or just trying to keep pace with fast-moving trends, this episode offers a grounded and insightful guide to where things are heading next. Srinivas's book is available on Amazon under Building Intelligent Systems with AI and Cloud Technologies, and you can connect with him on LinkedIn to continue the conversation.

This Week in Google (MP3)
IM 832: Surrounded by Zuck - Inside Google Gemini

This Week in Google (MP3)

Play Episode Listen Later Aug 14, 2025 196:51 Transcription Available


GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it. Generative AI and the Future of the Digital Commons David Sacks on X: "A BEST CASE SCENARIO FOR AI? The Doomer narratives were wrong. Predicated on a "rapid take-off" to AGI, they predicted that the leading AI model would use its intelligence to self-improve, leaving others in the dust, and quickly achieving a godlike superintelligence. Instead, we" / X A taxonomy of hallucinations (see table 2) Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' for Enterprise Medicare will test using AI to help decide whether patients get coverage — which could delay or deny care, critics warn Podcasting's 'Serial' Era Ends as Video Takes Over Sara Kehaulani Goo named President of the Creator Network What Happened When Mark Zuckerberg Moved In Next Door Google says it's working on a fix for Gemini's self-loathing 'I am a failure' comments Two-mile suspension bridge Will Giz allow the Skee-ballers to make this their next outing? Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Tulsee Doshi Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: spaceship.com/twit Melissa.com/twit

The FIT4PRIVACY Podcast - For those who care about privacy
Building Secure AI Systems with Santosh Kaveti and Punit Bhatia in the FIT4PRIVACY Podcast E145 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Aug 14, 2025 26:44


As AI becomes deeply embedded in every industry, building AI systems that are secure, responsible, and privacy-centric is more crucial than ever. But where do you begin? At the strategy level? Design? Or implementation? How do organizations tackle the challenges of AI risks, data governance, and compliance while keeping pace with innovation? Join us for an insightful conversation with Punit Bhatia and Santosh Kaveti, CEO of ProArch, as we explore the evolving landscape of responsible AI, key foundational steps, and practical approaches to secure AI deployment. If you're looking to understand how to build AI systems that are not only innovative but also secure and trustworthy, this episode is for you! KEY CONVERSATION 00:01:58 Responsible AI 00:04:30 AI Strategy 00:11:43 Role of Standards and Approach 00:15:35 Good Practices of Data Governance 00:19:55 AI Talent 00:23:10 ProArch's Role with Customers 00:25:00 Contact Information for Santosh ABOUT GUEST Santosh Kaveti is CEO & Founder at ProArch. With over 18 years of experience as a technologist, entrepreneur, investor, and advisor, Santosh Kaveti is the CEO and Founder of ProArch, a purpose-driven enterprise that accelerates value and increases resilience for its clients with consulting and technology services, enabled by cloud, guided by data, fueled by apps, and secured by design. Santosh's vision and leadership have propelled ProArch to become a dominant force in key industry verticals, such as Energy, Healthcare & Life Sciences, and Manufacturing, where he leverages his expertise in manufacturing process improvement, mentoring, and consulting. Topics covered include: Operationalizing AI: From Strategy to Execution; Navigating AI Risks: Ensuring Security and Compliance; Prioritizing AI Initiatives: Aligning with Business Goals; Attracting and Retaining Top AI Talent; Integrating AI into Core Business Functions; The Data Foundation: Governance, Quality, and Culture in AI. Santosh's journey is marked by resilience, ambition, and self-awareness, as he has learned from his successes and failures, and continuously evolved his skills and perspective. He has traveled across 23 countries, gaining insights into the global diversity and interconnectedness of human experiences. He is passionate about blending technology with a human-centric approach and making a meaningful societal impact through his support for initiatives that uplift underprivileged children, assist disadvantaged families, and promote social awareness. Santosh's ethos extends to his investments in and mentorship of promising startups, as well as his role as the Chairman of the Board at Enhops and iV4, two ProArch companies. ABOUT HOST Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books "Be Ready for GDPR", which was rated the best GDPR book, "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured amongst the top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life.
He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe. RESOURCES Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/santoshkaveti/, https://www.proarch.com/ Podcast: https://www.fit4privacy.com/podcast Blog: https://www.fit4privacy.com/blog YouTube: http://youtube.com/fit4privacy

Play Big Faster Podcast
#206: Custom AI vs ChatGPT: Local Systems That Keep Data Secure | Sam Sammane

Play Big Faster Podcast

Play Episode Listen Later Aug 14, 2025 43:41


AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The TheoSym founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.

What's Wrong With: The Podcast
Transforming AI into a Force for Good ft. Alexandra Car

What's Wrong With: The Podcast

Play Episode Listen Later Aug 13, 2025 46:52


Follow Alexandra on LinkedIn and X! Follow us on Instagram and on LinkedIn! Created by SOUR, this podcast is part of the studio's "Future of X,Y,Z" research, where the collaborative discussion outcomes serve as the basis for the futuristic concepts built in line with the studio's mission of solving urban, social, and environmental problems through intelligent designs. Make sure to visit our website and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review – or you can simply tell your friends about the show! Don't forget to join us next week for another episode. Thank you for listening!

The Tea on Cybersecurity
Key Lessons from Season 4 of The Tea on Cybersecurity

The Tea on Cybersecurity

Play Episode Listen Later Aug 12, 2025 6:08


If there's one key takeaway from Season 4 of The Tea on Cybersecurity, it's that cybersecurity is a shared responsibility. With this in mind, host Jara Rowe wraps up the season by sharing valuable insights that everyone can use. She reflects on the most impactful lessons about compliance, AI, and penetration testing. Key takeaways:The importance of vCISOs and cyber engineersHow to approach penetration testing and PTaaSWhy transparency and training are essential for AI safetyEpisode highlights:(00:00) Today's topic: Key insights from this season (01:16) The role of vCISOs and cyber engineers(02:47) Responsible AI use(03:51) Penetration testing and PTaaS for small teamsConnect with the host:Jara Rowe's LinkedIn - @jararoweConnect with Trava:Website - www.travasecurity.comBlog - www.travasecurity.com/learn-with-trava/blogLinkedIn - @travasecurityYouTube - @travasecurity

Pondering AI
Inclusive Innovation with Hiwot Tesfaye

Pondering AI

Play Episode Listen Later Aug 6, 2025 50:48


Hiwot Tesfaye disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course. Hiwot and Kimberly discuss the two-camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; and skeptical delight and making the case for multilingual, multicultural AI. Hiwot Tesfaye is a Technical Advisor in Microsoft's Office of Responsible AI and a Loomis Council Member at the Stimson Center, where she helped launch the Global Perspectives: Responsible AI Fellowship. Related Resources: #35 Navigating AI: Ethical Challenges and Opportunities, a conversation with Hiwot Tesfaye. A transcript of this episode is here.

House of #EdTech
How to Build a Responsible AI Policy for Your Classroom - HoET262

House of #EdTech

Play Episode Listen Later Aug 3, 2025 31:16


In Episode 262 of the House of #EdTech, Chris Nesi explores the timely and necessary topic of creating a responsible AI policy for your classroom. With artificial intelligence tools becoming more integrated into educational spaces, the episode breaks down why teachers need to set clear expectations and how they can do it with transparency, collaboration, and flexibility. Chris offers a five-part framework that educators can use to guide students toward ethical and effective AI use. Before the featured content, Chris reflects on a growing internal debate: is it time to step back from tech-heavy classrooms and return to more analog methods? He also shares three edtech recommendations, including tools for generating copyright-free images, discovering daily AI tool capabilities, and randomizing seating charts for better classroom dynamics. Topics Discussed: EdTech Thought: Chris debates the "Tech or No Tech" question in modern classrooms. EdTech Recommendations: https://nomorecopyright.com/ - Upload an image to transform it into a unique, distinct version designed solely for inspiration and creative exploration. https://www.shufflebuddy.com/ - Never worry about seating charts again. Foster a strong classroom community by frequently shuffling your seating charts while respecting your students' individual needs. https://whataicandotoday.com/ - We've analysed 16362 AI Tools and identified their capabilities with OpenAI GPT-4.1, to bring you a free list of 83054 tasks of what AI can do today. Why classrooms need a responsible AI policy. A five-part framework to build your AI classroom policy: Define What AI Is (and Isn't); Clarify When and How AI Can Be Used; Promote Transparency and Attribution; Include Privacy and Tool Approval Guidelines; Make It Collaborative and Flexible. The importance of modeling digital citizenship and AI literacy. Free editable AI policy template by Chris for grades K–12. Mentions: Mike Brilla – The Inspired Teacher podcast. Jake Miller – Educational Duct Tape podcast // Educational Duct Tape Book