POPULARITY
We're in Los Angeles at Adobe MAX 2025 to break down the announcements that will change how creators work, including Adobe's game-changing partnership with YouTube. We're joined by a legendary lineup of guests to discuss the future of creativity. Mark Rober reveals his $55 million secret project for the first time ever, Cleo Abram (Huge If True) shares her POV on editorial freedom and advancements in tech, and Adobe's GM of Creators, Mike Polner, explains the new AI tools that will save you hours of work.

What you'll learn:
-- Mark Rober's strategy for building a 100-person company.
-- The AI audio tool that creates studio-quality sound anywhere.
-- How to edit YouTube Shorts inside the new Premiere Mobile app.
-- Why creative freedom is more important than ever for creators.

If you want to stay ahead in the creator economy, subscribe and hit the bell so you don't miss our next episode!

00:00 Live From Adobe MAX!
01:01 Adobe's ChatGPT Integration
01:45 The New Adobe x YouTube Partnership
04:09 YouTube's New TV Experience
07:48 Welcome Mark Rober!
08:40 Is AI Cheating for Creators?
12:25 Building the Mark Rober Business
16:51 Mark Rober's $55M Secret Project
23:53 Welcome Cleo Abram!
26:12 Why I Left Vox
31:20 AI Tools Lower The Barrier
37:24 Welcome Adobe's Mike Polner!
39:31 Adobe's Top 3 New Tools
44:27 What is "Responsible AI"?
52:06 Upload: Steven Bartlett's Big Raise

Creator Upload is your creator economy podcast, hosted by Lauren Schnipper and Joshua Cohen.
Follow Lauren: https://www.linkedin.com/in/schnipper/
Follow Josh: https://www.linkedin.com/in/joshuajcohen/
Original music by London Bridge: https://www.instagram.com/londonbridgemusic/
Edited and produced by Adam Conner: https://www.linkedin.com/in/adamonbrand
Dr. Julia Stoyanovich is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, Director of the Center for Responsible AI, and member of the Visualization and Data Analytics Research Center at New York University. She is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) and a Senior Member of the Association for Computing Machinery (ACM). Julia's goal is to make “Responsible AI” synonymous with “AI”. She works towards this goal by engaging in academic research, education and technology policy, and by speaking about the benefits and harms of AI to practitioners and members of the public. Julia's research interests include AI ethics and legal compliance, and data management and AI systems. Julia is engaged in technology policy and regulation in the US and internationally, having served on the New York City Automated Decision Systems Task Force, by mayoral appointment, among other roles. She received her M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.

Links:
https://engineering.nyu.edu/faculty/julia-stoyanovich
https://airesponsibly.net/nyaiexchange_2025/

Hosted on Acast. See acast.com/privacy for more information.
Responsible AI adoption is as much about governance and evaluation as technology. Lightweight, context-specific frameworks make it possible for even resource-limited health systems to implement AI safely. Discover how generative AI, paired with real-world evidence, can help fill gaps in traditional research, increase health equity and help clinicians make more informed decisions.
Ireland's foremost digital marketing event, 3XE Digital, returns this November 26th with a bold new focus on the transformative power of Artificial Intelligence. 3XE AI will take place on Wednesday, November 26th at The Alex Hotel, Dublin, bringing together hundreds of marketers, social media professionals and business leaders to explore how AI is reshaping marketing strategy, creativity and performance. Delegates from top Irish brands including Chadwicks, Kepak, Chartered Accountants Ireland, Sage, The Travel Department, Finlay Motor Group, Hardware Association, and many more have already booked to attend this dynamic one-day conference designed to inspire, educate and empower. The event will be co-chaired by Anthony Quigley, Co-Founder of the Digital Marketing Institute, and Sinéad Walsh of Content Plan. Attendees will hear from leading voices in AI and digital marketing, discovering how to harness new technologies to deliver smarter, more efficient, and measurable campaigns.

Key Highlights:

Expert speakers from Google, OpenAI, Content Plan, Women in AI, AI Certified, The Corporate Governance Institute, and more will share their knowledge of how clever use of AI can significantly improve digital marketing and social media strategies and campaigns, continue to change how we do business, and massively increase sales.

Topics include:
- Winning with AI in Business with Christina Barbosa-Gress, Google
- AI-Powered Operations for Irish SMEs with Denis Jastrzebski, Content Plan
- Education for Unlocking AI's Potential with Ian Dodson, AiCertified
- Practical and Responsible AI with Boris Gersic, Corporate Governance Institute
- The Compliance Edge in the AI Era with Colin Cosgrove, Movizmo Coaching Solutions
- Unlocking AI's True Potential in Business with Naomh McElhatton, Irish Ambassador for Women in AI

Adrian Hopkins, Founder, 3XE Digital commented: "Reviving the 3XE Digital conference series felt timely, and AI presented the perfect opportunity. Artificial Intelligence is reshaping the entire marketing landscape - enhancing performance, improving efficiency and offering unprecedented creative possibilities. We're excited to bring this crucial conversation to the forefront once again."

The 3XE AI Conference, organised in partnership with Content Plan, is proudly supported by Friday Agency, GS1 Ireland, and AI Certified. All details, including the full speaker lineup, conference agenda and online bookings, are available at https://3xe.ie. Early bookings remain open at 3xe.ie - including group discounts for teams.

See more stories here.

More about Irish Tech News

Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news

If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone's radar; failures of policy and regulation; positive signals; getting educated and fostering the best AI has to offer.

Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab which is dedicated to ensuring AI serves the public good and not just private gain.

Related Resources
HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication

A transcript of this episode is here.
In this episode, AI Talk hosts Jakob Steinschaden (Trending Topics, newsrooms) and Clemens Wasner (enliteAI, AI Austria) discuss the following topics:
In this episode of the CX Innovators podcast, Mark Frumkin, director of customer success at Modulate, shares expert insight on how online retailers can deploy AI tools to improve fraud detection, reduce harmful customer interactions and elevate both the agent and the customer experience.

The podcast is produced by Networld Media Group and sponsored by Modulate, which provides real-time conversational and voice intelligence technology for customer service and contact centers. Its solutions include ToxMod (real-time detection of harassment and abuse) and VoiceVault (fraud and identity verification through voice), which work in a safeguard agent role against abusive callers, protecting customers from scams and fraud, and ensuring compliance — all while reducing call times and friction in the customer experience.

Frumkin leads a team in partnering with customers in the gaming and retail delivery space, including Activision, Riot Games and Rockstar Games. He oversees Modulate's customer success strategy, ensuring smooth onboarding and ongoing successful outcomes for customers, helping them to detect and prevent abuse and fraud in their voice channels and phone lines. Before joining Modulate, Mark was a consultant at Deloitte, where he led projects to deliver cutting-edge technology solutions across various industries.
HOT OFF THE PRESSES: In this special episode of In AI We Trust?, EqualAI President and CEO Miriam Vogel is joined by her two co-authors of Governing the Machine: How to navigate the risks of AI and unlock its true potential, Dr. Paul Dongha, Head of Responsible AI and AI Strategy at NatWest Group, and Ray Eitel-Porter, Accenture Luminary and Senior Research Associate at the Intellectual Forum, Jesus College, Cambridge, to launch their new book released TODAY (October 28, 2025). Miriam, Paul, and Ray share their motivation for writing the book, some of the big takeaways on AI governance, why it is for companies and consumers alike, and what they hope readers will learn from their book. We hope that you enjoy this episode, and please be sure to purchase a copy of Governing the Machine at the link above! And share your feedback at contact@equalai.org!
Co-hosts Mark Thompson and Steve Little explore how Google's Nano Banana photo restoration tool will revolutionize image restoration by integrating with Adobe Photoshop. This move will greatly reduce unintended changes to historical photos when editing them with AI.

Next, they unpack OpenAI's move to make ChatGPT Projects available to free-tier users, making research organization more accessible for genealogists.

This week's Tip of the Week provides essential guidance on the responsible use of AI when editing historical photos with tools like Nano Banana, ensuring transparency and trust in historical photographs.

In RapidFire, they cover OpenAI's new Sora 2 AI-video social media platform, Claude's new ability to create and edit Microsoft Office files, memory features in Claude Projects, advancements in local language models, and how OpenAI's massive infrastructure deals are changing the AI landscape.

Timestamps:

In the News:
02:43 Adobe improves historical photo restoration by adding Nano Banana to Photoshop
09:34 ChatGPT Projects are Now Free

Tip of the Week:
13:36 Citations for AI-Restored Images Build Trust in AI-Modified Photos

RapidFire:
21:24 Sora 2 Goes Social
27:23 Claude Adds Microsoft Office Creation and Editing
34:26 Memory Features Come to Claude Projects
38:32 Apple and Amazon both create Local Language Model tools
44:47 OpenAI's Big Data Centre Deal with Oracle

Resource Links
OpenAI announces free access to ChatGPT Projects
https://help.openai.com/en/articles/6825453-chatgpt-release-notes
Engadget: OpenAI Rolls Out ChatGPT Projects to Free Users
https://www.engadget.com/ai/openai-rolls-out-chatgpt-projects-to-free-users-215027802.html
Forbes: OpenAI Makes ChatGPT Projects Free
https://www.forbes.com/sites/quickerbettertech/2025/09/14/small-business-technology-roundup-microsoft-copilot-does-not-improve-productivity-and-openai-makes-chatgpt-project-free/
Responsible AI Photo Restoration
https://makingfamilyhistory.com/responsible-ai-photo-restoration/
Claude now has memory, but only for certain users
https://mashable.com/article/anthropic-claude-ai-now-has-memory-for-some-users
New Apple Intelligence features are available today
https://www.apple.com/newsroom/2025/09/new-apple-intelligence-features-are-available-today/
Introducing Amazon Lens Live
https://www.aboutamazon.com/news/retail/search-image-amazon-lens-live-shopping-rufus
Amazon Lens Live Can Scan and Pull Up Matches
https://www.pcmag.com/news/spot-an-item-you-wish-to-buy-amazon-lens-live-can-scan-and-pull-up-matches
A Joint Statement from OpenAI and Microsoft About Their Changing Partnership
https://openai.com/index/joint-statement-from-openai-and-microsoft/
The Verge: OpenAI and Oracle Pen $300 Billion Compute Deal
https://www.theverge.com/ai-artificial-intelligence/776170/oracle-openai-300-billion-contract-project-stargate
Reuters: OpenAI and Oracle Sign $300 Billion Computing Deal
https://www.reuters.com/technology/openai-oracle-sign-300-billion-computing-deal-wsj-reports-2025-09-10/

Tags: Artificial Intelligence, Genealogy, Family History, Photo Restoration, AI Tools, OpenAI, Google, Adobe Photoshop, ChatGPT Projects, Nano Banana, Image Editing, AI Citations, Sora 2, Video Generation, Claude, Microsoft Office, Apple Intelligence, Amazon Lens, Oracle, Cloud Computing, Local Language Models, AI Infrastructure, Responsible AI, Historical Photos
A candid conversation with Navin Budhiraja, CTO and Head of Products at Vianai Systems, Inc. on The Ravit Show in Palo Alto. From Bhilai to Palo Alto. Navin topped the IIT entrance exam, studied at Cornell, and led at IBM, AWS, and SAP. We sat down to talk about building AI that enterprises can actually use.

What we covered:
- Vianai's mission and the hila platform: why it exists and the problem it solves
- How hila turns enterprise data into something teams can interact with in plain language
- Responsible AI in practice: tackling hallucinations and earning trust
- Why a platform like hila is needed even with powerful foundation models
- Conversational Finance: what makes it useful for finance teams
- Real integrations: ERP, CRM, HR systems, and how this works end to end
- Security for the real world: air-gapped deployments, privacy, and certifications
- The road ahead: how AI, IoT, and cloud are converging in the next 2 to 3 years
- Advice for the next generation of builders from Bhilai, the IITs, and beyond

Why this matters:
Enterprises want outcomes, not hype. Navin's lens on trust, flexibility, and scale shows how AI moves from pilot to production.

Thank you, Navin, for the clear thinking and straight answers. Full interview on The Ravit Show YouTube channel is live.

#data #ai #vianai #theravitshow
David and Kate delve into the ongoing AI boom, questioning whether it's mere hype or has real substance. They explore the ethical and responsible use of AI, emphasizing the importance of making technology accessible and beneficial to low-resource communities. They argue that small language models could provide specific, efficient solutions. The conversation also touches on the societal impacts of AI, the need for regulatory frameworks, and the potential for AI to democratize technology, moving away from its current gatekept state.
In this episode, host Bidemi Ologunde spoke with Shannon Noonan, CEO/Founder of HiNoon Consulting, and US Global Ambassador - Global Council for Responsible AI. The conversation addressed how to turn “checkbox” programs into real business value, right-sized controls, third-party risk, AI guardrails, and data habits that help teams move faster—while strengthening security, compliance, and privacy.

Support the show
This week on Taking The Pulse, Heather and Lauren record at the NCLifeSci 2025 Annual Meeting with Dr. Justin Collier, Chief Technology Officer for Healthcare at Lenovo North America. A practicing physician turned tech leader, Dr. Collier shares how AI is transforming the health care industry, from medical imaging and ambient documentation to administrative workflows and clinical efficiency. We explore the importance of governance, education, and ethical deployment of AI, and how health systems can start small to build trust and drive measurable results. Tune in for an insightful discussion on the future of healthcare!
The Artificial Intelligence Collaboration Centre (AICC) has launched Northern Ireland's first Responsible AI Hub - a groundbreaking online resource designed to help businesses, policymakers and individuals understand, adopt and apply Artificial Intelligence (AI) responsibly. Developed by the AICC - a collaborative initiative led by Ulster University in partnership with Queen's University Belfast - and spearheaded by Tadhg Hickey, Head of AI and Digital Ethics Policy, the Hub is built on one simple principle: responsible AI is everyone's responsibility. Whether you're completely new to AI or already developing and deploying AI solutions, the Hub provides practical, accessible tools and guidance to help users 'be good with AI'. Supported by Invest Northern Ireland and the Department for the Economy, the Responsible AI Hub brings together clear guidance, ethical frameworks, and practical governance tools, all designed to make responsible AI accessible to everyone. By helping organisations integrate good governance from the outset, the Hub enables faster, safer innovation and reduces the risk of costly retrofits or regulatory breaches later on. From business leaders and policymakers to developers, researchers and the general public, the Hub offers step-by-step support to help people understand what responsible AI means and how to put it into practice. Among the resources are a Data Fact Sheet Developer, Harm Assessments, an Idea Testing Tool, an AI Policy Builder, and a suite of Project Governance Tools, all created by AICC's in-house team of applied researchers and data scientists. These tools are already being embedded across SME collaborations to promote responsible and transparent AI development in Northern Ireland. Tadhg Hickey, Head of AI and Digital Ethics Policy at AICC, said: "We built the Responsible AI Hub because AI shouldn't feel out of reach. Whether you're curious about what responsible AI means or designing complex AI solutions, this Hub gives you the confidence, language and tools to make good choices. Responsible AI isn't just for data scientists - it's for everyone. The more people who understand and apply these principles, the more we can build trust and unlock AI's potential for good." As AI continues to transform industries and daily life, the Responsible AI Hub aims to make ethics and accountability part of Northern Ireland's innovation DNA, ensuring technology serves people - not the other way around. David Crozier CBE, Director of the AICC, added: "The Responsible AI Hub is about building a culture where innovation and integrity go hand in hand. It empowers businesses, individuals, and communities to be confident and capable with AI, strengthening Northern Ireland's position as a global leader in trusted, human-centred innovation. This Hub will help local businesses adopt AI not only quickly, but responsibly and productively." Anne Beggs, Chief Commercial Officer at Invest Northern Ireland, said: "The development of AICC's Responsible AI Hub directly supports our business strategy, which prioritises accelerating innovation and fostering collaboration as part of our role to support City and Growth Deals project delivery. It will help Northern Ireland's businesses and innovators embrace AI in ways that are not only productive and competitive, but also safe, inclusive and ethical. By equipping organisations with the tools to innovate with integrity, we are laying the foundations for a world-class, responsible digital economy." 
Since its establishment, the AICC has rapidly become the driving force behind responsible AI adoption in Northern Ireland. In just over a year, it has assembled a team of 19 experts across Belfast and Derry~Londonderry, engaged more than 100 SMEs through its flagship Transformer Programme, supported 260 postgraduate scholars and delivered AI training to over 360 professionals. With its remit now extended to 2029, the AICC is set to expand its impact - accelerating innovation, strengt...
What happens when AI stops making mistakes… and starts misleading you?

This discussion dives into one of the most important — and least understood — frontiers in artificial intelligence: AI deception.

We explore how AI systems evolve from simple hallucinations (unintended errors) to deceptive behaviors — where models selectively distort truth to achieve goals or please human feedback loops. We unpack the coding incentives, enterprise risks, and governance challenges that make this issue critical for every executive leading AI transformation.

Key Moments:
00:00 What is AI Deception and Why It Matters
3:43 Emergent Behaviors: From Hallucinations to Alignment to Deception
4:40 Defining AI Deception
6:15 Does AI Have a Moral Compass?
7:20 Why AI Lies: Incentives to “Be Helpful” and Avoid Retraining
15:12 Is Deception Built into LLMs? (And Can It Ever Be Solved?)
18:00 Non-Human Intelligence Patterns: Hallucinations or Something Else?
19:37 Enterprise Impact: What Business Leaders Need to Know
27:00 Measuring Model Reliability: Can We Quantify AI Quality?
34:00 Final Thoughts: The Future of Trustworthy AI

Mentions:
Scientists at OpenAI and Apollo Research showed in a paper that AI models lie and deceive: https://www.youtube.com/shorts/XuxVSPwW8I8
TIME: New Tests Reveal AI's Capacity for Deception
OpenAI: Detecting and reducing scheming in AI models
StartupHub: OpenAI and Apollo Research Reveal AI Models Are Learning to Deceive: New Detection Methods Show Promise
Marcus Weller
Hugging Face

Watch next: https://www.youtube.com/watch?v=plwN5XvlKMg&t=1s

--

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to.

That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

---

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.

Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI and Governance report; ransomware 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; proactive AI/agentic security policies and incident response plans.

Keren Katz is Senior Group Manager of Threat Research, Product Management and AI at Tenable, a contributor at both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.

Related Resources
Article: The Silent Breach: Why Agentic AI Demands New Oversight
State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/
The LLM Top 10: https://genai.owasp.org/llm-top-10/

A transcript of this episode is here.
Noelle Russell compares AI to a baby tiger: it's super cute when it's small, but it can quickly grow into something huge and dangerous. As the CEO and founder of the AI Leadership Institute and as an early developer on Amazon Alexa, Noelle has a deep understanding of scaling and selling AI. This week Noelle joins Tammy to discuss why she's so passionate about teaching individuals and organizations about AI and how companies can leverage AI in the right way. It's time to learn how to tame the tiger! Please note that the views expressed may not necessarily be those of NTT DATA.

Links:
Noelle Russell
Scaling Responsible AI
AI Leadership Institute
Learn more about Launch by NTT DATA

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
What does it take to build a billion-dollar company with fewer than 100 people, all while placing customer obsession and responsible AI at its core? In this episode of Predictable B2B Success, host Vinay Koshy speaks with Phillip Swan, Chief Product and Go-to-Market Officer of the AI Solution Group, to unlock the secrets behind blending innovative tech, ethical AI, and truly frictionless customer experiences. Phillip shares his journey from co-founding PI Partners to merging with AI Solution Group, revealing untold stories about how he and his team leverage AI to drive unprecedented operational momentum and organizational growth. From identifying “migraine-level” pain points to eliminating data leaks caused by shadow AI, Phillip's insights challenge conventional thinking and tackle the big questions: Can AI really build trust and customer advocacy? How do you systemize culture and alignment across traditional business silos? And what is “pre-awareness”, the surprising stage most companies ignore in the buyer's journey? Packed with real-world examples, bold perspectives, and practical frameworks for change, this episode will get you rethinking your approach to product, leadership, and revenue growth. If you're ready to turn customer-centricity from a buzzword into your breakthrough strategy, don't miss this conversation!

Some areas we explore in this episode include:
- Responsible and Safe AI – Ethics, guardrails, and compliance in AI development.
- Shadow AI Risks – Dangers of ungoverned AI and protecting company data.
- Customer Obsession – Making customer outcomes a core organizational focus.
- Revenue Momentum – Using AI and alignment to drive sustained business growth.
- Breaking Down Silos – Connecting all business functions for better collaboration and KPIs.
- Pre-awareness in the Buyer Journey – Building trust and influence before customers identify their needs.
- Change Management & Culture – CEO-driven culture and effective organizational change strategies.
- AI Agents & Agentic Systems – Defining and building true autonomous AI agents.
- Customer-driven Product Development – Co-creating solutions with customers based on real pain points.
- Scaling Customer Experience – Turning every touchpoint, including support and legal, into a customer experience advantage.
And much, much more...
100m sprinter Bebe Jackson, 19, won a bronze medal on her debut at the IPC World Para Athletics Championships in Delhi, India, last week. Bebe was born with congenital talipes equinovarus, widely known as club foot, and when she's not competing for Britain, she works nights caring for children with complex disabilities. She tells Anita Rani how she does it.

In Sally Wainwright's new BBC drama Riot Women, a group of women in mid-life escape the pressures of caring for parents and kids - and the menopause - by forming a rock band. Rosalie Craig stars as the incredible singer that brings them together. Anita Rani talks to Sally and actor Rosalie about the power of female friendship.

Nuala McGovern talks to the French philosopher Manon Garcia. Manon watched the court proceedings of the Pelicot case in France, in which Dominique Pelicot and 46 other men were found guilty of the rape of Dominique's wife Gisèle. In her book Living with Men, she examines French and other societies in light of the case and questions what more needs to be done.

When you think about music from 500 years ago, you might picture monks chanting, or the voices of choirboys, but what's been largely forgotten over the course of history is that some of the most striking music during this time was being written and sung by nuns, hidden away in convents across Europe. Nuala speaks to Laurie Stras, Director of Musica Secreta, an all-female renaissance ensemble.

Elon Musk's Artificial Intelligence company xAI recently introduced two sexually explicit chatbots. He's a high-profile presence in a growing field where developers are banking on users interacting and forming intimate relationships with the AI chatbots. Nuala McGovern speaks to journalist Amelia Gentleman, who has just returned from an adult industry conference in Prague, where she saw a sharp rise in new websites offering an increasingly realistic selection of AI girlfriends, and Gina Neff, Professor of Responsible AI at the Queen Mary University of London, who tells us what this means for women.

EastEnders actor Kellie Bright took part in a Woman's Hour special last year which asked whether the SEND system is working for children with special educational needs and disabilities. Tonight Kellie presents a special one-hour BBC Panorama. Drawing on her own experience as the mother of an autistic son, she investigates how parents navigate the complex system to secure the right help at school. Kellie joins Nuala McGovern to talk about what she found.

Presenter: Anita Rani
Producer: Simon Richardson
October 10, 2025: A new era of Responsible Intelligence is emerging. Governments are considering human-quota laws to keep people in the loop. Kroger is rolling out a values-based AI assistant that redefines trust and transparency. And legal experts warn that AI bias in HR could soon become a courtroom reality. In today's Future-Ready Today, Jacob Morgan explores how these stories signal the end of reckless automation and the rise of accountable leadership. He shares how the future of work will be shaped not by faster machines, but by wiser humans—and offers one simple “1%-a-Day” challenge to help you lead responsibly in the age of AI.
In episode 187 of the Disruption Now Podcast, we sit down with Benjamin Ko, the CEO of Kaleidoscope Innovation, a firm leading the way in human-centered design and engineering — especially in healthcare. From developing wearable technologies for spinal cord injury patients to crafting surgical tools built around human ergonomics, Ben and his team are proving that empathy is a competitive advantage in the age of AI.

We dive into the central question: If AI can optimize everything, where do we still matter? Ben argues that empathy isn't just a soft skill — it's a design superpower. He discusses how Kaleidoscope's cross-functional teams of designers, engineers, and researchers bridge the gap between physical and digital worlds, why 95% of AI projects fail due to lack of human context, and how clarity of thought and ethical design can shape a better, more responsible tech future.

If you're a founder, product designer, healthcare innovator, engineer, or policymaker interested in building smarter systems with deeper purpose — this episode is for you.
Cisco's Vijoy Pandey - SVP & GM of Outshift by Cisco - explains how AI agents and quantum networks could completely redefine how software, infrastructure, and security function in the next decade.

You'll learn:
→ What “Agentic AI” and the “Internet of Agents” actually are
→ How Cisco open-sourced the Internet of Agents framework and why decentralization matters
→ The security threat of “store-now, decrypt-later” attacks—and how post-quantum cryptography will defend against them
→ How Outshift's “freedom to fail” model fuels real innovation inside a Fortune-500 company
→ Why the next generation of software will blur the line between humans, AI agents, and machines
→ The vision behind Cisco's Quantum Internet—and two real-world use cases you can see today: Quantum Sync and Quantum Alert

About Today's Guest:
Meet Vijoy Pandey, the mind behind Cisco's Outshift—a team pushing the boundaries of what's next in AI, quantum computing, and the future internet. With 80+ patents to his name and a career spent redefining how systems connect and think, he's one of the few leaders truly building the next era of computing before the rest of us even see it coming.

Key Moments:
00:00 Meet Vijoy Pandey & Outshift's mission
04:30 The two hardest problems in computer science: Superintelligence & Quantum Computing
06:30 Why “freedom to fail” is Cisco's innovation superpower
10:20 Inside the Outshift model: incubating like a startup inside Cisco
21:00 What is Agentic AI? The rise of the Internet of Agents
27:00 AGNTCY.org and open-sourcing the Internet of Agents
32:00 What would an Internet of Agents actually look like?
38:19 Responsible AI & governance: putting guardrails in early
49:40 What is quantum computing? What is quantum networking?
55:27 The vision for a global Quantum Internet

Watch Next: https://youtu.be/-Jb2tWsAVwI?si=l79rdEGxB-i-Wrrn

--

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to.

That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

---

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
What does it take to lead digital transformation when fear, culture, and AI disruption collide?
Carnegie Mellon business ethics professor Derek Leben joins Kevin Werbach to trace how AI ethics evolved from an early focus on embodied systems—industrial robots, drones, self-driving cars—to today's post-ChatGPT landscape that demands concrete, defensible recommendations for companies. Leben explains why fairness is now central: firms must decide which features are relevant to a task (e.g., lending or hiring) and reject those that are irrelevant—even if they're predictive. Drawing on philosophers such as John Rawls and Michael Sandel, he argues for objective judgments about a system's purpose and qualifications. Getting practical about testing for AI fairness, he distinguishes blunt outcome checks from better metrics, and highlights counterfactual tools that reveal whether a feature actually drives decisions. With regulations uncertain, he urges companies to treat ethics as navigation, not mere compliance: Make and explain principled choices (including how you mitigate models), accept that everything you do is controversial, and communicate trade-offs honestly to customers, investors, and regulators. In the end, Leben argues, we all must become ethicists to address the issues AI raises...whether we want to or not.

Derek Leben is Associate Teaching Professor of Ethics at the Tepper School of Business, Carnegie Mellon University, where he teaches courses such as “Ethics of Emerging Technologies,” “Fairness in Business,” and “Ethics & AI.” Leben is the author of Ethics for Robots (Routledge, 2018) and AI Fairness (MIT Press, 2025). He founded the consulting group Ethical Algorithms, through which he advises governments and corporations on how to build fair, socially responsible frameworks for AI and autonomous systems.

Transcript
AI Fairness: Designing Equal Opportunity Algorithms (MIT Press, 2025)
Ethics for Robots: How to Design a Moral Algorithm (Routledge, 2019)
The Ethical Challenges of AI Agents (Blog post, 2025)
In this episode of Careers and the Business of Law, David Cowen sits down with Nathan Reichardt, PwC's Lead Managed Services Director and AI Champion, for a conversation that bridges technology and humanity. They unpack why “observability” isn't just a technical concept; it's the foundation of trust in an age of autonomous agents. From building glass-box systems that make AI accountable to recognizing the invisible pressures on professionals, this discussion explores what it really takes to lead responsibly in the era of AI.

Key Topics Covered:
Agents aren't magic, you must observe them. Why oversight is essential as AI agents act and learn autonomously.
From black box to glass box. Transparency, explainability, and compliance as non-negotiable design principles.
Responsible AI in practice. What observability really means for governance, risk, and trust.
The rise of new roles. Why “AI Observer” and “Observability Lead” may soon become critical titles inside legal and business ops.
The human dimension. How leaders can apply observability to people, spotting stress, isolation, and burnout before it's too late.
From pilot to practice. PwC's approach to scaling agentic AI safely through iteration, measurement, and feedback.
In this episode of the Data Science Salon Podcast, we sit down with Swati Tyagi, an AI/ML expert and responsible AI advocate. With deep expertise in large language models (LLMs), generative AI, and AI automation, Swati has led AI-driven innovation in FinTech, healthcare, and finance, helping organizations build scalable, ethical AI systems. Currently at JPMorgan Chase, Swati's work focuses on automating financial applications and leveraging LLMs for real-time inferencing. Her passion for responsible AI is central to her approach, ensuring that AI systems are not only powerful but also ethical and scalable.

Key Highlights:
- AI and FinTech: Swati discusses her work in credit risk and predictive modeling to optimize financial decision-making and risk assessment.
- Responsible AI: Insight into how to design AI systems that are both scalable and ethical, addressing challenges around bias and NLP.
- AI Automation in Finance: Learn how LLMs are transforming FinTech, and how MLOps and cloud solutions play a role in scaling these applications.
Imagine an AI tutor that changes color to teach colors, points to an arm to name body parts, and keeps lessons fun without losing focus. That's Buddy AI, a 3D animated study buddy for kids that turns screen time into meaningful learning.

In this episode of AI for Kids, host Amber Ivey talks with Ivan Crewkov, the founder of Buddy AI, to explore how a personal challenge of helping his daughter learn English grew into a global learning platform used by over 60 million children.

You'll learn how Buddy AI was built from the ground up with AI safety, child privacy, and strong educational design. Ivan shares how Buddy learned from 25,000 hours of kids' voices and accents, creating an AI that truly understands how children speak. Unlike general chatbots, Buddy runs on a purpose-built model stack focused on educational outcomes, guided by a structured curriculum and gamified experiences that make learning playful yet productive.

We also talk about Buddy's COPPA-certified privacy standards, including minimal data collection, deletion on request, and a privacy-by-design architecture that keeps conversations secure. Plus, a sneak peek at what's next: a Tamagotchi-style home, customizable virtual pets, and new lessons designed with top mobile game creators.

Parents, teachers, and caregivers will walk away with practical tips on choosing safe AI apps for kids, understanding why embodiment matters in AI education, and how to treat AI as a tool, not a toy.

Listen now to hear how Buddy AI blends AI, education, and empathy to boost kids' confidence, language skills, and curiosity.

Resources mentioned:
Buddy.ai
Buddy's YouTube channel
Buddy.ai kidSAFE COPPA
Talking Tom
Tamagotchi

Support the show

Help us become the #1 podcast for AI for Kids.
Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Gift or get our books on Amazon or Free AI Worksheets

Listen, rate, and subscribe! Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform.
Apple Podcasts
Amazon Music
Spotify
YouTube
Other

Like our content? Subscribe or feel free to donate to our Patreon here: patreon.com/AiDigiTales...
What happens when a robot colors the sun green? That playful mistake helps us unlock a bigger idea: AI needs you. Not as a spectator, but as the guide who brings context, empathy, and fairness.

In this episode, Y stands for “You + AI.” We explore how people and AI work together in the real world, where speed and pattern spotting meet human judgment and care. We explain the idea of “human in the loop,” a simple way to make sure people stay in charge of goals, guardrails, and final decisions.

You'll hear how AI helps doctors flag issues in X-rays while physicians decide treatment, supports teachers by grading routine work while educators respond to emotions and needs, and boosts artists by creating quick sketches while humans bring meaning and message. Along the way, we talk about bias, brittle rules, and why unchecked automation can lead to unfair results. The solution isn't magic code; it's a culture of curiosity, feedback, and review.

We also share a fun family activity called “Who's in the Loop?” that helps kids practice spotting bad rules and adding nuance. Try saying “All fruit is round” and see how bananas save the day. Then talk together about where people and AI work as partners, when humans should have the final say, and which choices are safe to automate.

Join us as we celebrate kids' questions, creativity, and courage, the real drivers of responsible AI. If we want smarter tools that serve people, your voice matters most.

Subscribe, share this episode, and leave a review to help more families explore AI with curiosity and care.

Resources:
Sign up for the AiDigiCards waitlist
Follow our Kickstarter
Big Emotions: Kids Listen Mashups About Feelings

Support the show

Help us become the #1 podcast for AI for Kids.
Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Gift or get our books on Amazon or Free AI Worksheets

Listen, rate, and subscribe! Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform.
Apple Podcasts
Amazon Music
Spotify
YouTube
Other

Like our content? Subscribe or feel free to donate to our Patreon here: patreon.com/AiDigiTales...
Artificial intelligence is changing everything. How we work, how we make decisions, and how we connect with one another. But as powerful as AI is, it also carries the risk of reinforcing the very inequities many of us have spent years trying to dismantle.

Inclusion in AI isn't just a technical issue — it's a human one. As we continue to integrate AI into everyday life — from hiring and lending to healthcare and education — we must ensure these systems reflect the full diversity of the people they serve.

The Problem with Biased Data

AI systems are only as good as the data we feed them. When that data is incomplete or biased, the results can be harmful. A facial recognition system trained primarily on lighter skin tones struggles to identify darker ones. A healthcare algorithm trained on white patients misdiagnoses patients of color. These aren't “what if” scenarios — they're real-world examples of what happens when inclusion isn't built in from the start.

Bias in AI happens when development teams lack diversity, when datasets don't represent real populations, and when ethical concerns are treated as add-ons instead of fundamentals.

Valuing Diversity in AI Development

Inclusion starts with who's at the table. When teams are diverse across race, gender, culture, and lived experience, they bring perspectives that identify blind spots others might miss. This isn't just about fairness — it's about better outcomes. Diverse teams design more adaptive, ethical, and market-ready tools.

Organizations must embed values, equity, and accountability into their AI strategies — not as PR afterthoughts but as guiding principles. A truly inclusive culture listens to those most impacted, prioritizes accessibility, and makes ethical conversations part of how innovation happens.

Empowering Communities to Lead Solutions

Communities know their own needs best. When we empower them with the tools and data to solve problems, solutions become more sustainable and relevant. In AI, this means involving communities in design, not just testing. When farmers use AI to predict droughts based on local data — or healthcare systems integrate community health data into diagnostics — the outcomes are more accurate, fair, and impactful.

Consumers also play a role by being conscious of how our data is used and advocating for transparency and fairness. Inclusion in AI is a collective effort — not just a corporate one.

Inclusive Culture = Responsible AI

Responsible AI starts with culture. Psychological safety within organizations allows people to raise concerns about bias or harm without fear. That's how innovation and accountability grow together. True AI governance requires more than just engineers — it needs ethicists, sociologists, and community voices. Responsible AI isn't just about algorithms; it's about aligning technology with human values like fairness, trust, and equity.

Inclusion Drives Business Success

Let's be clear — inclusion isn't just a moral imperative. It's a strategic advantage. Inclusive organizations make better decisions, innovate faster, and attract top talent. In AI and data science, diversity of thought leads to better products and fewer ethical pitfalls. When technical and non-technical teams collaborate effectively, they build tools that serve broader audiences and strengthen brand trust — the foundation for sustainable growth.

The Power of Community Connection

At the heart of all innovation is connection. AI may be powered by data, but its impact is deeply human.
Strong communities — within organizations and across sectors — are what make inclusive, ethical technology possible. When people feel connected, supported, and valued, they bring the creativity and courage needed to build tools that reflect the world we want, not just the one we have. Community isn't just about belonging; it's about resilience — aligning purpose with progress.

Final Thought

Inclusion in AI is not optional — it's essential. It's how we ensure technology serves humanity, not the other way around. By valuing diversity, empowering communities, and building inclusive cultures, we can create AI systems that are ethical, responsible, and reflective of the best of who we are. Innovation and inclusion must move forward together.

What's your take?

Have you seen examples — good or bad — of how AI is impacting inclusion in your industry? Share your thoughts in the comments or reply to this week's DEI After 5 episode featuring Catherine Goetz.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit deiafter5.substack.com/subscribe
Most artists get left behind in tech—but not Kaila Love.

Once a homeless teen sleeping in her car, Kaila became a UC Berkeley grad, hip-hop artist, and founder of KailaLove.ai—a pioneering AI education company blending music, automation, and empowerment. Known as The AI Homegirl, she teaches creatives how to protect their IP, grow fanbases with AI, and own their digital destiny.

Timestamps:
00:00 – From Homeless to Berkeley
05:00 – Sync Deals & Music Wins
10:00 – Building with AI
17:00 – Bootleg Brain & IP
25:00 – Responsible Tech
32:00 – Future Vision
35:45 – Final Message
The best time to regulate AI was yesterday, and the next best time is now. There is a clear and urgent need for responsible AI development that implements reasonable guidelines to mitigate harms and foster innovation, yet the conversation in DC and capitals around the world remains muddled. NYU's Dr. Julia Stoyanovich joins David Rothkopf to explore the role of collective action in AI development and why responsible AI is the responsibility of each of us. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts. Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly evolving AI systems. She describes her role at HCLTech, where client-facing projects across multiple industries and jurisdictions create unique governance challenges that require balancing company standards with client-specific risk frameworks. Domin notes that while most executives acknowledge the importance of responsible AI, few feel prepared to operationalize it. She emphasizes the growing demand for proof and accountability from regulators and courts, and finds the work exciting for its urgency and global impact. She also talks about the new challenges of agentic AI, and the potential for "oversight agents" that use AI to govern AI.

Heather Domin is Global Head of the Office of Responsible AI and Governance at HCLTech and co-chair of the IAPP AI Governance Professional Certification. A former leader of IBM's AI ethics initiatives, she has helped shape global standards and practices in responsible AI. Named one of the Top 100 Brilliant Women in AI Ethics™ 2025, her work has been featured in Stanford executive education and outlets including CNBC, AI Today, Management Today, Computer Weekly, AI Journal, and the California Management Review.

Transcript
AI Governance in the Agentic Era
Implementing Responsible AI in the Generative Age - Study Between HCL Tech and MIT
Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.

Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; AI agents as a support team and implications for human work.

Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.

Related Resources
Medium: https://medium.com/@maximilian.vogel

A transcript of this episode is here.
In this episode, host Sandy Vance, along with Anand Iyer, discusses Welldoc's core philosophy of responsible innovation, particularly how they are pushing the boundaries of AI while maintaining a strong commitment to safety, compliance, and member trust. Anand reveals how Welldoc is shaping the future of AI in healthcare by collaborating with the FDA, addressing bias, and leading a national effort for interoperability. Discover why responsibility, trust, and consumer empowerment are the keys to turning digital health innovation into safer, more proactive care. Healthcare innovation must be responsible in order to be effective.

In this episode, they talk about:
- Healthcare innovation must be responsible.
- The four levels of AI and how they are best used
- Working with the FDA to advance safe, high-risk features.
- Driving interoperability, reducing friction, and encouraging consumer empowerment.
- Addressing bias by using diverse data, garnering trust, and establishing the right guardrails
- Governance through consistent standards
- Partnerships are the key to growing AI responsibly

Key Takeaways:
- Trust, safety, and compliance are non-negotiable foundations of innovation.
- Regulators are partners in shaping the future of AI, not barriers.
- Responsible AI creates safer, more equitable, and more proactive care

A Little About Anand Iyer:
Anand is a respected global digital health innovator and leader, most known for his insights on and experience with technology, strategy, and regulatory policy. Anand has been instrumental in Welldoc's success and the development of BlueStar®, the first FDA-cleared digital therapeutic for adults with type 2 diabetes. Since joining Welldoc in 2008, he has held core leadership positions that included President and Chief Operating Officer and Chief Strategy Officer. In 2013, Anand was named "Maryland Healthcare Innovator of the Year" in the field of mobile health. Anand was also recognized as a top AI thought leader globally when he was named to Constellation Research's prestigious AI 150 list in 2024.
AI is reshaping industries at a rapid pace, but as its influence grows, so do the ethical concerns that come with it. This episode examines how AI is being applied across sectors such as healthcare, finance, and retail, while also exploring the crucial issue of ensuring that these technologies align with human values. In this conversation, Lois Houston and Nikita Abraham are joined by Hemant Gahankari, Senior Principal OCI Instructor, who emphasizes the importance of fairness, inclusivity, transparency, and accountability in AI systems. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about how Oracle integrates AI capabilities into its Fusion Applications to enhance business workflows, and we focused on Predictive, Generative, and Agentic AI. Lois: Today, we'll discuss the various applications of AI. This is the final episode in our AI series, and before we close, we'll also touch upon ethical and responsible AI. 01:01 Nikita: Taking us through all of this is Senior Principal OCI Instructor Hemant Gahankari. Hi Hemant! AI is pretty much everywhere today. So, can you explain how it is being used in industries like retail, hospitality, health care, and so on? Hemant: AI isn't just for sci-fi movies anymore. It's helping doctors spot diseases earlier and even discover new drugs faster. Imagine an AI that can look at an X-ray and say, hey, there is something sketchy here before a human even notices. Wild, right? Banks and fintech companies are all over AI. Fraud detection. AI has got it covered. Those robo advisors managing your investments? That's AI too. Ever noticed how e-commerce companies always seem to know what you want? That's AI studying your habits and nudging you towards that next purchase or binge watch. Factories are getting smarter. AI predicts when machines will fail so they can fix them before everything grinds to a halt. Less downtime, more efficiency. Everyone wins. Farming has gone high tech. Drones and AI analyze crops, optimize water use, and even help with harvesting. Self-driving cars get all the hype, but even your everyday GPS uses AI to dodge traffic jams. And if AI can save me from sitting in bumper-to-bumper traffic, I'm all for it. 02:40 Nikita: Agreed! Thanks for that overview, but let's get into specific scenarios within each industry. Hemant: Let us take a scenario in the retail industry-- a retail clothing line with dozens of brick-and-mortar stores. Maintaining proper inventory levels in stores and regional warehouses is critical for retailers. In this low-margin business, being out of a popular product is especially challenging during sales and promotions. 
Managers want to delight shoppers and increase sales but without overbuying. That's where AI steps in. The retailer has multiple information sources, ranging from point-of-sale terminals to warehouse inventory systems. This data can be used to train a forecasting model that can make predictions, such as demand increase due to a holiday or planned marketing promotion, and determine the time required to acquire and distribute the extra inventory. Most ERP-based forecasting systems can produce sophisticated reports. A generative AI report writer goes further, creating custom plain-language summaries of these reports tailored for each store, instructing managers about how to maximize sales of well-stocked items while mitigating possible shortages. 04:11 Lois: Ok. How is AI being used in the hospitality sector, Hemant? Hemant: Let us take an example of a hotel chain that depends on positive ratings on social media and review websites. One common challenge they face is keeping track of online reviews, leading to missed opportunities to engage unhappy customers complaining on social media. Hotel managers don't know what's being said fast enough to address problems in real time. Here, AI can be used to create a large data set from the tens of thousands of previously published online reviews. A textual language AI system can perform a sentiment analysis across the data to determine a baseline that can be periodically re-evaluated to spot trends. Data scientists could also build a model that correlates these textual messages and their sentiments against specific hotel locations and other factors, such as weather. Generative AI can extract valuable suggestions and insights from both positive and negative comments. 05:27 Nikita: That's great. And what about Financial Services? I know banks use AI quite often to detect fraud. Hemant: Unfortunately, fraud can creep into any part of a bank's retail operations. Fraud can happen with online transactions, from a phone or browser, and offsite ATMs too. Without trust, banks won't have customers or shareholders. Excessive fraud and delays in detecting it can violate financial industry regulations. Fraud detection combines AI technologies, such as computer vision to interpret scanned documents, document verification to authenticate IDs like driver's licenses, and machine learning to analyze patterns. These tools work together to assess the risk of fraud in each transaction within seconds. When the system detects a high risk, it triggers automated responses, such as placing holds on withdrawals or requesting additional identification from customers, to prevent fraudulent activity and protect both the business and its clients. 06:42 Nikita: Wow, interesting. And how is AI being used in the health industry, especially when it comes to improving patient care? Hemant: Medical appointments can be frustrating for everyone involved—patients, receptionists, nurses, and physicians. There are many time-consuming steps, including scheduling, checking in, interactions with the doctors, checking out, and follow-ups. AI can fix this problem by using electronic health records to analyze lab results, paper forms, scans, and structured data, summarizing insights for doctors with the latest research and patient history. This helps practices reduce costs, boost earnings, and deliver faster, more personalized care. 07:32 Lois: Let's take a look at one more industry. How is manufacturing using AI?
Hemant: A factory that makes metal parts and other products uses both visual inspections and electronic means to monitor product quality. A part that fails to meet the requirements may be reworked or repurposed, or it may need to be scrapped. The factory seeks to maximize profits and throughput by shipping as much good material as possible, while minimizing waste by detecting and handling defects early. The way AI can help here is with the quality assurance process, which creates X-ray images. This data can be interpreted by computer vision, which can learn to identify cracks and other weak spots, after being trained on a large data set. In addition, problematic or ambiguous data can be highlighted for human inspectors. 08:36 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:20 Nikita: Welcome back! AI can be used effectively to automate a variety of tasks to improve productivity, efficiency, and cost savings. But I'm sure AI has its constraints too, right? Can you talk about what happens if AI isn't able to echo human ethics? Hemant: AI can fail due to lack of ethics. AI can spot patterns, not make moral calls. It doesn't feel guilt, understand context, or take responsibility. That is still up to us. Decisions are only as good as the data behind them. For example, health care AI has underdiagnosed women because research data was mostly male. Artificial narrow intelligence tends to automate discrimination at scale. Recruiting AI downgraded resumes just because they contained the word "women's" (for example, women's chess club). Who is responsible when AI fails? For example, if a self-driving car hits someone, we cannot blame the car. Then who owns the failure? The programmer? The CEO? Can we really trust corporations or governments to have programmed AI not to be evil? So, it's clear that AI needs oversight to function smoothly. 10:48 Lois: So, Hemant, how can we design AI in ways that respect and reflect human values? Hemant: Think of ethics like a tree. It needs all parts working together. Roots represent intent. That is our values and principles. The trunk stands for safeguards, our systems, and structures. And the branches are the outcomes we aim for. If the roots are shallow, the tree falls. If the trunk is weak, damage seeps through. The health of roots and trunk shapes the strength of our ethical outcomes. Fairness means nothing without ethical intent behind it. For example, a bank promotes its loan algorithm as fair. But it uses zip codes in decision-making, effectively penalizing people based on race. That's not fairness. That's harm disguised as data. Inclusivity depends on the intent of sustainability. Inclusive design isn't just a checkbox. It needs a long-term commitment. For example, controllers for gamers with disabilities are only possible because of sustained R&D and intentional design choices. Without investment in inclusion, accessibility is left behind. Transparency depends on the safeguard of robustness. Transparency is only useful if the system is secure and resilient.
For example, a medical AI may be explainable, but if it is vulnerable to hacking, transparency won't matter. Accountability depends on the safeguards of privacy and traceability. You can't hold people accountable if there is no trail to follow. For example, after a fatal self-driving car crash, deleted system logs meant no one could be held responsible. Without auditability, accountability collapses. So remember, outcomes are what we see, but they rely on intent to guide priorities and safeguards to support execution. That's why humans must have a final say. AI has no grasp of ethics, but we do. 13:16 Nikita: So, what you're saying is ethical intent and robust AI safeguards need to go hand in hand if we are to build AI we can truly trust. Hemant: When it comes to AI, preventing harm is a must. Take self-driving cars, for example. Keeping pedestrians safe is absolutely critical, which means the technology has to be rock solid and reliable. At the same time, fairness and inclusivity can't be overlooked. If an AI system used for hiring learns from biased past data, say, mostly male candidates being hired, it can end up repeating those biases, shutting out qualified candidates unfairly. Transparency and accountability go hand in hand. Imagine a loan rejection: if the AI's decision isn't clear or explainable, it becomes impossible for someone to challenge or understand why they were turned down. And of course, robustness supports fairness too. Loan approval systems need strong security to prevent attacks that could manipulate decisions and undermine trust. We must build AI that reflects human values and has safeguards. This makes sure that AI is fair, inclusive, transparent, and accountable. 14:44 Lois: Before we wrap, can you talk about why AI can fail? Let's continue with your analogy of the tree. Can you explain how AI failures occur and how we can address them? Hemant: Root elements like "do not harm" and sustainability are fundamental to ethical AI development. When these roots fail, the consequences can be serious. For example, a clear failure of "do not harm" is AI-powered surveillance tools misused by authoritarian regimes. This happens because there were no ethical constraints guiding how the technology was deployed. The solution is clear-- implement strong ethical use policies and conduct human rights impact assessments to prevent such misuse. On the sustainability front, training AI models can consume massive amounts of energy. This failure occurs because environmental costs are not considered. To fix this, organizations are adopting carbon-aware computing practices to minimize AI's environmental footprint. By addressing these root failures, we can ensure AI is developed and used responsibly with respect for human rights and the planet. An example of a robustness failure can be a chatbot hallucinating nonexistent legal precedents used in court filings. This could be due to training on unverified internet data and no fact-checking layer. This can be fixed by grounding responses in authoritative databases. An example of a privacy failure can be an AI facial recognition database created without user consent. The reason being that no consent was obtained for data collection. This can be fixed by adopting privacy-preserving techniques. An example of a fairness failure can be an image generator depicting CEOs as white men and nurses as women or minorities. The reason being training on imbalanced internet images reflecting societal stereotypes. And the fix is to use a diverse set of images.
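Hemant's failure-and-fix examples follow a clear pattern, and the "use diverse data" fix for fairness can be made concrete in a few lines. The sketch below is not from the episode; it simply counts how each group is represented in a labeled training set and flags groups that fall below a threshold before any model is trained. The group tags, counts, and 15% threshold are hypothetical.

```python
from collections import Counter

def representation_report(labels, min_share=0.15):
    """Flag groups that fall below a minimum share of the training data.

    `labels` is one group tag per training example (for instance, the
    demographic attribute recorded for each image); `min_share` is an
    illustrative threshold, not an industry standard.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "under_represented": share < min_share,
        }
    return report

# Hypothetical tags for a training set of images of professionals.
tags = ["man"] * 820 + ["woman"] * 150 + ["nonbinary"] * 30

for group, stats in representation_report(tags).items():
    note = "  <-- add more diverse examples" if stats["under_represented"] else ""
    print(f'{group}: {stats["count"]} ({stats["share"]:.1%}){note}')
```

A check like this is only a starting point, but it makes the imbalance visible before it is baked into a model, which is the spirit of the fix described above.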
17:18 Lois: I think this would be incomplete if we don't talk about inclusivity, transparency, and accountability failures. How can they be addressed, Hemant? Hemant: An example of an inclusivity failure can be a voice assistant not understanding accents. The reason being that the training data lacked diversity. And the fix is to use inclusive data. An example of a transparency and accountability failure can be teachers being unable to challenge AI-generated performance scores due to opaque calculations. The reason being that no explainability tools were used. The fix being that high-impact AI needs human review pathways and explainability built in. 18:04 Lois: Thank you, Hemant, for a fantastic conversation. We got some great insights into responsible and ethical AI. Nikita: Thank you, Hemant! If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 18:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
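The last fix in the Oracle episode, explainability plus a human review pathway for high-impact scores, can also be sketched briefly. The toy example below assumes a simple weighted-sum performance score (the features, weights, and review threshold are invented for illustration): it reports each feature's contribution so the person affected could see and challenge the reasoning, and it flags low scores for human review instead of treating them as final.

```python
# A toy "explainable score plus human review" pattern. The features,
# weights, and review threshold below are invented for illustration.

WEIGHTS = {"attendance": 0.3, "peer_feedback": 0.5, "training_hours": 0.2}
REVIEW_THRESHOLD = 0.6  # scores below this are routed to a human reviewer

def score_with_explanation(features):
    """Return a score, its per-feature breakdown, and a review flag."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    total = sum(contributions.values())
    return {
        "score": round(total, 3),
        "contributions": {k: round(v, 3) for k, v in contributions.items()},
        "needs_human_review": total < REVIEW_THRESHOLD,
    }

result = score_with_explanation(
    {"attendance": 0.9, "peer_feedback": 0.4, "training_hours": 0.5}
)
print(result)
# The breakdown makes the score challengeable by the person it affects,
# and flagged cases are not final until a human signs off.
```

Real scoring systems are rarely this simple, but the two ingredients shown here, a per-feature explanation and an explicit human review path, are exactly what the episode argues high-impact AI needs built in.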
Scott heads to Microsoft's campus for the VS Code Insider Summit to sit down with Dr. Sarah Bird and explore what “Responsible AI” really means for developers. From protecting user privacy to keeping humans in the loop, they dig into how everyday coders can play a role in shaping AI's future.

Show Notes
00:00 Welcome to Syntax!
01:27 Brought to you by Sentry.io.
03:13 The path to machine learning.
04:44 How do you get to ‘Responsible AI'?
06:43 Is there such a thing as ‘Responsible AI'?
07:34 Does the average developer have a part to play?
09:12 How can AI tools protect inexperienced users?
11:55 Let's talk about user and company privacy.
13:57 Are local tools and services becoming more viable?
15:06 Are people right to be skeptical?
16:58 The software developer role is fundamentally changing.
17:43 Human in the loop.
19:37 The career path to Responsible AI.
21:21 Sick Picks.

Sick Picks
Sarah: Japanese pottery

Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads
Should AI in health records be considered a medical device? Emily Lewis, an AI thought leader, compares the U.S. FDA and U.K. NHS approaches to AI regulation in healthcare. She shares how global discrepancies affect responsible AI implementation and what leaders must do to stay compliant. Discover why local adaptation and ongoing education are critical.
During this age of AI, we talk about the national movement Responsible AI for America's Youth, which works to ensure every child in the United States has the opportunity to become a responsible and confident user of artificial intelligence (AI). This effort puts students and educators at the center of shaping how AI is brought into schools, elevating their voices in the national conversation. Jeff Riley, former Massachusetts Commissioner of Elementary and Secondary Education, now the Executive Director of Day of AI, checked in to discuss.
In this podcast, Carrie Cobb, chief data and AI officer at Blackbaud and one of DataIQ's 100 Most Influential People in Data, sits down for a powerful conversation on the foundational principles of responsible AI. As AI becomes increasingly embedded in the daily operations of social impact organizations, Carrie shares how fairness, transparency, and inclusiveness must guide every step of AI development and deployment. From inclusive data practices to human-centered design, this episode offers a roadmap for organizations seeking to build trust and drive impact through responsible innovation. This episode will inspire you to lead with values and build technology that truly serves people and communities.
Artificial intelligence has long been part of our world, but the rapid rise of generative AI has brought new urgency to questions of how we use it and how we use it responsibly. In this episode of Degrees of Impact, host Michelle Apuzzio speaks with Dr. Jeffrey Allan, assistant professor and director of the Institute for Responsible Technology at Nazareth University. Together, they explore the Institute's work, the ethical dilemmas that come with AI-driven innovation, and what it means for both universities and businesses striving to harness AI productively. Thank you for tuning in to this episode of Degrees of Impact, where we explore innovative ideas and the people behind them in higher education. To learn more about NACU and our programs, visit nacu.edu. Connect with us on LinkedIn: NACU If you enjoyed this episode, don't forget to subscribe, rate, and share it with your network.
Artificial intelligence has the power to reshape economies, societies, and our daily lives. But with its rapid rise comes an important question: how can we ensure AI is developed and applied ethically so that it serves humanity instead of harming it? Responsible use requires transparency, accountability, and inclusivity—but defining and implementing these is complex. JUST Capital, a nonprofit dedicated to advancing just business practices, is addressing this challenge by exploring what “just AI” looks like, while also giving both the public and companies a voice in shaping its future.

We invited Martin Whittaker, CEO of JUST Capital, to speak about how companies can responsibly navigate the opportunities and risks of AI. He highlighted the importance of aligning AI strategies with company values, building strong governance, and listening to stakeholders to guide ethical decision-making. Martin also shared insights from JUST Capital's new research, which reveals a gap between companies acknowledging AI and those taking meaningful steps, such as workforce training and transparency. He ultimately challenges business leaders to reflect on what it means to be a truly human company in an AI-driven world while assuming the responsibility that comes with this technology.

Listen for insights on:
How AI layoffs may require new ethical standards and practices
Why company culture determines success in AI adoption and use
Lessons from early leaders like IBM and Boston Scientific
The growing role of investors in shaping AI accountability

Resources + Links:
Martin Whittaker's LinkedIn
JUST Capital
The JUST Report: An Early Measure of JUST AI
2025 JUST 100

(00:00) - Welcome to Purpose 360 (00:13) - Martin Whittaker, JUST Capital, and AI (02:40) - Who Is JUST Capital? (03:33) - Describing Justness (04:44) - Responsible AI (08:25) - Early Measure of Just AI (11:12) - Martin's AI Usage (12:49) - AI Use Principles (14:58) - AI Study (17:04) - What Stood Out (21:44) - Adding AI Methodology (24:27) - Advice for Companies Slow to Adopt AI (26:38) - Last Thoughts (28:15) - Can AI Replace Humanity in Business? (29:57) - Wrap Up
Artificial intelligence (AI) is no longer just a buzzword in financial services. From lending to fraud detection to customer service, AI is steadily finding its way into community banks and credit unions. But for leaders, boards, and compliance teams, one pressing question remains: how do we adopt AI responsibly?

In this episode of the Banking on Data podcast, host Ed Vincent sits down with Beth Nilles, Director of Implementation, who brings more than 30 years of banking leadership across lending, operations, and compliance. Beth offers practical guidance for financial institution leaders who may be exploring AI for the first time - or wrestling with how to scale responsibly without falling behind on regulatory expectations.

Follow us to stay in the know!
Membership | Donations | Spotify | YouTube | Apple Podcasts

This week we hear from Larry Muhlstein, who worked on Responsible AI at Google and DeepMind before leaving to found the Holistic Technology Project. In Larry's words:

“Care is crafted from understanding, respect, and will. Once care is deep enough and in a generative reciprocal relationship, it gives rise to self-expanding love. My work focuses on creating such systems of care by constructing a holistic sociotechnical tree with roots of philosophical orientation, a trunk of theoretical structure, and technological leaves and fruit that offer nourishment and support to all parts of our world. I believe that we can grow love through technologies of togetherness that help us to understand, respect, and care for each other. I am committed to supporting the responsible development of such technologies so that we can move through these trying times towards a world where we are all well together.”

In this episode, Larry and I explore the “roots of philosophical orientation” and “trunk of theoretical structure” as he lays them out in his Technological Love knowledge garden, asking how technologies for reality, perspectives, and karma can help us grow a world in love. What is just enough abstraction? When is autonomy desirable and when is it a false god? What do property and selfhood look like in a future where the ground truths of our interbeing shape design and governance?

It's a long, deep conversation on fundamentals we need to reckon with if we are to live in futures we actually want. I hope you enjoy it as much as we did.

Our next dialogue is with Sam Arbesman, resident researcher at Lux Capital and author of The Magic of Code. We'll interrogate the distinctions between software and spellcraft, explore the unique blessings and challenges of a world defined by advanced computing, and probe the good, bad, and ugly of futures that move at the speed of thought…

✨ Show Links
• Hire me for speaking or consulting
• Explore the interactive knowledge garden grown from over 250 episodes
• Explore the Humans On The Loop dialogue and essay archives
• Browse the books we discuss on the show at Bookshop.org
• Dig into nine years of mind-expanding podcasts

✨ Additional Resources
“Growing A World In Love” — Larry Muhlstein at Hurry Up, We're Dreaming
“The Future Is Both True & False” — Michael Garfield on Medium
“Sacred Data” — Michael Garfield at Hurry Up, We're Dreaming
“The Right To Destroy” — Lior Strahilevitz at Chicago Unbound
“Decentralized Society: Finding Web3's Soul” — Puja Ohlhaver, E. Glen Weyl, and Vitalik Buterin at SSRN

✨ Mentions
Karl Schroeder's “Degrees of Freedom”
Joshua DiCaglio's Scale Theory
Geoffrey West's Scale
Hannah Arendt
Ken Wilber
Doug Rushkoff's Survival of the Richest
Manda Scott's Any Human Power
Torey Hayden
Chaim Gingold's Building SimCity
James P. Carse's Finite & Infinite Games
John C. Wright's The Golden Oecumene
Eckhart Tolle's The Power of Now

✨ Related Episodes

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe
Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society. Henrik and Kimberly discuss AI's impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google's experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.

Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at Oslo University. He is also the CEO of Pathwais.eu connecting strategy, uncertainty, and action through scenario-based risk management.

Related Resources
Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL
A transcript of this episode is here.
Send me a message

In this week's episode of the Sustainable Supply Chain Podcast, I sat down with fellow Irishman Paul Byrnes, CEO of Mavarick AI, to explore how manufacturers can use AI and data to tackle the notoriously difficult challenge of Scope 3 emissions.

Paul brings a unique perspective, rooted in both deep data science and hands-on manufacturing experience, and he didn't shy away from the hard truths: most companies still struggle with messy, unreliable data and limited supplier engagement. We unpack why primary data will soon become table stakes, why spend-based estimates can be 40% off the mark, and how engaging suppliers requires a simple but often overlooked question, what's in it for them?

We also discussed where AI genuinely moves the needle:
Boosting confidence in data accuracy by identifying gaps and “contaminated” entries
Providing personalised training to help suppliers meet sustainability requests
Uncovering and prioritising decarbonisation levers with clear ROI

Paul shared real-world examples, from medical devices to automotive, that show how targeted projects, rather than trying to tackle all 15 Scope 3 categories at once, deliver the best results. We also touched on the environmental footprint of AI itself, energy, water, rare materials, and how responsible computing and smaller, purpose-built models can reduce the impact.

For leaders wrestling with emissions strategy, Paul's advice is simple: start by mapping your data landscape. Know where you're rich, where you're poor, and build from there.

This is a practical, candid conversation about making sustainability and profitability work hand-in-hand, and why efficiency wins are so often sustainability wins.

Elevate your brand with the ‘Sustainable Supply Chain' podcast, the voice of supply chain sustainability.

Last year, this podcast's episodes were downloaded over 113,000 times by senior supply chain executives around the world.

Become a sponsor. Lead the conversation.

Contact me for sponsorship opportunities and turn downloads into dialogues.

Act today. Influence the future.

Podcast supporters
I'd like to sincerely thank this podcast's generous Subscribers:
Alicia Farag
Kieran Ognev

And remember you too can become a Sustainable Supply Chain+ subscriber - it is really easy and hugely important as it will enable me to continue to create more excellent episodes like this one and give you access to the full back catalog of over 460 episodes.

Podcast Sponsorship Opportunities:
If you/your organisation is interested in sponsoring this podcast - I have several options available. Let's talk!

Finally
If you have any comments/suggestions or questions for the podcast - feel free to just send me a direct message on LinkedIn, or send me a text message using this link.

If you liked this show, please don't forget to rate and/or review it. It makes a big difference to help new people discover it. Thanks for listening.
Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI), discusses the consequences of AI misuse and the importance of building trust in clinical applications of AI. She highlights the need for human-centered solutions, emphasizing ethics in healthcare, and discusses the evolution of the healthcare industry.
How do you get your organization trained up to use AI tools? Richard talks to Stephanie Donahue about her work implementing AI tools at Avanade and with Avanade's customers. Stephanie discusses how many workers are bringing their own AI tools, such as ChatGPT, to work and the risks that this represents for the organization. Having an approved set of tools helps people work in the right direction, but they will still need some education. The challenge lies in the rapidly shifting landscape and the lack of certifications. However, you'll have some individuals eager to utilize these tools, often on the younger side, and they can help build your practice. The opportunities are tremendous!

Links
ChatGPT Enterprise
Learning M365 Copilot
Secure and Govern Microsoft 365 Copilot
Microsoft Purview
Responsible AI at Microsoft
Microsoft Viva Engage

Recorded July 18, 2025
Join us as we sit down with Christina Stathopoulos, founder of Dare to Data and former Google and Waze data strategist, to discuss the unique challenges and opportunities for women in data science and AI. In this episode, you'll learn how data bias and AI algorithms can impact women and minority groups, why diversity in tech teams is crucial, and how inclusive design can lead to better, fairer technology. Christina shares her personal journey as a woman in data, offers actionable advice for overcoming imposter syndrome, and highlights the importance of education and allyship in building a more inclusive future for data and AI.

Panelists:
Christina Stathopoulos, Founder of Dare to Data - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
Dare to Data
Diversity at Alteryx
Invisible Women
Unmasking AI

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!

This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music.
All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series and Mike Johnson, CISO, Rivian. Joining them is Jennifer Swann, CISO, Bloomberg Industry Group.

In this episode:
Vulnerability management vs. configuration control
Open source security and supply chain trust
Building security leadership presence
AI governance and enterprise risk

Huge thanks to our sponsor, Vanta
Vanta's Trust Management Platform automates key areas of your GRC program—including compliance, internal and third-party risk, and customer trust—and streamlines the way you gather and manage information. A recent IDC analysis found that compliance teams using Vanta are 129% more productive. Get started today at Vanta.com/CISO.