In honor of National Safety Month, this special compilation episode of AI and the Future of Work brings together powerful conversations with four thought leaders focused on designing AI systems that protect users, prevent harm, and promote trust.

Featuring past guests:
- Silvio Savarese (Executive Vice President and Chief Scientist, Salesforce) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/15548310
- Navindra Yadav (Co-founder & CEO, Theom) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/12370356
- Eric Siegel (CEO, Gooder AI & Author) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14464391
- Ben Kus (CTO, Box) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14789034

✅ What You'll Learn:
- What it means to design AI with safety, transparency, and human oversight in mind
- How leading enterprises approach responsible AI development at scale
- Why data privacy and permissions are critical to safe AI deployment
- How to detect and mitigate bias in predictive models
- Why responsible AI requires balancing speed with long-term impact
- How trust, explainability, and compliance shape the future of enterprise AI

Resources:
Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe

Other special compilation episodes:
- Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)
- Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
- The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust
- World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder
Michael Strange has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation, and prescribes an interdisciplinary approach to making AI well. Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system.

Michael Strange is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health & Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures).

Related Resources:
- If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/
- Beyond ‘Our product is trusted!' – A processual approach to trust in AI healthcare (paper): https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539
- Michael Strange (website): https://mau.se/en/persons/michael.strange/

A transcript of this episode is here.
The integration of Artificial Intelligence (AI) in healthcare presents both opportunities and challenges that demand careful consideration. The complex interplay between innovation, regulation, and ethical governance is a central theme at the heart of global discussions on health AI. This dialogue was brought to the forefront in a recent conversation with Ricardo Baptista Leite, CEO of Health AI - Global Agency for Responsible AI in Healthcare.

Understanding Health AI and Its Mission
Health AI, the global agency for responsible AI in health, is at the forefront of steering the development and adoption of AI solutions through collaborative regulatory mechanisms and global standards.

www.facesofdigitalhealth.com
https://fodh.substack.com/
“Climate activist in a suit”. This is how Rainer Karcher describes himself. It is an endless debate between people advocating for the system to change from the outside and those willing to change it from within. In this episode, Gaël Duez welcomes a strong advocate of moving the corporate world in the right direction from the inside. Having spent two decades at companies such as Siemens and Allianz, Rainer Karcher knows the corporate world well, and he now advises it on sustainability. In this Green IO episode, they analyse the current backlash against ESG in the corporate world and what can be done to keep big companies aligned with the Paris Agreement, while also caring about biodiversity and human rights across their supply chains. Many topics were covered, such as:
- Why ESG has nothing to do with “saving the planet”
- 3 tips to tackle the end-of-the-month vs. end-of-the-world dilemma
- Embracing a global perspective on ESG, and why the current backlash is a Western-world-only issue
- Knowing the price we pay for AI and how to avoid the rebound effect
- The challenge with shadow AI and why training is pivotal
And yes, they also talked about whales, and many more things!
Kevin Werbach interviews Dale Cendali, one of the country's leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on Generative AI training, what counts as infringement in AI outputs, and what is sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution.

Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm's nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute's Copyright Restatement project and sits on the Board of the International Trademark Association.

Transcript
Thomson Reuters Wins Key Fair Use Fight With AI Startup
Dale Cendali - 2024 Law360 MVP
Copyright Office Report on Generative AI Training
In this episode of AI Answers, Paul Roetzer and Cathy McPhillips tackle 20 of the most pressing questions from our 48th Intro to AI class—covering everything from building effective AI roadmaps and selecting the right tools to using GPTs, navigating AI ethics, understanding great prompting, and more.

Access the show notes and show links here

Timestamps:
00:00:00 — Intro
00:08:46 — Question #1: How do you define a “human-first” approach to AI?
00:11:33 — Question #2: What uniquely human qualities do you believe we must preserve in an AI-driven world?
00:15:55 — Question #3: Where do we currently stand with AGI—and how close are OpenAI, Anthropic, Google, and Meta to making it real?
00:17:53 — Question #4: If AI becomes smarter, faster, and more accessible to all—how do individuals or companies stand out?
00:23:17 — Question #5: Do you see a future where AI agents can collaborate like human teams?
00:28:40 — Question #6: For those working with sensitive data, when does it make sense to use a local LLM over a cloud-based one?
00:30:50 — Question #7: What's the difference between ChatGPT Projects and Custom GPTs?
00:32:36 — Question #8: If an agency or consultant is managing dozens of GPTs, what are your best tips for organizing workflows, versioning, and staying sane at scale?
00:36:12 — Question #9: How do you personally decide which AI tools to use—and do you see a winner emerging?
00:38:53 — Question #10: What tools or platforms in the agent space are actually ready for production today?
00:43:10 — Question #11: For companies just getting started, how do you recommend they identify the right pain points and build their AI roadmap?
00:45:34 — Question #12: What AI tools do you believe deliver the most value to marketing leaders right now?
00:46:20 — Question #13: How is AI forcing agencies and consultants to rethink their models, especially with rising efficiency and lower costs?
00:51:14 — Question #14: What does great prompting actually look like? And how should employers think about evaluating that skill in job candidates?
00:54:40 — Question #15: As AI reshapes roles, does age or experience become a liability—or can being the most informed person in the room still win out?
00:56:52 — Question #16: What kind of changes should leaders expect in workplace culture as AI adoption grows?
01:00:54 — Question #17: What is ChatGPT really storing in its “memory,” and how persistent is user data across sessions?
01:02:11 — Question #18: How can businesses safely use LLMs while protecting personal or proprietary information?
01:02:55 — Question #19: Why do you think some companies still ban AI tools internally—and what will it take for those policies to shift?
01:04:13 — Question #20: If AI tools are free or low-cost, does that make us the product? Or is there a more optimistic future where creators and users both win?

This week's episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types. For more information on MAICON and to register for this year's conference, visit www.MAICON.ai.

Visit our website
Receive our weekly newsletter
Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook
Looking for content and resources?
Register for a free webinar
Come to our next Marketing AI Conference
Enroll in our AI Academy
Tess Posner is the CEO and founding leader of AI4ALL, a nonprofit that works to ensure the next generation of AI leaders is diverse and well-equipped to innovate. Since joining in 2017, she has focused on embedding ethics, responsibility, and real-world impact into AI education. Her work connects students from underrepresented backgrounds to hands-on projects and mentorships that prepare them to lead in tech. Beyond her role at AI4ALL, Tess is a musician whose 2023 EP Alchemy has over 600,000 streams on Spotify. She was named a 2020 Brilliant Woman in AI Ethics Hall of Fame Honoree and holds degrees from St. John's University and Columbia University.

In this conversation, we discuss:
- Why AI literacy is becoming essential for everyone, from casual users to future developers
- The role of project-based learning in helping students see the real-world impact of AI
- What it takes to expand AI access for underrepresented communities
- How AI can either reinforce bias or drive real change, depending on who's leading its development
- Why schools should stop penalizing AI use and instead teach students to use it with curiosity and responsibility
- Tess's views on balancing optimism and caution in the development of AI tools

Resources:
- Subscribe to the AI & The Future of Work Newsletter
- Connect with Tess on LinkedIn or learn more about AI4ALL
- AI fun fact article: On How To Build and Activate a Powerful Network

Past episodes mentioned in this conversation:
- [With Tess in 2020] - About what leaders do in a crisis
- [With Tess in 2019] - About how to mitigate AI bias and hiring best practices
- [With Chris Caren, Turnitin CEO] - On Using AI to Prevent Students from Cheating
- [With Marcus "Bellringer" Bell] - On Creating North America's First AI Artist
Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

Transcript
AI Audits: Who, When, How...Or Even If?
Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda
Andriy Burkov talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents. Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word).

Andriy Burkov holds a PhD in Artificial Intelligence and is the author of The Hundred Page Machine Learning and Language Models books. His Artificial Intelligence Newsletter reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer.

Related Resources:
- The Hundred Page Language Models Book: https://thelmbook.com/
- The Hundred Page Machine Learning Book: https://themlbook.com/
- True Positive Weekly (newsletter): https://aiweekly.substack.com/

A transcript of this episode is here.
June 11, 2025 ~ Misleading photographs, videos and text have spread widely on social media as protests against immigrant raids have unfolded in Los Angeles. Anjana Susarla, professor of Responsible AI at Michigan State University, talks with Chris and Lloyd about verifying sources and authenticity, government oversight, and much more.
Imagine having to color a map based on the social impacts of artificial intelligence, shading the areas most at risk of being affected. Which would you expect them to be? The cities or the countryside? The centers or the peripheries? Mid-sized cities or large urban centers? A study conducted by Nokia Bell Labs in Cambridge and the Politecnico di Torino examined the likely impact of AI on work in the major urban areas of the United States. The resulting map offers several points for reflection and flags a risk: that of deepening the country's existing social divides, hitting in particular mid-sized cities strongly tied to a single type of industry. Daniele Quercia, Director of Responsible AI at Nokia Bell Labs Cambridge, tells us about it.
How is sustainability covered in the main tech conferences? Sure, cybersecurity, DevOps, and anything related to SRE are covered at length. Not to mention AI… But what room is left for the environmental impact of our job? And which of the main trends filter through from specialized Green IT conferences such as Green IO, GreenTech Forum, or eco-compute to generic tech conferences? To talk about it, Gaël Duez sat down in this latest Green IO episode with Erica Pisani, who was the MC of the Performance and Sustainability track at QCon London this year. Together they discussed:
- The inspiring speakers in the track
- Why QCon didn't become AIcon
- How to get C-level buy-in by highlighting the new environmental risk
- The limits of efficiency: finely balancing hardware stress and usage optimization
- Why performance and sustainability are tightly linked in technology
- Why assessing Edge computing's positive and negative impact is tricky
And much more!

❤️ Subscribe, follow, like, ... stay connected the way you want to never miss an episode, twice a month, on Tuesday!
Join us in this episode to engage with the thought-provoking theme of AI's disruption of content creation as Lukas Egger elucidates its impact on business trust. Understand the transformation from a content-driven to a trust-driven market and grasp how leading businesses are leveraging trust as their currency of choice. This episode is a must-listen for anyone eager to navigate through the evolving challenges and opportunities presented by AI.
Eric Brown Jr. is the founder of ELVTE Coaching and Consulting and a Generative AI innovation lead at Microsoft. In this powerful conversation with Rob Richardson, he unpacks how early adversity became fuel for legacy. From mentoring underserved youth to helping enterprise teams align tech with purpose, Eric proves that impact isn't just about innovation — it's about elevation.

Disruption Now Episode 180

Inside This Episode:
- Life Hacker Mindset: How reframing pain unlocks potential
- AI with Empathy: Why tech that doesn't center people fails
- The Power of Context: Making technology relatable and actionable for all

Connect with Eric Brown Jr.:
LinkedIn: www.linkedin.com/in/ericbrownjr
Forbes Council: councils.forbes.com/profile/Eric-Brown-Jr-Founder-%7C-Chief-Transformation-Officer-ELVTE-Coaching-and-Consulting/440ec31a-0e0d-4650-ae7c-a2b401148572
Thought Leadership: linkedin.com/pulse/empowering-dreams-lessons-learned-from-any-fellow-eric-brown-jr

Disruption Now
Apply to be a guest: form.typeform.com/to/Ir6Agmzr
Watch more episodes: podcast.disruptionnow.com
Disruption Now: Building a fair share for the Culture and Media. Join us and disrupt.
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
LinkedIn: https://www.linkedin.com/in/robrichardsonjr/
Instagram: https://www.instagram.com/robforohio/
Website: https://podcast.disruptionnow.com/
After a short spring break, Alois and Oliver return with a jam session and turn to the question of how companies, from start-ups to global corporations, as well as states, can develop and implement a practical AI strategy.

From buzzword to value creation:
- "AI" is used inflationarily; real orientation is missing.
- First step: identify concrete use cases in your own core processes, instead of writing PowerPoint strategies or blindly copying external templates.

Get moving quickly:
- Working groups slow things down; better to use sandbox approaches and experiment with ready-made toolchains.
- Success is measured by hard metrics: throughput times, quality, costs.

Data competence & compliance as the foundation:
- Build data literacy and clarity about IP-relevant data.
- Legal, data protection, and the works council must be brought along, but only after the use-case proof, so that hurdles are not artificially inflated.

Legacy as an opportunity rather than a burden:
- Old system landscapes resemble Jenga towers: risky, but replaceable.
- AI enables refactoring in weeks instead of years and offers hidden champions "leapfrog" potential to disrupt markets.

Resources & cultural change:
- Huge IT departments must be rethought: productivity multiplies through AI-supported development.
- Organizations, up to and including states, need speed, agility, and a clear stance on Responsible AI.

Redefining IP:
- In a world of generative models, source-code monopolies lose weight; patents and market penetration gain relative importance.
- Global standardization bodies are searching for new rules for copyright and patent protection.

Conclusion:
Hesitation is not an option: whoever misses the learning curve becomes unattractive to talent and loses competitive advantages. At the same time, deliberate boundaries are needed ("Where do we not use AI?") to safeguard values, ethics, and independence.
with Audrey Watters | Episode 903 | Tech Tool Tuesday

Are we racing toward an AI future without asking the right questions? Author and ed-tech critic Audrey Watters joins me to show teachers how to hit pause, get thoughtful, and keep classroom relationships at the center.

Sponsored by Rise Vision
Did you know the same solution that powers my AI classroom also drives campus-wide emergency alerts and digital signage? See how Rise Vision can save your school thousands: RiseVision.com/10MinuteTeacher

Highlights Include:
- Why “human first” still beats the newest AI tool: Audrey explains how relationships drive real learning.
- Personalized learning myths busted: How algorithmic “solutions” can isolate students.
- Practical guardrails for AI: Three reflection questions every teacher should ask before hitting “assign.”
We welcome Karla Childers to AI Uncovered. Karla is a long-standing leader in bioethics and data transparency in the pharmaceutical industry. As part of the Office of the Chief Medical Officer at Johnson & Johnson, she brings deep expertise in navigating the ethical implications of emerging technologies, especially artificial intelligence, in medicine and drug development.

In this episode, Tim and Karla explore the intersection of AI, bioethics and patient-centered development. They discuss how existing ethical frameworks are being challenged by the rise of generative AI and why maintaining human oversight is critical—especially in high-context areas like clinical trial design, consent and medical communications. Karla also shares her views on the future of data privacy, the complexity of patient agency and how to avoid losing trust in the race for efficiency.

Karla is a strong advocate for using innovation responsibly. From her work with internal bioethics committees to her perspective on evolving regulatory expectations, she offers bold insights into how the industry can modernize without compromising ethics or equity.

Welcome to AI Uncovered, a podcast for technology enthusiasts that explores the intersection of generative AI, machine learning, and innovation across regulated industries. With the AI software market projected to reach $14 trillion by 2030, each episode features compelling conversations with an innovator exploring the impact of generative AI, LLMs, and other rapidly evolving technologies across their organization. Hosted by Executive VP of Product at Yseop, Tim Martin leads a global team and uses his expertise to manage the wonderful world of product.
We only talk about the upside of agentic AI. But why don't we talk about the risks? As AI agents grow exponentially more capable, so too does the likelihood of something going wrong. So how can we take advantage of agentic AI while also addressing the risks head-on? Join us to learn from a global leader on Responsible AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Responsible AI: Evolution and Challenges
- Agentic AI's Ethical Implications
- Multi-Agentic AI Responsibility Shift
- Microsoft's AI Governance Strategies
- Testing Multi-Agentic Risks and Patterns
- Agentic AI: Future Workforce Skills
- Observability in Multi-Agentic Systems
- Three Risk Categories in AI Implementation

Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs

Keywords: Agentic AI, multi agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
From WEDI's March 27th virtual spotlight on Artificial Intelligence, former WEDI Board Chair Ed Hafner chats with an impressive group of health care professionals about the benefits, challenges, and future of artificial intelligence in health care.

The panel:
- Robert Laumeyer, CTO, Availity
- Nick Marzotto, Product Informaticist, Epic
- Andy Chu, SVP of Product and Technology Incubation, Providence
- Peter Clardy, MD, Senior Staff Clinical Specialist, Google Health
- Merage Ghane, PhD, Director of Responsible AI in Health, Coalition for Health AI (CHAI)
Host Kevin Werbach interviews Uthman Ali, Global Responsible AI Officer at BP, to delve into the complexities of implementing responsible AI practices within a global energy company. Ali emphasizes how the industry's culture of safety influences BP's willingness to engage in AI governance. He discusses the necessity of embedding ethical AI principles across all levels of the organization, with tailored training programs for various employee roles—from casual AI users to data scientists—to ensure a comprehensive understanding of AI's ethical implications. He also highlights the importance of proactive governance, advocating for the development of ethical policies and procedures that address emerging technologies such as robotics and wearables. Ali's approach underscores the balance between innovation and ethical responsibility, aiming to foster an environment where AI advancements align with societal values and regulatory standards.

Uthman Ali is BP's first Global Responsible AI Officer, and has been instrumental in establishing the company's Digital Ethics Center of Excellence. He advises prominent organizations such as the World Economic Forum and the British Standards Institute on AI governance and ethics. Additionally, Ali contributes to research and policy discussions as an advisor to Oxford University's Oxethica spinout and various AI safety institutes.

Transcript
Prioritizing People and Planet as the Metrics for Responsible AI (IEEE Standards Association)
Robocops and Superhumans: Dilemmas of Frontier Technology (2024 podcast interview)
Ravit Dotan, PhD asserts that beneficial AI adoption requires clarity of purpose, good judgment, ethical leadership, and making responsibility integral to innovation. Ravit and Kimberly discuss the philosophy of science; why all algorithms incorporate values; how technical judgements centralize power; not exempting AI from established norms; when lists of risks lead us astray; wasting water, eating meat, and using AI responsibly; corporate ethics washing; patterns of ethical decoupling; reframing the relationship between responsibility and innovation; measuring what matters; and the next phase of ethical innovation in practice.

Ravit Dotan, PhD is an AI ethics researcher and governance advisor on a mission to enable everyone to adopt AI the right way. The Founder and CEO of TechBetter, Ravit holds a PhD in Philosophy from UC Berkeley and is a sought-after advisor on the topic of responsible innovation.

Related Resources:
- The AI Treasure Chest (Substack): https://techbetter.substack.com/
- The Values Embedded in Machine Learning Research (paper): https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083

A transcript of this episode is here.
Debbie Reynolds “The Data Diva” talks to Temi Odesanya, Director, Responsible AI and Automation. We discuss her extensive background in artificial intelligence automation, detailing her journey that began with her family's business in the Netherlands. This early exposure to technology ignited her interest in data science, leading her to pursue postgraduate degrees and a scholarship in Italy. Throughout her career, she has become increasingly aware of the critical importance of data governance and compliance, particularly regarding customer data. The conversation highlights Temi's curiosity and the significance of ethical considerations in technology, especially in the context of automated decision-making and data privacy.
AI can do more than it's ever done… but there's a lot of unfounded hype, especially when it comes to user research. When should you delegate tasks to AI? And when should you insist on keeping the human in the loop? In this episode, Therese Fessenden sits down with Alexander Knoll, co-founder of Condens, to discuss the strengths and limitations of AI tools for research, and the evolving role of the user researcher.

About Alexander & Condens: LinkedIn | Condens.io
Research Repository Guide: https://condens.io/guides/research-repository-guide/
On-Demand Recording of Condens Event: Making an Impact with User Research: How to Drive Change and Get Noticed
Alex's Article with NN/g: Common-Sense AI Integration: Lessons from the Cofounder of Condens

Other Related NN/g Articles & Courses:
- Free Articles about AI & UX
- Course: UX Basic Training
- Course: Accelerating Research with AI
- Course: AI for Design Workflows
- Course: Designing AI Experiences
To coincide with International Human Resources Day (May 20th), this special compilation episode of AI and the Future of Work explores the promises and pitfalls of AI in hiring.

HR leaders are under pressure to innovate—but how can we automate hiring ethically, avoid bias, and stay compliant with evolving laws and expectations?

In this episode, we revisit key moments from past interviews with four top voices shaping the future of ethical workforce automation.
In this episode of The Beat, host Sandy Vance sits down with Dr. Heather Bassett, Chief Medical Officer at Xsolis and creator of the proprietary Care Level Score. Together, they explore the future of AI in healthcare and how real-world AI applications are already driving improved operational efficiency, reducing clinician burnout, and enhancing payer-provider collaboration. Dr. Bassett also shares insights from her recent involvement with CHAI.org, emphasizing why healthcare leaders must take initiative in developing responsible AI—without waiting for government mandates. Tune in to hear how Xsolis is helping health systems move from spreadsheets to smart automation, making data more actionable, and building a more transparent, interoperable ecosystem.

In this episode, they talk about:
- How Xsolis is working toward creating a frictionless healthcare system
- How Xsolis reduces manual tasks, decreasing clinician burnout, and boosting productivity
- Xsolis' use of data aggregation to minimize redundancy in the healthcare industry
- Moving healthcare teams off spreadsheets and into AI-driven solutions
- How client collaboration helps maximize the value Xsolis delivers
- CMS recognition of the need to eliminate unnecessary steps to accelerate patient care
- The role of interoperability in standardizing data exchange and enhancing context
- Why transparency is critical when vendors integrate artificial intelligence
- Evaluating whether vendors have the people and processes to support AI change management

A Little About Heather:
Dr. Heather Bassett is the Chief Medical Officer at Xsolis, an AI-driven health technology company transforming healthcare through a human-centered approach. With over 20 years of experience in clinical care and health IT, she leads Xsolis' medical and data science teams and co-developed the company's signature innovation—the Care Level Score, which blends clinical expertise with AI and machine learning to assess patient status in real time.

A board-certified internist and former hospitalist, Dr. Bassett oversees Xsolis' award-winning physician advisor program, denials management, and AI model development. She's a frequent speaker at national healthcare conferences, including ACMA and HFMA, and has been featured in Becker's, MedCity News, and Medical Economics. Recognized as CMO of the Year by the Nashville Business Journal and named one of Becker's Women in Health IT to Know (2023, 2024), Dr. Bassett is also a member of CHAI.org, advocating for responsible AI in healthcare.
In this episode of Numbers and Narratives, Sean and Ibby dive deep into the world of responsible AI with guest Sarah Payne, AI Strategy and Program Lead at Coinbase. Sarah shares her expertise on implementing AI across workflows while prioritizing ethics and user trust. The conversation explores the challenges of developing AI systems that are not just efficient, but also ethically sound and safe for users.

Sarah discusses the importance of having humans in the loop during AI development, gradually reducing human involvement as systems are validated over time. The hosts and guest also delve into the complexities of designing guardrails for AI, especially when dealing with non-declarative systems like large language models. Sarah provides valuable insights on using multiple models to cross-check responses and flag potential issues, as well as leveraging real customer interactions to test and improve AI workflows. Tune in to gain a deeper understanding of responsible AI practices and the challenges facing companies as they navigate this rapidly evolving landscape.
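To make the cross-checking idea concrete, here is a minimal sketch of what "using multiple models to flag potential issues" might look like in code. This is an illustration only, not Sarah's or Coinbase's implementation: the two model functions are hypothetical stand-ins for calls to two different LLMs, and a real system would compare responses with something smarter than string equality, such as an LLM judge or embedding similarity.

```python
# Minimal sketch: ask two independent models, flag disagreement for review.
# model_a and model_b are hypothetical stand-ins for real LLM API calls.

def model_a(prompt: str) -> str:
    return "Transfers typically settle within 1 business day."

def model_b(prompt: str) -> str:
    return "Transfers typically settle within 5 business days."

def cross_check(prompt: str) -> dict:
    """Route the same prompt to both models; escalate to a human on conflict."""
    a, b = model_a(prompt), model_b(prompt)
    agree = a.strip().lower() == b.strip().lower()  # naive agreement check
    return {
        "response": a if agree else None,   # only answer when models agree
        "needs_human_review": not agree,    # human in the loop on conflict
        "candidates": [a, b],               # keep both answers for auditing
    }

if __name__ == "__main__":
    result = cross_check("How long do transfers take?")
    print(result["needs_human_review"])  # True -> route to a person
```

The design choice mirrors the guardrail pattern discussed in the episode: rather than trusting a single non-declarative model, disagreement between independent models becomes a cheap signal for when to keep the human in the loop.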
What makes AI trustworthy, ethical, and compliant in business? In this episode, we explore how Chief AI Officers lead governance efforts to align innovation with regulation. Learn how the CAIO bridges strategy, risk, and ethics to ensure responsible AI use across the enterprise. Ideal for executives, managers, and consultants navigating AI transformation.
Dr. Ash Watson studies how stories ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic shape our views of AI. Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can't escape the stories we're told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI's promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI's claims to innovation; who innovation is really for; and positive developments in co-design and participatory research.

Dr. Ash Watson is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society (CADMS).

Related Resources:
- Ash Watson (website): https://awtsn.com/
- The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (article): https://doi.org/10.1111/1467-9566.13840
- An imperative to innovate? Crisis in the sociotechnical imaginary (article): https://doi.org/10.1016/j.tele.2024.102229

A transcript of this episode is here.
Is your AI helping—or quietly hurting—your business? In this episode, we uncover how hidden biases in large language models can quietly erode trust, derail decision-making, and expose companies to legal and reputational risk. You'll learn actionable strategies to detect, mitigate, and govern AI bias across high-stakes domains like hiring, finance, and healthcare. Perfect for corporate leaders and consultants navigating AI transformation, this episode offers practical insights for building ethical, accountable, and high-performing AI systems.
Agentic AI is as daunting as it is dynamic. So… how do you not screw it up? After all, the more robust and complex agentic AI becomes, the more room there is for error. Luckily, we've got Dr. Maryam Ashoori to guide our agentic ways. Maryam is the Senior Director of Product Management of watsonx at IBM. She joined us at IBM Think 2025 to break down agentic AI done right.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Agentic AI Benefits for Enterprises
- watsonx's New Features & Announcements
- AI-Powered Enterprise Solutions at IBM
- Responsible Implementation of Agentic AI
- LLMs in Enterprise Cost Optimization
- Deployment and Scalability Enhancements
- AI's Impact on Developer Productivity
- Problem-Solving with Agentic AI

Timestamps:
00:00 AI Agents: A Business Imperative
06:14 "Optimizing Enterprise Agent Strategy"
09:15 Enterprise Leaders' AI Mindset Shift
09:58 Focus on Problem-Solving with Technology
13:34 "Boost Business with LLMs"
16:48 "Understanding and Managing AI Risks"

Keywords: Agentic AI, AI agents, Agent lifecycle, LLMs taking actions, WatsonX.ai, Product management, IBM Think conference, Business leaders, Enterprise productivity, WatsonX platform, Custom AI solutions, Environmental Intelligence Suite, Granite Code models, AI-powered code assistant, Customer challenges, Responsible AI implementation, Transparency and traceability, Observability, Optimization, Larger compute, Cost performance optimization, Chain of thought reasoning, Inference time scaling, Deployment service, Scalability of enterprise, Access control, Security requirements, Non-technical users, AI-assisted coding, Developer time-saving, Function calling, Tool calling, Enterprise data integration, Solving enterprise problems, Responsible implementation, Human in the loop, Automation, IBM savings, Risk assessment, Empowering workforce.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
When AI goes wrong, who takes the blame? In this episode, we unpack the high-stakes risks of ungoverned AI and reveal why clear accountability is vital for business leaders. Discover practical steps to safeguard your organisation, align AI with ethical standards, and turn governance into a strategic advantage. Perfect for executives, consultants, and transformation leaders navigating AI's complex landscape.
On this edition of Ctrl Alt Deceit: Democracy in Danger, we are live at the Royal United Services Institute. Nina Dos Santos and Owen Bennett Jones are joined by a world-class panel to discuss the dangers posed by the waves of dark money threatening to overwhelm our democratic institutions.

Panelists:
- Tom Keatinge, Director, Centre for Finance and Security, RUSI
- Darren Hughes, Chief Executive, Electoral Reform Society
- Gina Neff, Executive Director, Minderoo Centre for Technology & Democracy at the University of Cambridge, and Professor of Responsible AI, Queen Mary University London

Producer: Pearse Lynch
Executive Producer: Lucinda Knight

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/politics-and-polemics
In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure.

Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact.

Transcript
Responsible AI: Empowering Innovation with Integrity
Putting Responsible AI into Action (video masterclass)
The endless excitement around Agentic AI might seem to eclipse the traditional blocking and tackling of data management, but don't be fooled. The fundamentals of working with data are now more important than ever. If anything, the lure of AI puts added pressure on teams to button down their data pipelines and move closer to optimal data orchestration, whether for data warehousing, RAG models, or training the next generation of deep learning modules. Register for this episode of InsideAnalysis to learn best practices for getting your data house in order! Host @eric_kavanagh will explain why Responsible AI starts and ends with data quality. He'll be joined by Ariel Pohoryles and Mani Gill of Boomi, who will demonstrate why optimal data flows will be crucial for success with AI.

Attendees will learn:
- the power of data orchestration for optimizing AI
- why a platform approach to data management is crucial
- the importance of feeding AI Agents with trusted, real-time data
- how organizations can overcome data inertia to catch the AI train
Minister Jack Chambers is launching 'Guidelines for the Responsible use of Artificial Intelligence in the Public Service'. Artificial Intelligence is changing how we live, work, and engage with the world around us. Governments worldwide face the challenge of meeting the digital expectations of their end-users while keeping pace with advancements in technology. These Guidelines complement and inform strategies regarding the adoption of innovative technology and ways of working already underway in the public service, and seek to set a high standard for public service transformation and innovation, while prioritising public trust and people's rights.

The Guidelines have been developed to actively empower public servants to use AI in the delivery of services. By firmly placing the human in the process, these guidelines aim to enhance public trust in how Government uses AI. A range of resources designed to support the adoption of AI have been developed, including clear information on Government's Principles for Responsible AI, a Decision Framework for evaluating the potential use of AI, a Responsible AI Canvas Tool to be used at planning stage, and the AI Lifecycle Guidance tool. Other government supports available to public service organisations also include learning and development materials and courses for public servants at no cost. In this regard, and in addition to its existing offering on AI, the Institute for Public Administration will provide a tutorial and in-person training dedicated to the AI Guidelines to further assist participants in applying the guidelines in their own workplaces.

The guidelines contain examples of how AI is already being used across public services, including:
- St. Vincent's University Hospital exploring the potential for AI to assist with performing heart ultrasound scans, in order to help reduce waiting times for patients.
- The Revenue Commissioners using Large Language Models to route taxpayer queries more efficiently, ensuring faster and more accurate responses.
- The Department of Agriculture, Food and the Marine developing an AI-supported solution to detect errors in grant applications and reduce processing times for applications.

Minister Jack Chambers said: "AI offers immense possibilities to improve the provision of public services. These guidelines support public service bodies in undertaking responsible innovation in a way that is practical, helpful and easy to follow.

"In keeping with Government's AI strategy, the guidance as well as the learning and development supports being offered by the Institute for Public Administration, will help public servants to pursue those opportunities in a way that is responsible.

"AI is already transforming our world and it is crucial that we embrace that change and adapt quickly in order to deliver better policy and better public services for the people of Ireland."

Minister of State for Public Procurement, Digitalisation and eGovernment, Emer Higgins said: "AI holds the potential to revolutionise how we deliver services, make decisions, and respond to the needs of our people. These guidelines will support thoughtful integration of AI into our public systems, enhance efficiency, and reduce administrative burdens and financial cost. Importantly, this will be done with strong ethical and human oversight, ensuring fairness, transparency, accountability, and the protection of rights and personal data at every step."
Minister of State for Trade Promotion, Artificial Intelligence and Digital Transformation, Niamh Smyth said: "Government is committed to leveraging the potential of AI for unlocking productivity, addressing societal challenges, and delivering enhanced services. The guidelines launched today are part of a whole-of-government approach to putting in place the necessary enablers to underpin responsible and impactful AI adoption across the public service. They are an important step in meeting government's objective of better outcomes through AI adoption."
In this episode of Risk Management Brick by Brick, The Power of AI in Risk - Episode 7, host Jason Reichl sits down with Rohan Sen, Principal in Data Risk and Privacy Practice at PwC, to explore the critical intersection of AI innovation and risk management. They dive into how organizations can implement responsible AI practices while maintaining technological progress.
With AI tools becoming more common across HR and people functions, HR leaders across the globe are asking the same question: how do we use AI without compromising on empathy, ethics, and culture? So, in this special bonus episode of the Digital HR Leaders podcast, host David Green welcomes Kevin Heinzelman, SVP of Product at Workhuman, to discuss this very critical topic. David and Kevin share a core belief: that technology should support people, not replace them, and in this conversation, they explore what that means in practice.

Tune in as they discuss:
- Why now is a critical moment for HR to lead with a human-first mindset
- How HR can retain control and oversight over AI-driven processes
- The unique value of human intelligence, and how it complements AI
- How recognition can support skills-based transformation and company culture during times of radical transformation
- What ethical, responsible AI looks like in day-to-day HR practice
- How to avoid common pitfalls like bias and data misuse
- Practical ways to integrate AI without losing sight of culture and care

Whether you're early in your AI journey or looking to scale responsibly, this episode, sponsored by Workhuman, offers clear, grounded insight to help HR lead the way - with purpose and with people in mind.

Workhuman is on a mission to help organisations build more human-centred workplaces through the power of recognition, connection, and Human Intelligence. By combining AI with the rich data from their #1 rated employee recognition platform, Workhuman delivers the insights HR leaders need to drive engagement, culture, and meaningful change at scale. To learn more, visit Workhuman.com and discover how Human Intelligence can help your organisation lead with purpose.

Hosted on Acast. See acast.com/privacy for more information.
Join host Nick Schutt on Robots & Red Tape as he sits down with Christopher Teixeira, Principal Data Scientist at MITRE, to explore the critical role of data quality in powering AI-driven decisions for public health and human welfare. Dive into the complexities of building reliable AI models in high-stakes environments, where data integrity can mean the difference between success and unintended consequences.

In this thoughtful conversation, discover:
- How MITRE leverages high-quality data to support agencies like the CDC and CMS, driving impactful public health outcomes.
- Real-world lessons from Chris's career, including how strategic data selection can optimize AI performance without sacrificing accuracy.
- The importance of diverse teams and continuous model evaluation to ensure ethical, effective AI applications.
- Strategies for balancing AI's potential with human oversight to address challenges in child welfare and beyond.

Ideal for data scientists, policymakers, and tech enthusiasts eager to understand how AI and data can shape a better future when guided by rigor and responsibility.

#AI #DataScience #PublicHealth #ChildWelfare #MITRE #RobotsAndRedTape #TechForGood #DataQuality #EthicalAI #GovTech #Innovation #TechnologyPodcast #ArtificialIntelligence #Podcast #GovernmentInnovation #AIEthics #DataDriven #PublicSector #NextGenAI
Professor Werbach interviews David Weinberger, author of several books and a long-time deep thinker on internet trends, about the broader implications of AI on how we understand and interact with the world. They examine the idea that throughout history, dominant technologies—like the printing press, the clock, or the computer—have subtly but profoundly shaped our concepts of knowledge, intelligence, and identity. Weinberger argues that AI, and especially machine learning, represents a new kind of paradigm shift: unlike traditional computing, which requires humans to explicitly encode knowledge in rules and categories, AI systems extract meaning and make predictions from vast numbers of data points without needing to understand or generalize in human terms. He describes how these systems uncover patterns beyond human comprehension—such as identifying heart disease risk from retinal scans—by finding correlations invisible to human experts. Their discussion also grapples with the disquieting implications of this shift, including the erosion of explainability, the difficulty of ensuring fairness when outcomes emerge from opaque models, and the way AI systems reflect and reinforce cultural biases embedded in the data they ingest. The episode closes with a reflection on the tension between decentralization—a value long championed in the internet age—and the current consolidation of AI power in the hands of a few large firms, as well as Weinberger's controversial take on copyright and data access in training large models.

David Weinberger is a pioneering thought-leader about technology's effect on our lives, our businesses, and ideas. He has written several best-selling, award-winning books explaining how AI and the Internet impact how we think the world works, and the implications for business and society. In addition to writing for many leading publications, he has been a writer-in-residence, twice, at Google AI groups, Editor of the Strong Ideas book series for MIT Press, a Fellow at the Harvard Berkman Klein Center for Internet and Society, contributor of dozens of commentaries on NPR's All Things Considered, a strategic marketing VP and consultant, and for six years a Philosophy professor.

Transcript
Everyday Chaos
Our Machines Now Have Knowledge We'll Never Understand (Wired)
How Machine Learning Pushes Us to Define Fairness (Harvard Business Review)
On today's episode, we're joined by Shub Agarwal, author of Successful AI Product Creation: A 9-Step Framework, available from Wiley, and a professor at the University of Southern California teaching AI and Generative AI product management to graduate students. He is also Senior Vice President of Product Management for AI and Generative AI at U.S. Bank. Shub joins Emerj's Managing Editor Matthew DeMello on the show today to offer his perspective on what responsible AI adoption truly looks like in a regulated environment - and why method matters more than models. With over 15 years of experience bringing enterprise-grade AI products to life, he explains why “AI is the new UX” - and what that means for the future of digital interaction in banking and beyond. He also dives into the nuances of responsible AI adoption - not as a buzzword but as a framework rooted in decades of data governance and enterprise rigor. The opinions that Shub expresses in today's show are his own and do not reflect those of U.S. Bank, the University of Southern California, or their respective leadership. This episode is sponsored by Searce. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
In this episode of the podcast, we are joined again by Dr Jamie Smith, Executive Chairman at C-Learning and author of the new book The Responsible AI Revolution. Jamie joins us to discuss the intersection of AI and education, emphasising the need for a responsible approach to AI implementation. Jamie introduces his book, which addresses the potential consequences of AI in education and the importance of asking deeper questions about its role. The conversation explores ethical considerations, the need for upskilling, and the redefinition of roles in the workforce as AI continues to evolve. In this conversation, we explore the transformative impact of AI on productivity, leadership, and organisational culture. We also discuss the necessity for leaders to embrace discomfort and innovation, the importance of a supportive culture for AI adoption, and the potential for a collective approach to AI governance. The dialogue also touches on the need for a reimagined educational framework that prioritises human well-being over standardised assessments, as well as the importance of living in the present and making meaningful contributions to society.

Chapters:
00:00 Introduction and Context of AI in Education
06:04 The Responsible AI Revolution
12:09 Ethical Considerations and Unintended Consequences
18:01 Upskilling and Redefining Roles in the Age of AI
27:26 Embracing AI: A Paradigm Shift
30:08 Positive Disruption and Innovation
32:03 Leadership in the Age of AI
36:03 The Role of Culture in AI Adoption
39:44 The Future of AI and Our Collective Responsibility
46:51 Rethinking Education for the AI Era

Grab a copy of The Responsible AI Revolution.

Thanks so much for joining us again for another episode - we appreciate you.
Ben & Steve x

Championing those who are making the future of education a reality.
Follow us on X
Follow us on LinkedIn
Check out all about Edufuturists
Want to sponsor future episodes or get involved with the Edufuturists work? Get in touch
Get your tickets for Edufuturists Uprising 2025
Today's podcast is a little more niche than usual, which oddly ends up being a message that I think all of us would benefit from hearing in one way or another. AI use is becoming more and more common in our lives, and it affects our brains in ways that we need to be thoughtful about. I describe that process here and send a message to younger people about the implications our use of AI has in our lives and personal development. Thanks for listening. As always, Much Love ❤️ and please take care.
We discussed a few things including:
1. Their career journeys
2. History of NFHA
3. Michael's impact on the organization; AI in housing/financial services
4. April 28-30 Responsible AI Symposium: https://events.nationalfairhousing.org/2025AISymposium
5. Trends, challenges and opportunities re fair housing and technology

Lisa Rice is the President and CEO of the National Fair Housing Alliance (NFHA), the nation's only national civil rights agency solely dedicated to eliminating all forms of housing discrimination and ensuring equitable housing opportunities for all people and communities. Lisa has led her team in using civil rights principles to bring fairness and equity into the housing, lending, and technology sectors. She is a member of the Leadership Conference on Civil and Human Rights Board of Directors, Center for Responsible Lending Board of Directors, FinRegLab Board of Directors, JPMorgan Chase Consumer Advisory Council, Mortgage Bankers Association Consumer Advisory Council, Freddie Mac Affordable Housing Advisory Council, Fannie Mae Affordable Housing Advisory Council, Quicken Loans Advisory Forum, Bipartisan Policy Center's Housing Advisory Council, and Berkeley's The Terner Center Advisory Council. She has received numerous awards including the National Housing Conference's Housing Visionary Award and was selected as one of TIME Magazine's 2024 ‘Closers.'

----

Dr. Michael Akinwumi translates values and principles to math and code. He ensures critical and emerging technologies like AI and blockchain enhance innovation, security, trust, and access in housing and financial systems, preventing historical injustices. As a senior leader, he collaborates with policymakers and industry to strengthen protections and advance innovation. A Rita Allen Civic Science Fellow at Rutgers, he developed an AI policy tool for state-level impact and co-developed an AI Readiness (AIR) Index to help state governments assess their AI maturity. Michael also advises AI companies on developing, deploying and adopting responsible innovations, driven by his belief that a life lived for others is most meaningful, aiming for lasting societal change through technology.

#podcast #afewthingspodcast
DJ Patil was the first-ever US Chief Data Scientist and has led some of the biggest data initiatives in government and business. He has also been at the forefront of leveraging AI to solve the thorniest problems companies face, as well as “stupid, boring problems in the back office.” He joins the WorkLab podcast to discuss the potential of AI to change business, how leaders can drive technological transformation, and why it's vital for data scientists to never lose sight of the human element.

WorkLab
Subscribe to the WorkLab newsletter
Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn't negate accountability; AI's negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.

Robert Mahari is a JD-PhD researcher at MIT Media Lab and the Harvard Law School where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs.

A transcript of this episode is here.

Additional Resources:
- The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
- Robert Mahari (website): https://robertmahari.com/
Noelle Russell on harnessing the power of AI in a responsible and ethical way

Noelle Russell compares AI to a baby tiger: it's super cute when it's small, but it can quickly grow into something huge and dangerous. As the CEO and founder of the AI Leadership Institute and as an early developer on Amazon Alexa, Noelle has a deep understanding of scaling and selling AI. This week Noelle joins Tammy to discuss why she's so passionate about teaching individuals and organizations about AI and how companies can leverage AI in the right way. It's time to learn how to tame the tiger!

Please note that the views expressed may not necessarily be those of NTT DATA.

Links:
- Noelle Russell
- Scaling Responsible AI
- AI Leadership Institute
- Learn more about Launch by NTT DATA

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.