We only talk about the upside of agentic AI. But why don't we talk about the risks? As AI agents grow exponentially more capable, so too does the likelihood of something going wrong. So how can we take advantage of agentic AI while also addressing the risks head-on? Join us to learn from a global leader on Responsible AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Responsible AI: Evolution and Challenges
Agentic AI's Ethical Implications
Multi-Agentic AI Responsibility Shift
Microsoft's AI Governance Strategies
Testing Multi-Agentic Risks and Patterns
Agentic AI: Future Workforce Skills
Observability in Multi-Agentic Systems
Three Risk Categories in AI Implementation

Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs

Keywords:
Agentic AI, multi agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
AI can do more than it's ever done… but there's a lot of unfounded hype, especially when it comes to user research. When should you delegate tasks to AI? And when should you insist on keeping the human in the loop? In this episode, Therese Fessenden sits down with Alexander Knoll, co-founder of Condens, to discuss the strengths and limitations of AI tools for research, and the evolving role of the user researcher.

About Alexander & Condens: LinkedIn | Condens.io
Research Repository Guide: https://condens.io/guides/research-repository-guide/
On-Demand Recording of Condens Event: Making an Impact with User Research: How to Drive Change and Get Noticed
Alex's Article with NN/g: Common-Sense AI Integration: Lessons from the Cofounder of Condens

Other Related NN/g Articles & Courses:
Free Articles about AI & UX
Course: UX Basic Training
Course: Accelerating Research with AI
Course: AI for Design Workflows
Course: Designing AI Experiences
To coincide with International Human Resources Day (May 20th), this special compilation episode of AI and the Future of Work explores the promises and pitfalls of AI in hiring. HR leaders are under pressure to innovate—but how can we automate hiring ethically, avoid bias, and stay compliant with evolving laws and expectations? In this episode, we revisit key moments from past interviews with four top voices shaping the future of ethical workforce automation:
In this episode of The Beat, host Sandy Vance sits down with Dr. Heather Bassett, Chief Medical Officer at Xsolis and creator of the proprietary Care Level Score. Together, they explore the future of AI in healthcare and how real-world AI applications are already driving improved operational efficiency, reducing clinician burnout, and enhancing payer-provider collaboration. Dr. Bassett also shares insights from her recent involvement with CHAI.org, emphasizing why healthcare leaders must take initiative in developing responsible AI—without waiting for government mandates. Tune in to hear how Xsolis is helping health systems move from spreadsheets to smart automation, making data more actionable, and building a more transparent, interoperable ecosystem.

In this episode, they talk about:
- How Xsolis is working toward creating a frictionless healthcare system
- How Xsolis reduces manual tasks, decreasing clinician burnout, and boosting productivity
- Xsolis' use of data aggregation to minimize redundancy in the healthcare industry
- Moving healthcare teams off spreadsheets and into AI-driven solutions
- How client collaboration helps maximize the value Xsolis delivers
- CMS recognition of the need to eliminate unnecessary steps to accelerate patient care
- The role of interoperability in standardizing data exchange and enhancing context
- Why transparency is critical when vendors integrate artificial intelligence
- Evaluating whether vendors have the people and processes to support AI change management

A Little About Heather:
Dr. Heather Bassett is the Chief Medical Officer at Xsolis, an AI-driven health technology company transforming healthcare through a human-centered approach. With over 20 years of experience in clinical care and health IT, she leads Xsolis' medical and data science teams and co-developed the company's signature innovation—the Care Level Score, which blends clinical expertise with AI and machine learning to assess patient status in real time.

A board-certified internist and former hospitalist, Dr. Bassett oversees Xsolis' award-winning physician advisor program, denials management, and AI model development. She's a frequent speaker at national healthcare conferences, including ACMA and HFMA, and has been featured in Becker's, MedCity News, and Medical Economics. Recognized as CMO of the Year by the Nashville Business Journal and named one of Becker's Women in Health IT to Know (2023, 2024), Dr. Bassett is also a member of CHAI.org, advocating for responsible AI in healthcare.
In this episode of Numbers and Narratives, Sean and Ibby dive deep into the world of responsible AI with guest Sarah Payne, AI Strategy and Program Lead at Coinbase. Sarah shares her expertise on implementing AI across workflows while prioritizing ethics and user trust. The conversation explores the challenges of developing AI systems that are not just efficient, but also ethically sound and safe for users. Sarah discusses the importance of having humans in the loop during AI development, gradually reducing human involvement as systems are validated over time. The hosts and guest also delve into the complexities of designing guardrails for AI, especially when dealing with non-declarative systems like large language models. Sarah provides valuable insights on using multiple models to cross-check responses and flag potential issues, as well as leveraging real customer interactions to test and improve AI workflows. Tune in to gain a deeper understanding of responsible AI practices and the challenges facing companies as they navigate this rapidly evolving landscape.
What makes AI trustworthy, ethical, and compliant in business? In this episode, we explore how Chief AI Officers lead governance efforts to align innovation with regulation. Learn how the CAIO bridges strategy, risk, and ethics to ensure responsible AI use across the enterprise. Ideal for executives, managers, and consultants navigating AI transformation.
Is your AI helping—or quietly hurting—your business? In this episode, we uncover how hidden biases in large language models can quietly erode trust, derail decision-making, and expose companies to legal and reputational risk. You'll learn actionable strategies to detect, mitigate, and govern AI bias across high-stakes domains like hiring, finance, and healthcare. Perfect for corporate leaders and consultants navigating AI transformation, this episode offers practical insights for building ethical, accountable, and high-performing AI systems.
Agentic AI is equally as daunting as it is dynamic. So… how do you not screw it up? After all, the more robust and complex agentic AI becomes, the more room there is for error. Luckily, we've got Dr. Maryam Ashoori to guide our agentic ways. Maryam is the Senior Director of Product Management of watsonx at IBM. She joined us at IBM Think 2025 to break down agentic AI done right.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Agentic AI Benefits for Enterprises
watsonx's New Features & Announcements
AI-Powered Enterprise Solutions at IBM
Responsible Implementation of Agentic AI
LLMs in Enterprise Cost Optimization
Deployment and Scalability Enhancements
AI's Impact on Developer Productivity
Problem-Solving with Agentic AI

Timestamps:
00:00 AI Agents: A Business Imperative
06:14 "Optimizing Enterprise Agent Strategy"
09:15 Enterprise Leaders' AI Mindset Shift
09:58 Focus on Problem-Solving with Technology
13:34 "Boost Business with LLMs"
16:48 "Understanding and Managing AI Risks"

Keywords:
Agentic AI, AI agents, Agent lifecycle, LLMs taking actions, watsonx.ai, Product management, IBM Think conference, Business leaders, Enterprise productivity, watsonx platform, Custom AI solutions, Environmental Intelligence Suite, Granite Code models, AI-powered code assistant, Customer challenges, Responsible AI implementation, Transparency and traceability, Observability, Optimization, Larger compute, Cost performance optimization, Chain of thought reasoning, Inference time scaling, Deployment service, Scalability of enterprise, Access control, Security requirements, Non-technical users, AI-assisted coding, Developer time-saving, Function calling, Tool calling, Enterprise data integration, Solving enterprise problems, Responsible implementation, Human in the loop, Automation, IBM savings, Risk assessment, Empowering workforce.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
When AI goes wrong, who takes the blame? In this episode, we unpack the high-stakes risks of ungoverned AI and reveal why clear accountability is vital for business leaders. Discover practical steps to safeguard your organisation, align AI with ethical standards, and turn governance into a strategic advantage. Perfect for executives, consultants, and transformation leaders navigating AI's complex landscape.
On this edition of Ctrl Alt Deceit: Democracy in Danger, we are live at the Royal United Services Institute. Nina Dos Santos and Owen Bennett Jones are joined by a world-class panel to discuss the dangers posed by the waves of dark money threatening to overwhelm our democratic institutions.

Panelists:
--Tom Keatinge, Director, Centre for Finance and Security, RUSI
--Darren Hughes, Chief Executive, Electoral Reform Society
--Gina Neff, Executive Director, Minderoo Centre for Technology & Democracy at the University of Cambridge, and Professor of Responsible AI, Queen Mary University London

Producer: Pearse Lynch
Executive Producer: Lucinda Knight

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/politics-and-polemics
In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure.

Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact.

Transcript
Responsible AI: Empowering Innovation with Integrity
Putting Responsible AI into Action (video masterclass)
The endless excitement around Agentic AI might seem to eclipse the traditional blocking and tackling of data management, but don't be fooled. The fundamentals of working with data are now more important than ever. If anything, the lure of AI puts added pressure on teams to button down their data pipelines and move closer to optimal data orchestration, whether for data warehousing, RAG models, or training the next generation of deep learning modules. Register for this episode of InsideAnalysis to learn best practices for getting your data house in order! Host @eric_kavanagh will explain why Responsible AI starts and ends with data quality. He'll be joined by Ariel Pohoryles and Mani Gill of Boomi, who will demonstrate why optimal data flows will be crucial for success with AI.

Attendees will learn:
- the power of data orchestration for optimizing AI
- why a platform approach to data management is crucial
- the importance of feeding AI Agents with trusted, real-time data
- how organizations can overcome data inertia to catch the AI train
Minister Jack Chambers is launching 'Guidelines for the Responsible Use of Artificial Intelligence in the Public Service'. Artificial Intelligence is changing how we live, work, and engage with the world around us. Governments worldwide face the challenge of meeting the digital expectations of their end-users while keeping pace with advancements in technology. These Guidelines complement and inform strategies regarding the adoption of innovative technology and ways of working already underway in the public service, and seek to set a high standard for public service transformation and innovation, while prioritising public trust and people's rights.

The Guidelines have been developed to actively empower public servants to use AI in the delivery of services. By firmly placing the human in the process, these guidelines aim to enhance public trust in how Government uses AI. A range of resources designed to support the adoption of AI have been developed, including clear information on Government's Principles for Responsible AI, a Decision Framework for evaluating the potential use of AI, a Responsible AI Canvas Tool to be used at planning stage, and the AI Lifecycle Guidance tool. Other government supports available to public service organisations also include learning and development materials and courses for public servants at no cost. In this regard, and in addition to its existing offering on AI, the Institute of Public Administration will provide a tutorial and in-person training dedicated to the AI Guidelines to further assist participants in applying the guidelines in their own workplaces.

The guidelines contain examples of how AI is already being used across public services, including:
- St. Vincent's University Hospital exploring the potential for AI to assist with performing heart ultrasound scans, in order to help reduce waiting times for patients.
- The Revenue Commissioners using Large Language Models to route taxpayer queries more efficiently, ensuring faster and more accurate responses.
- The Department of Agriculture, Food and the Marine developing an AI-supported solution to detect errors in grant applications and reduce processing times for applications.

Minister Jack Chambers said: "AI offers immense possibilities to improve the provision of public services. These guidelines support public service bodies in undertaking responsible innovation in a way that is practical, helpful and easy to follow.

"In keeping with Government's AI strategy, the guidance as well as the learning and development supports being offered by the Institute of Public Administration, will help public servants to pursue those opportunities in a way that is responsible.

"AI is already transforming our world and it is crucial that we embrace that change and adapt quickly in order to deliver better policy and better public services for the people of Ireland."

Minister of State for Public Procurement, Digitalisation and eGovernment, Emer Higgins said: "AI holds the potential to revolutionise how we deliver services, make decisions, and respond to the needs of our people. These guidelines will support thoughtful integration of AI into our public systems, enhance efficiency, and reduce administrative burdens and financial cost. Importantly, this will be done with strong ethical and human oversight, ensuring fairness, transparency, accountability, and the protection of rights and personal data at every step."
Minister of State for Trade Promotion, Artificial Intelligence and Digital Transformation, Niamh Smyth said: "Government is committed to leveraging the potential of AI for unlocking productivity, addressing societal challenges, and delivering enhanced services. The guidelines launched today are part of a whole of government approach to putting in place the necessary enablers to underpin responsible and impactful AI adoption across the public service. They are an important step in meeting government's objective of better outcomes through AI adoption."
In this episode of Risk Management Brick by Brick, The Power of AI in Risk - Episode 7, host Jason Reichl sits down with Rohan Sen, Principal in Data Risk and Privacy Practice at PwC, to explore the critical intersection of AI innovation and risk management. They dive into how organizations can implement responsible AI practices while maintaining technological progress.
With AI tools becoming more common across HR and people functions, HR leaders across the globe are asking the same question: how do we use AI without compromising on empathy, ethics, and culture? So, in this special bonus episode of the Digital HR Leaders podcast, host David Green welcomes Kevin Heinzelman, SVP of Product at Workhuman, to discuss this very critical topic. David and Kevin share a core belief: that technology should support people, not replace them, and in this conversation, they explore what that means in practice.

Tune in as they discuss:
- Why now is a critical moment for HR to lead with a human-first mindset
- How HR can retain control and oversight over AI-driven processes
- The unique value of human intelligence, and how it complements AI
- How recognition can support skills-based transformation and company culture during times of radical transformation
- What ethical, responsible AI looks like in day-to-day HR practice
- How to avoid common pitfalls like bias and data misuse
- Practical ways to integrate AI without losing sight of culture and care

Whether you're early in your AI journey or looking to scale responsibly, this episode, sponsored by Workhuman, offers clear, grounded insight to help HR lead the way - with purpose and with people in mind.

Workhuman is on a mission to help organisations build more human-centred workplaces through the power of recognition, connection, and Human Intelligence. By combining AI with the rich data from their #1 rated employee recognition platform, Workhuman delivers the insights HR leaders need to drive engagement, culture, and meaningful change at scale. To learn more, visit Workhuman.com and discover how Human Intelligence can help your organisation lead with purpose.

Hosted on Acast. See acast.com/privacy for more information.
Professor Werbach interviews David Weinberger, author of several books and a long-time deep thinker on internet trends, about the broader implications of AI on how we understand and interact with the world. They examine the idea that throughout history, dominant technologies—like the printing press, the clock, or the computer—have subtly but profoundly shaped our concepts of knowledge, intelligence, and identity. Weinberger argues that AI, and especially machine learning, represents a new kind of paradigm shift: unlike traditional computing, which requires humans to explicitly encode knowledge in rules and categories, AI systems extract meaning and make predictions from vast numbers of data points without needing to understand or generalize in human terms. He describes how these systems uncover patterns beyond human comprehension—such as identifying heart disease risk from retinal scans—by finding correlations invisible to human experts. Their discussion also grapples with the disquieting implications of this shift, including the erosion of explainability, the difficulty of ensuring fairness when outcomes emerge from opaque models, and the way AI systems reflect and reinforce cultural biases embedded in the data they ingest. The episode closes with a reflection on the tension between decentralization—a value long championed in the internet age—and the current consolidation of AI power in the hands of a few large firms, as well as Weinberger's controversial take on copyright and data access in training large models.

David Weinberger is a pioneering thought-leader about technology's effect on our lives, our businesses, and ideas. He has written several best-selling, award-winning books explaining how AI and the Internet impact how we think the world works, and the implications for business and society. In addition to writing for many leading publications, he has been a writer-in-residence, twice, at Google AI groups, Editor of the Strong Ideas book series for MIT Press, a Fellow at the Harvard Berkman Klein Center for Internet and Society, contributor of dozens of commentaries on NPR's All Things Considered, a strategic marketing VP and consultant, and for six years a Philosophy professor.

Transcript
Everyday Chaos
Our Machines Now Have Knowledge We'll Never Understand (Wired)
How Machine Learning Pushes Us to Define Fairness (Harvard Business Review)
On today's episode, we're joined by Shub Agarwal, author of Successful AI Product Creation: A 9-Step Framework, available from Wiley, and a professor at the University of Southern California teaching AI and Generative AI product management to graduate students. He is also Senior Vice President of Product Management for AI and Generative AI at U.S. Bank. Shub joins Emerj's Managing Editor Matthew DeMello on the show today to offer his perspective on what responsible AI adoption truly looks like in a regulated environment - and why method matters more than models. With over 15 years of experience bringing enterprise-grade AI products to life, he explains why “AI is the new UX” - and what that means for the future of digital interaction in banking and beyond. He also dives into the nuances of responsible AI adoption - not as a buzzword but as a framework rooted in decades of data governance and enterprise rigor.

The opinions that Shub expresses in today's show are his own and do not reflect those of U.S. Bank, the University of Southern California, or their respective leadership.

This episode is sponsored by Searce. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
In this episode of the podcast, we are joined again by Dr Jamie Smith, Executive Chairman at C-Learning and author of the new book The Responsible AI Revolution. Jamie joins us to discuss the intersection of AI and education, emphasising the need for a responsible approach to AI implementation. Jamie introduces his book, which addresses the potential consequences of AI in education and the importance of asking deeper questions about its role. The conversation explores ethical considerations, the need for upskilling, and the redefinition of roles in the workforce as AI continues to evolve. In this conversation, we explore the transformative impact of AI on productivity, leadership, and organisational culture. We also discuss the necessity for leaders to embrace discomfort and innovation, the importance of a supportive culture for AI adoption, and the potential for a collective approach to AI governance. The dialogue also touches on the need for a reimagined educational framework that prioritises human well-being over standardised assessments, as well as the importance of living in the present and making meaningful contributions to society.

Chapters:
00:00 Introduction and Context of AI in Education
06:04 The Responsible AI Revolution
12:09 Ethical Considerations and Unintended Consequences
18:01 Upskilling and Redefining Roles in the Age of AI
27:26 Embracing AI: A Paradigm Shift
30:08 Positive Disruption and Innovation
32:03 Leadership in the Age of AI
36:03 The Role of Culture in AI Adoption
39:44 The Future of AI and Our Collective Responsibility
46:51 Rethinking Education for the AI Era

Grab a copy of The Responsible AI Revolution.
Thanks so much for joining us again for another episode - we appreciate you.
Ben & Steve x
Championing those who are making the future of education a reality.
Follow us on X
Follow us on LinkedIn
Check out all about Edufuturists
Want to sponsor future episodes or get involved with the Edufuturists work? Get in touch
Get your tickets for Edufuturists Uprising 2025
In this episode, we sit down with Janet Xinyi Guo, a leading expert in data management and digital transformation at Lloyds. With over 7 years of experience driving one of the UK's largest digital transformation programs, Janet brings deep insights into the intersection of AI, data ethics, and responsible data usage.

Content:
00:00 – Introduction
03:15 – What is Data Ethics?
06:40 – The Role of AI
10:20 – Responsible AI
14:05 – Data Centralisation
17:30 – Data Storage
21:10 – Enhancing Customer Experience
24:45 – Surfacing the Right Data
28:20 – Navigating Regulations like GDPR
This panel discussion considers how ethical decisions will be influenced in the future by the many applications of Artificial Intelligence. An ethicist and philosopher, an engineer who designs intelligent robots, and a computer scientist whose goal is to make "responsible AI" synonymous with "AI" will each present a view of future AI ethics and then discuss where their views diverge. While each participant is a specialist conducting research into AI ethics, the discussion brings together scientific, technical, and humanistic issues under the broad category of responsibility.

Panel Members:
Ludovic Righetti, Electrical and Computer Engineer; Director of Machines in Motion Laboratory; Autonomous Machines in Motion
Jeff Sebo, Ethicist and Philosopher; Director of Center for Mind, Ethics and Policy; AI Moral Well Being
Julia Stoyanovich, Computer Scientist; Director of Center for Responsible AI; AI Governance

Moderated by Harold Sjursen, Professor Emeritus, NYU Tandon School of Engineering
Today's podcast is a little more niche than usual, which oddly ends up being a message that I think all of us would benefit from hearing in one way or another. AI use is becoming more and more common in our lives, and it affects our brains in ways that we need to be thoughtful about. I describe that process here and send a message to younger people about the implications our use of AI has in our lives and personal development. Thanks for listening. As always, Much Love ❤️ and please take care.
We discussed a few things including:
1. Their career journeys
2. History of NFHA
3. Michael's impact on the organization; AI in housing/financial services
4. April 28-30 Responsible AI Symposium: https://events.nationalfairhousing.org/2025AISymposium
5. Trends, challenges and opportunities re fair housing and technology

Lisa Rice is the President and CEO of the National Fair Housing Alliance (NFHA), the nation's only national civil rights agency solely dedicated to eliminating all forms of housing discrimination and ensuring equitable housing opportunities for all people and communities. Lisa has led her team in using civil rights principles to bring fairness and equity into the housing, lending, and technology sectors. She is a member of the Leadership Conference on Civil and Human Rights Board of Directors, Center for Responsible Lending Board of Directors, FinRegLab Board of Directors, JPMorgan Chase Consumer Advisory Council, Mortgage Bankers Association Consumer Advisory Council, Freddie Mac Affordable Housing Advisory Council, Fannie Mae Affordable Housing Advisory Council, Quicken Loans Advisory Forum, Bipartisan Policy Center's Housing Advisory Council, and Berkeley's The Terner Center Advisory Council. She has received numerous awards including the National Housing Conference's Housing Visionary Award and was selected as one of TIME Magazine's 2024 ‘Closers.'

----

Dr. Michael Akinwumi translates values and principles to math and code. He ensures critical and emerging technologies like AI and blockchain enhance innovation, security, trust, and access in housing and financial systems, preventing historical injustices. As a senior leader, he collaborates with policymakers and industry to strengthen protections and advance innovation. A Rita Allen Civic Science Fellow at Rutgers, he developed an AI policy tool for state-level impact and co-developed an AI Readiness (AIR) Index to help state governments assess their AI maturity. Michael also advises AI companies on developing, deploying and adopting responsible innovations, driven by his belief that a life lived for others is most meaningful, aiming for lasting societal change through technology.

#podcast #afewthingspodcast
DJ Patil was the first-ever US Chief Data Scientist and has led some of the biggest data initiatives in government and business. He has also been at the forefront of leveraging AI to solve the thorniest problems companies face, as well as “stupid, boring problems in the back office.” He joins the WorkLab podcast to discuss the potential of AI to change business, how leaders can drive technological transformation, and why it's vital for data scientists to never lose sight of the human element.

WorkLab
Subscribe to the WorkLab newsletter
Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn't negate accountability; AI's negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.

Robert Mahari is a JD-PhD researcher at MIT Media Lab and the Harvard Law School where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs.

A transcript of this episode is here.

Additional Resources:
The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
Robert Mahari (website): https://robertmahari.com/
Noelle Russell on harnessing the power of AI in a responsible and ethical way

Noelle Russell compares AI to a baby tiger: it's super cute when it's small, but it can quickly grow into something huge and dangerous. As the CEO and founder of the AI Leadership Institute and as an early developer on Amazon Alexa, Noelle has a deep understanding of scaling and selling AI. This week Noelle joins Tammy to discuss why she's so passionate about teaching individuals and organizations about AI and how companies can leverage AI in the right way. It's time to learn how to tame the tiger!

Please note that the views expressed may not necessarily be those of NTT DATA.

Links:
Noelle Russell
Scaling Responsible AI
AI Leadership Institute
Learn more about Launch by NTT DATA

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob PulverMy guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices. Bob was my guest about a year ago and in this episode he drops back in to discuss what has changed in the faced paced world of AI across three pillars of responsible AI usage. * Human-Centric AI * AI Adoption and Readiness * AI Regulation and GovernanceThe past year's progress explained through three pillars that are shaping ethical AI:These are the themes that we explore in our conversation and our thoughts on what has changed/evolved in the past year.1. Human-Centric AIChange from Last Year:* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.Reasons for Change:* Increasing comfort level with AI and experience with the benefits that it brings to our work* Continued exploration and development of low stakes, low friction use cases* AI continues to be seen as a partner and magnifier of human capabilitiesWhat to Expect in the Next Year:* Increased experience with human machine partnerships* Increased opportunities to build superpowers* Increased adoption of human centric tools by employers2. AI Adoption and ReadinessChange from Last Year:* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.* Significant growth in AI educational resources and adoption within teams, rather than just individuals.Reasons for Change:* Improved understanding of AI's benefits and limitations, reducing fears and resistance.* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.What to Expect in the Next Year:* More systematic frameworks for AI adoption across entire organizations.* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.3. 
AI Regulation and GovernanceChange from Last Year:* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).* Momentum to hold vendors of AI increasingly accountable for ethical AI use.Reasons for Change:* Growing awareness of risks associated with unchecked AI deployment.* Increased push to stay on the right side of AI via legislative activity at state and global levels addressing transparency, accountability, and fairness.What to Expect in the Next Year:* Implementation of stricter AI audits and compliance standards.* Clearer responsibilities for vendors and organizations regarding ethical AI practices.* Finally some concrete standards that will require fundamental changes in oversight and create messy situations.Practical Takeaways:What should I/we be doing to move the ball fwd and realize AI's full potential while limiting collateral damage?Prioritize Human-Centric AI Design* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.Build Robust AI Literacy and Education Programs* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.* Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.Strengthen AI Governance and Oversight* Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.* Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.Monitor AI Effectiveness and Impact* Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.* Evaluate Human Impact Regularly: Regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.Email Bob- bob@cognitivepath.io Listen to Bob's awesome podcast - Elevate you AIQ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com
Lorri Rowlandson, head of strategy and innovation at BGIS, outlines a three-layer approach to AI integration: individual-level desktop AI for job reengineering, operational-level AI for enhancing internal efficiencies, and client value-driven AI for service improvement. She emphasizes the importance of practical innovation, responsible AI use, employee engagement and continuous learning. This episode is sponsored by Envoy.

Connect with Us:
LinkedIn: https://www.linkedin.com/company/ifma
Facebook: https://www.facebook.com/InternationalFacilityManagementAssociation/
Twitter: https://twitter.com/IFMA
Instagram: https://www.instagram.com/ifma_hq/
YouTube: https://youtube.com/ifmaglobal

Visit us at https://ifma.org
"So you want trusted data, but you want it now? Building this trust really starts with transparency and collaboration. It's not just technology. It's about creating a single governed view of data that is consistent no matter who accesses it, " says Errol Rodericks, Director of Product Marketing at Denodo.In this episode of the 'Don't Panic, It's Just Data' podcast, Shawn Rogers, CEO at BARC US, speaks with Errol Rodericks from Denodo. They explore the crucial link between trusted data and successful AI initiatives. They discuss key factors such as data orchestration, governance, and cost management within complex cloud environments. We've all heard the horror stories – AI projects that fail spectacularly, delivering biased or inaccurate results. But what's the root cause of these failures? More often than not, it's a lack of focus on the data itself. Rodericks emphasises that "AI is only as good as the data it's trained on." This episode explores how organisations can avoid the "garbage in, garbage out" scenario by prioritising data quality, lineage, and responsible AI practices. Learn how to avoid AI failures and discover strategies for building an AI-ready data foundation that ensures trusted, reliable outcomes. Key topics include overcoming data bias, ETL processes, and improving data sharing practices.TakeawaysBad data leads to bad AI outputs.Trust in data is essential for effective AI.Organisations must prioritise data quality and orchestration.Transparency and collaboration are key to building trust in data.Compliance is a responsibility for the entire organisation, not just IT.Agility in accessing data is crucial for AI success.Chapters00:00 The Importance of Data Quality in AI02:57 Building Trust in Data Ecosystems06:11 Navigating Complex Data Landscapes09:11 Top-Down Pressure for AI Strategy11:49 Responsible AI and Data Governance15:08 Challenges in Personalisation and Compliance17:47 The Role of Speed in Data Utilisation20:47 Advice for CFOs on AI InvestmentsAbout DenodoDenodo is a leader in data management. The award-winning Denodo Platform is the leading logical data management platform for transforming data into trustworthy insights and outcomes for all data-related initiatives across the enterprise, including AI and self-service. Denodo's customers in all industries all over the world have delivered trusted AI-ready and business-ready data in a third of the time and with 10x better performance than with lakehouses and other mainstream data platforms alone.
Produced entirely using AI | Powered by Google NotebookLM

In this special edition of the AI Takeover Series—part of the Blockchain DXB main podcast—we dive deep into the Artificial Intelligence Index Report 2025, one of the most comprehensive and data-rich annual reports on global AI trends, curated by Stanford University's Human-Centered AI Institute.
How are enterprises leveraging AI, analytics, data platforms, and automation to drive real business impact? In this exclusive conversation, Chetan Manjarekar, SVP and Global Head of Digital with Eviden, an Atos Group Company, joins HFS Research to break down the latest trends, challenges, and innovations shaping the generative enterprise. From customer experience to data readiness and co-innovation, discover how Eviden leads the charge in digital transformation.

The key points discussed include:
- AI & automation are reshaping enterprise transformation: GenAI and automation are now central to digital transformation, but success depends on aligning technology investments with clear business outcomes.
- Customer, employee, and partner experience are interlinked priorities: 75% of enterprises prioritize customer experience, but organizations are now equally focused on improving employee and partner experiences to drive holistic transformation.
- Data readiness is the biggest AI adoption challenge: 74% of organizations struggle with fragmented, inconsistent, and unstructured data, limiting AI's full potential. Eviden tackles this with its Data Readiness Assessment Framework to improve governance and accessibility.
- AI investments are surpassing traditional analytics: A major shift is happening; AI spending is set to grow from 19% to 31% in two years, overtaking analytics. AI is increasingly embedded in enterprise applications, automation, and smart platforms.
- Co-innovation & responsible AI will define industry leaders: Eviden emphasizes co-innovation with customers and partners to develop industry-specific AI solutions. Responsible AI, data governance, and security are critical for sustainable success.

Watch now to explore key insights and what's next for AI-powered business evolution!

Access the full report on AADA Quadfecta Services for the Generative Enterprise 2024: https://www.hfsresearch.com/research/hfs-horizons-aada-quadfecta-services-for-the-generative-enterprise-2024/
In this episode of the VUX World podcast, we chat all about the innovative AI journey of Citizens Advice with Stuart Pearson. We discover how a nonprofit organisation is revolutionising customer support through Caddy, an intelligent AI assistant that reduces average handle time by 50% while maintaining a rigorous ethical approach. Stuart shares the meticulous process of developing an internal AI tool that supports contact centre agents, highlighting the importance of responsible AI implementation. From initial challenges with chatbots to creating a sophisticated AI solution, this podcast reveals how organisations can leverage artificial intelligence to enhance productivity, improve service delivery, and ultimately help more people.

Subscribe to VUX World.
Subscribe to The AI Ultimatum Substack.
Hosted on Acast. See acast.com/privacy for more information.
To mark World Health Day, we're revisiting powerful conversations with innovators using AI to improve healthcare access, reduce costs, and return empathy to the patient experience. In this special compilation episode, you'll hear from five leaders at the intersection of healthcare and emerging technologies—sharing how AI is already reshaping how we deliver care and what's next for clinical innovation.
What does it really mean to build AI responsibly, at Google scale?

In this episode, we sit down with Jen Gennai, the person Google turned to when it needed to build ethical AI from the ground up. Jen founded Google's Responsible Innovation team and has spent her career at the intersection of trust, safety, and emerging tech. From advising world leaders on AI governance to navigating internal pushback, Jen opens up about what it actually takes to embed ethics into one of the fastest-moving industries on the planet.

Tune in to this episode as we explore:
The importance of early ethical considerations
Trust as a core pillar of technology
Navigating AI's impact on fairness and discrimination
The future of AI-human relationships
Building AI literacy in organizations
The intersection of regulation and innovation

Links mentioned:
Connect with Jen Gennai on LinkedIn
T3 Website
‘You Can Culture: Transformative Leadership Habits for a Thriving Workplace, Positive Impact and Lasting Success' is now available here.
Discusses the importance of fostering AI literacy in research and higher education. Our guest today is Sarah Florini, who is an Associate Director and Associate Professor in the Lincoln Center for Applied Ethics at Arizona State University. Sarah's work focuses on technology, social media, technology ethics, digital ethnography, and Black digital culture. Among other things, Sarah is dedicated to fostering critical AI literacy and ethical engagement with AI/ML technologies. She founded the AI and Ethics Workgroup to serve as a catalyst for critical conversations about the role of AI models in higher education.

Additional resources:
Distributed AI Research Institute: https://www.dair-institute.org/
Mystery AI Hype Theater 3000: https://www.dair-institute.org/maiht3k/
Tech Won't Save Us: https://techwontsave.us/
CITI Program's Essentials of Responsible AI course: https://about.citiprogram.org/course/essentials-of-responsible-ai/
"I went from door-to-door sales to closing million-dollar tech deals — but the real secret? Asking boldly. Watch [12:33] to hear how Jarrett Albritton turned Clubhouse conversations into 80,000 followers and landed 40+ recruiters on stage. Spoiler: His AI platform is changing how careers are built!"-Why are tech sales misunderstood (and why are they not what you think)?-The biggest misconception about tech sales that's costing you $$$.-3 secrets to breaking into tech sales and dominating AI-driven careers.⏰ Timestamps: 0:00 – "Why Tech Sales Isn't Just for Geeks (Misconceptions)" 12:33 – "How Jarrett Albritton Leveraged Clubhouse to Empower 80K People" 25:47 – "The AI Tool Helping You Land a Job Faster (Why Right C is Different)" 38:55 – "Mastering Interviews with AI: The Future of Job Search" 45:00 – "3 P's to Succeed in Tech Sales (Persistence, Patience, Pivot)"✅ Try Job Search Genius FREE for 7 days: jobsearchgenius.ai
Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn't enough; and the hard work required to develop good AI.

Phaedra Boinodiris is IBM's Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all.

A transcript of this episode is here.

Additional Resources:
Phaedra's Website - https://phaedra.ai/
The Future World Alliance - https://futureworldalliance.org/
Building Trust Through Technology: Responsible AI in Practice // MLOps Podcast #298 with Allegra Guinan, Co-founder of Lumiera.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Allegra joins the podcast to discuss how Responsible AI (RAI) extends beyond traditional pillars like transparency and privacy. While these foundational elements are crucial, true RAI success requires deeply embedding responsible practices into organizational culture and decision-making processes. Drawing from Lumiera's comprehensive approach, Allegra shares how organizations can move from checkbox compliance to genuine RAI integration that drives innovation and sustainable AI adoption.

// Bio
Allegra is a technical leader with a background in managing data and enterprise engineering portfolios. Having built her career bridging technical teams and business stakeholders, she's seen the ins and outs of how decisions are made across organizations. She combines her understanding of data value chains, passion for responsible technology, and practical experience guiding teams through complex implementations into her role as co-founder and CTO of Lumiera.

// Related Links
Website: https://www.lumiera.ai/
Weekly newsletter: https://lumiera.beehiiv.com/

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter (@mlopscommunity): https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Allegra on LinkedIn: /allegraguinan

Timestamps:
[00:00] Allegra's preferred coffee
[00:14] Takeaways
[01:11] Responsible AI principles
[03:13] Shades of Transparency
[07:56] Effective questioning for clarity
[11:17] Managing stakeholder input effectively
[14:06] Business to Tech Translation
[19:30] Responsible AI challenges
[23:59] Successful plan vs Retroactive responsibility
[28:38] AI product robustness explained
[30:44] AI transparency vs Engagement
[34:10] Efficient interaction preferences
[37:57] Preserving human essence
[39:51] Conflict and growth in life
[46:02] Subscribe to Allegra's Weekly Newsletter!
In this episode of the Disruption Now podcast, host Rob Richardson engages in a dynamic conversation with Seema Alexander, a seasoned entrepreneur and business strategist. They delve into Seema's rich background, from her upbringing in an entrepreneurial immigrant family to her current roles as co-founder of Virgent AI and DC Startup & Tech Week. The discussion traverses the evolution of entrepreneurship, the pivotal role of emerging technologies like AI, and the significance of community collaboration in fostering innovation.

Three Key Takeaways for Disruptors:
1. Embrace Emerging Technologies: Seema emphasizes entrepreneurs' need to stay abreast of technological advancements, particularly AI. She notes that the rapid pace of AI development means businesses must adapt swiftly or risk obsolescence.
2. Focus on Strategic Positioning: Drawing from her experience with the UNIQUE Method™, Seema highlights the importance of businesses identifying their unique value propositions to stand out in competitive markets.
3. Foster Community Engagement: Through her work with DC Startup & Tech Week, Seema illustrates how building and participating in entrepreneurial communities can lead to shared knowledge, resources, and opportunities for collaboration, driving collective growth and innovation.

This episode offers valuable insights into navigating the evolving landscape of entrepreneurship and the critical role of adaptability and community in achieving sustained success.

Seema Alexander:
Co-Chair, DC Startup & Tech Week
Co-Founder & President, Virgent AI
Creator, U.N.I.Q.U.E. Method™
Business Podcast Host, From Spaghetti to Growth

Connect:
LinkedIn
Book a 15-Min Founder/CEO Meeting
Send us a text

Beth White, Founder and CEO of MeBeBot, joins us this episode to discuss lessons learned from working with AI. She also shares concerns regarding the continued widespread integration of AI in organizations as well as hopes for the future of AI.

[0:00] Introduction
Welcome, Beth!
Today's Topic: The Evolution of AI in the Workplace

[5:21] What has Beth learned over the years working with AI?
How AI has evolved from basic if-then statements
The journey toward developing conversational AI

[10:20] What are Beth's concerns regarding AI?
Generative AI consumes an unprecedented amount of energy
AI regulation remains largely unstructured, especially within organizations
The importance of grounding the AI with human checks and balances

[26:23] What are Beth's hopes for the future of AI?
Educational programs developed by a diverse set of professionals
The natural integration of AI into HR practices

[34:56] Closing
Thanks for listening!

Quick Quote
“[If we] help educate and train people on AI today, we'll have a more employable workforce for the future.”

Contact:
Beth's LinkedIn
David's LinkedIn
Podcast Manager: Karissa Harris
Email us!
Production by Affogato Media
Ryan Carrier trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective. Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets. A transcript of this episode is here. Ryan Carrier is the Executive Director of ForHumanity, a non-profit organization improving AI outcomes through increased accountability and oversight.
In this thought-provoking episode of On the Brink with Andi Simon, we welcome Dan Nestle, a strategic communications expert and AI enthusiast, to explore the transformative role of artificial intelligence in marketing, branding, and storytelling. With over 20 years of corporate and agency experience, Dan has been at the forefront of digital and content innovation, helping businesses adapt to the rapidly evolving communications landscape. As AI tools become more sophisticated, many professionals are left wondering: Will AI replace human creativity? Can AI-generated content be authentic? How can businesses use AI without losing their unique voice? Dan tackles these pressing questions, offering real-world insights into how AI can serve as a powerful assistant—rather than a replacement—for communicators, marketers, and business leaders. During our conversation, Dan shares his fascinating career trajectory, from teaching English in Japan to leading global corporate communications teams. Now, as the founder of Inquisitive Communications, he helps organizations navigate AI's impact on content strategy, storytelling, and audience engagement. He also provides a step-by-step breakdown of the AI tools he uses daily to streamline content creation, repurpose valuable insights, and enhance branding efforts without sacrificing authenticity. We'll discuss the importance of curiosity in embracing new technologies, the fear and hesitation many professionals feel around AI, and why adopting AI-driven workflows can save time, increase efficiency, and improve creativity. Whether you're a seasoned marketer, an entrepreneur, or just starting to explore AI's potential, this episode is packed with actionable strategies to help you integrate AI into your communications and branding efforts. Get ready to rethink how you approach content in the age of AI, and learn why being human is still the most valuable differentiator in a tech-driven world. If you prefer to watch the video of our podcast, click here. About Dan Nestle
Coinciding with International Women's Day this week, this special episode of AI and the Future of Work highlights key conversations with women leading the way in AI ethics, governance, and accountability.In this curated compilation, we feature four remarkable experts working to create a more equitable, trustworthy, and responsible AI future:
Olivia Gambelin values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters. A transcript of this episode is here. Olivia Gambelin is a renowned AI Ethicist and the Founder of Ethical Intelligence, the world's largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI. Additional Resources: Responsible AI: Implement an Ethical Approach in Your Organization – Book; Plato & a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes – Book; The Values Canvas – RAI Design Tool; Women Shaping the Future of Responsible AI – Organization; In Pursuit of Good Tech | Subscribe – Newsletter
In this episode of the Disruption Now Podcast, host Rob Richardson sits down with Rachel Desrochers, the founder of Gratitude Collective and a passionate advocate for gratitude, entrepreneurship, and community building. Rachel shares her journey of turning a simple idea into a thriving business while fostering a culture of kindness and connection. She discusses the power of gratitude in both personal and professional life, the challenges of entrepreneurship, and how she supports other business owners through her work with the Incubator Kitchen Collective. Tune in for an inspiring conversation on purpose-driven business, resilience, and the impact of gratitude on success. Top 3 Things You'll Learn from This Episode: 1. The Power of Gratitude in Business & Life – Practicing gratitude can fuel success, strengthen leadership, and build meaningful connections. 2. Entrepreneurship with Purpose – Rachel Desrochers shares insights on growing a values-driven business while creating opportunities for others. 3. Building a Community-Driven Brand – Lessons from Rachel's journey in launching Gratitude Collective and supporting entrepreneurs through the Incubator Kitchen Collective. Rachel's Social Media Pages: LinkedIn: https://www.linkedin.com/in/rachel-desrochers-b2356760/ Websites: https://www.thegratitudecollective.org/ (Company) | https://www.powertopursue.org/ (Company) | https://www.incubatorkitchencollective.org/ (Company) Disruption Now: building a fair share for the Culture and Media. Join us and disrupt. Apply to get on the podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com LinkedIn: https://www.linkedin.com/in/robrichardsonjr/ Instagram: https://www.instagram.com/robforohio/ Website: https://podcast.disruptionnow.com/
This episode discusses data privacy compliance and environmental, social, and governance (ESG) reporting. Our guest is Katrina Destrée, a globally experienced privacy and sustainability professional whose work focuses on privacy programs, ESG reporting, awareness and training, and strategic communications. Additional resources: International Association of Privacy Professionals (IAPP): https://iapp.org/ ISACA: https://www.isaca.org/ Global Enabling Sustainability Initiative (GeSI): https://www.gesi.org/ Agréa Privacy & ESG: https://agreaprivacyesg.com/ CITI Program's “GDPR for Research and Higher Ed” course: https://about.citiprogram.org/course/gdpr-for-research-and-higher-ed/ CITI Program's “Big Data and Data Science Research Ethics” course: https://about.citiprogram.org/course/big-data-and-data-science-research-ethics/ CITI Program's “Essentials of Responsible AI” course: https://about.citiprogram.org/course/essentials-of-responsible-ai/
In this episode of AI, Government, and the Future, host Max Romanik is joined by Erica Werneman Root, Founder of EWR Consulting, to discuss the complex interplay between AI governance, regulation, and practical implementation. Drawing from her unique background in economics and law, Erica explores how organizations can navigate AI deployment while balancing innovation with responsible governance.
In this episode of Disruption Now, Tremain Davis shares forward-thinking insights on how innovation is upending traditional business models and reshaping entire industries. Here are three things you can learn from this episode: 1. Adaptive Leadership: Davis disrupts conventional norms by staying agile, empowering his team, and constantly reevaluating strategies to drive transformation. 2. Leveraging Disruptive Technologies: Learn how embracing new technologies can drive sustainable growth and create a competitive edge. Tremain's approach as a leader integrates cutting-edge digital solutions that challenge outdated business practices. 3. Strategic Risk-Taking: Understand the value of taking calculated risks and maintaining a proactive mindset to turn challenges into opportunities. As a leader, Davis exemplifies disruption by challenging the status quo and fostering a culture of innovation that inspires others to break away from traditional molds. Davis's journey to becoming a disruptive leader in his community is rooted in his commitment to challenging outdated paradigms and championing local change. He transforms his business and drives community-wide innovation and resilience by empowering emerging entrepreneurs, mentoring future leaders, and building collaborative networks. Tremain Davis's social media pages: LinkedIn: https://www.linkedin.com/in/tremain-davis-348504a4/ Website: https://www.thinkpgc.org/ Instagram: https://www.instagram.com/iamtremain/ Apply to get on the podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com LinkedIn: https://www.linkedin.com/in/robrichardsonjr/ Instagram: https://www.instagram.com/robforohio/ Website: https://podcast.disruptionnow.com/
Dr. Allison Scott, CEO of the Kapor Foundation, joins Mike Palmer on Trending in Education to discuss the crucial intersection of technology, education, and equity. The conversation explores the persistent lack of diversity in the tech industry and the urgent need to prepare students for the AI-driven future. Dr. Scott emphasizes the importance of critical thinking, ethical considerations, and creating a more inclusive tech ecosystem that benefits everyone. This episode offers valuable insights for educators, parents, and anyone interested in the transformative power of technology and its impact on society. We reference the WEF Future of Jobs Report and the Kapor Foundation's Guide to Responsible AI. Key Takeaways: The tech industry is not representative of the population, and this lack of diversity limits innovation and economic opportunity. AI is rapidly changing the job market; the fastest-growing jobs and skills are related to AI, big data, and cybersecurity. Critical thinking and ethical considerations are essential in AI development and use, and students need to be prepared to analyze and evaluate AI technologies. Diversity in tech is crucial for creating AI solutions that benefit everyone, and a broader understanding of AI will be beneficial across various fields. Educators have a vital role to play in preparing students for the age of AI: they need to foster critical thinking, curiosity, and a passion for learning. Why You Should Listen: Dr. Allison Scott provides a compelling vision for the future of tech education, emphasizing the importance of diversity, critical thinking, and ethical considerations in the development and use of AI. This episode is a must-listen for anyone interested in the future of work, education, and technology. Subscribe to Trending in Education to stay informed about the latest trends and insights in the field. Timestamps: 00:00 Introduction and Guest Welcome 00:43 Dr. Allison Scott's Origin Story 01:07 Understanding Inequality in Education 02:55 The Kapor Foundation's Mission 03:36 The Leaky Tech Pipeline Framework 05:40 Responsible AI and Diversity 06:52 Preparing the Next Generation for AI 10:38 Critical Thinking and AI Education 11:25 Future of Work and Skills 13:56 Encouraging Innovation and Problem Solving 22:04 Philanthropy and Nonprofits in Tech 23:19 Conclusion and Takeaways