Podcasts about responsible AI

  • 552 PODCASTS
  • 939 EPISODES
  • 35m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 12, 2025 LATEST

POPULARITY

(trend chart, 2017–2024)


Best podcasts about responsible AI

Show all podcasts related to responsible AI

Latest podcast episodes about responsible AI

New Books in Politics
Democracy for Sale: Death by Dark Money

New Books in Politics

Play Episode Listen Later May 12, 2025 71:04


On this edition of Ctrl Alt Deceit: Democracy in Danger, we are live at the Royal United Services Institute. Nina Dos Santos and Owen Bennett Jones are joined by a world-class panel to discuss the dangers posed by the waves of dark money threatening to overwhelm our democratic institutions.

Panelists:
-- Tom Keatinge, Director, Centre for Finance and Security, RUSI
-- Darren Hughes, Chief Executive, Electoral Reform Society
-- Gina Neff, Executive Director, Minderoo Centre for Technology & Democracy at the University of Cambridge, and Professor of Responsible AI, Queen Mary University London

Producer: Pearse Lynch
Executive Producer: Lucinda Knight

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/politics-and-polemics

New Books in Political Science
Democracy for Sale: Death by Dark Money

New Books in Political Science

Play Episode Listen Later May 9, 2025 71:04


On this edition of Ctrl Alt Deceit: Democracy in Danger, we are live at the Royal United Services Institute. Nina Dos Santos and Owen Bennett Jones are joined by a world-class panel to discuss the dangers posed by the waves of dark money threatening to overwhelm our democratic institutions.

Panelists:
-- Tom Keatinge, Director, Centre for Finance and Security, RUSI
-- Darren Hughes, Chief Executive, Electoral Reform Society
-- Gina Neff, Executive Director, Minderoo Centre for Technology & Democracy at the University of Cambridge, and Professor of Responsible AI, Queen Mary University London

Producer: Pearse Lynch
Executive Producer: Lucinda Knight

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/political-science

The Road to Accountable AI
Kelly Trindel: AI Governance Across the Enterprise? All in a Day's Work

The Road to Accountable AI

Play Episode Listen Later May 8, 2025 36:32


In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms, such as model review boards and cross-functional risk assessments, that embed AI governance into product workflows across the company.

The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure.

Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI, one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact.

Transcript
Responsible AI: Empowering Innovation with Integrity
Putting Responsible AI into Action (video masterclass)

Risk Management: Brick by Brick
Responsible AI Isn't Optional: New Strategies for Risk Management Success with Rohan Sen

Risk Management: Brick by Brick

Play Episode Listen Later May 7, 2025 25:06


In this episode of Risk Management Brick by Brick, The Power of AI in Risk - Episode 7, host Jason Reichl sits down with Rohan Sen, Principal in Data Risk and Privacy Practice at PwC, to explore the critical intersection of AI innovation and risk management. They dive into how organizations can implement responsible AI practices while maintaining technological progress.

Digital HR Leaders with David Green
How Human Intelligence Can Guide Responsible AI in the Workplace (an Interview with Kevin Heinzelman)

Digital HR Leaders with David Green

Play Episode Listen Later May 6, 2025 52:37


With AI tools becoming more common across HR and people functions, HR leaders across the globe are asking the same question: how do we use AI without compromising on empathy, ethics, and culture? So, in this special bonus episode of the Digital HR Leaders podcast, host David Green welcomes Kevin Heinzelman, SVP of Product at Workhuman, to discuss this very critical topic.

David and Kevin share a core belief: that technology should support people, not replace them, and in this conversation, they explore what that means in practice. Tune in as they discuss:

- Why now is a critical moment for HR to lead with a human-first mindset
- How HR can retain control and oversight over AI-driven processes
- The unique value of human intelligence, and how it complements AI
- How recognition can support skills-based transformation and company culture during times of radical transformation
- What ethical, responsible AI looks like in day-to-day HR practice
- How to avoid common pitfalls like bias and data misuse
- Practical ways to integrate AI without losing sight of culture and care

Whether you're early in your AI journey or looking to scale responsibly, this episode, sponsored by Workhuman, offers clear, grounded insight to help HR lead the way - with purpose and with people in mind.

Workhuman is on a mission to help organisations build more human-centred workplaces through the power of recognition, connection, and Human Intelligence. By combining AI with the rich data from their #1 rated employee recognition platform, Workhuman delivers the insights HR leaders need to drive engagement, culture, and meaningful change at scale. To learn more, visit Workhuman.com and discover how Human Intelligence can help your organisation lead with purpose.

Hosted on Acast. See acast.com/privacy for more information.

New Books Network
Democracy for Sale: Death by Dark Money

New Books Network

Play Episode Listen Later May 2, 2025 71:04


On this edition of Ctrl Alt Deceit: Democracy in Danger, we are live at the Royal United Services Institute. Nina Dos Santos and Owen Bennett Jones are joined by a world-class panel to discuss the dangers posed by the waves of dark money threatening to overwhelm our democratic institutions.

Panelists:
-- Tom Keatinge, Director, Centre for Finance and Security, RUSI
-- Darren Hughes, Chief Executive, Electoral Reform Society
-- Gina Neff, Executive Director, Minderoo Centre for Technology & Democracy at the University of Cambridge, and Professor of Responsible AI, Queen Mary University London

Producer: Pearse Lynch
Executive Producer: Lucinda Knight

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

The Road to Accountable AI
David Weinberger: How AI Challenges Our Fundamental Ideas

The Road to Accountable AI

Play Episode Listen Later May 1, 2025 35:53


Professor Werbach interviews David Weinberger, author of several books and a long-time deep thinker on internet trends, about the broader implications of AI on how we understand and interact with the world. They examine the idea that throughout history, dominant technologies—like the printing press, the clock, or the computer—have subtly but profoundly shaped our concepts of knowledge, intelligence, and identity. Weinberger argues that AI, and especially machine learning, represents a new kind of paradigm shift: unlike traditional computing, which requires humans to explicitly encode knowledge in rules and categories, AI systems extract meaning and make predictions from vast numbers of data points without needing to understand or generalize in human terms. He describes how these systems uncover patterns beyond human comprehension—such as identifying heart disease risk from retinal scans—by finding correlations invisible to human experts. Their discussion also grapples with the disquieting implications of this shift, including the erosion of explainability, the difficulty of ensuring fairness when outcomes emerge from opaque models, and the way AI systems reflect and reinforce cultural biases embedded in the data they ingest. The episode closes with a reflection on the tension between decentralization—a value long championed in the internet age—and the current consolidation of AI power in the hands of a few large firms, as well as Weinberger's controversial take on copyright and data access in training large models. David Weinberger is a pioneering thought-leader about technology's effect on our lives, our businesses, and ideas. He has written several best-selling, award-winning books explaining how AI and the Internet impact how we think the world works, and the implications for business and society. 
In addition to writing for many leading publications, he has been a writer-in-residence, twice, at Google AI groups, Editor of the Strong Ideas book series for MIT Press, a Fellow at the Harvard Berkman Klein Center for Internet and Society, contributor of dozens of commentaries on NPR's All Things Considered, a strategic marketing VP and consultant, and for six years a Philosophy professor.

Transcript
Everyday Chaos
Our Machines Now Have Knowledge We'll Never Understand (Wired)
How Machine Learning Pushes Us to Define Fairness (Harvard Business Review)

Artificial Intelligence in Industry with Daniel Faggella
How Responsible AI is Shaping the Future of Banking and Finance - with Shub Agarwal of U.S. Bank and USC

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Apr 29, 2025 21:28


On today's episode, we're joined by Shub Agarwal, author of Successful AI Product Creation: A 9-Step Framework, available from Wiley, and a professor at the University of Southern California teaching AI and Generative AI product management to graduate students. He is also Senior Vice President of Product Management for AI and Generative AI at U.S. Bank.

Shub joins Emerj's Managing Editor Matthew DeMello on the show today to offer his perspective on what responsible AI adoption truly looks like in a regulated environment - and why method matters more than models. With over 15 years of experience bringing enterprise-grade AI products to life, he explains why "AI is the new UX" - and what that means for the future of digital interaction in banking and beyond. He also dives into the nuances of responsible AI adoption - not as a buzzword but as a framework rooted in decades of data governance and enterprise rigor.

The opinions that Shub expresses in today's show are his own and do not reflect those of U.S. Bank, the University of Southern California, or their respective leadership.

This episode is sponsored by Searce. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.

EduFuturists
Edufuturists #289 Responsible AI with Jamie Smith

EduFuturists

Play Episode Listen Later Apr 28, 2025 52:38


In this episode of the podcast, we are joined again by Dr Jamie Smith, Executive Chairman at C-Learning and author of the new book The Responsible AI Revolution. Jamie joins us to discuss the intersection of AI and education, emphasising the need for a responsible approach to AI implementation. Jamie introduces his book, which addresses the potential consequences of AI in education and the importance of asking deeper questions about its role. The conversation explores ethical considerations, the need for upskilling, and the redefinition of roles in the workforce as AI continues to evolve.

In this conversation, we explore the transformative impact of AI on productivity, leadership, and organisational culture. We also discuss the necessity for leaders to embrace discomfort and innovation, the importance of a supportive culture for AI adoption, and the potential for a collective approach to AI governance. The dialogue also touches on the need for a reimagined educational framework that prioritises human well-being over standardised assessments, as well as the importance of living in the present and making meaningful contributions to society.

Chapters:
00:00 Introduction and Context of AI in Education
06:04 The Responsible AI Revolution
12:09 Ethical Considerations and Unintended Consequences
18:01 Upskilling and Redefining Roles in the Age of AI
27:26 Embracing AI: A Paradigm Shift
30:08 Positive Disruption and Innovation
32:03 Leadership in the Age of AI
36:03 The Role of Culture in AI Adoption
39:44 The Future of AI and Our Collective Responsibility
46:51 Rethinking Education for the AI Era

Grab a copy of The Responsible AI Revolution.

Thanks so much for joining us again for another episode - we appreciate you.
Ben & Steve x
Championing those who are making the future of education a reality.

Follow us on X
Follow us on LinkedIn
Check out all about Edufuturists
Want to sponsor future episodes or get involved with the Edufuturists work? Get in touch
Get your tickets for Edufuturists Uprising 2025

Customer Experience Conversations
"managing data in the age of AI" - W / Janet Xinyi Guo (Lloyds Banking Group) #130

Customer Experience Conversations

Play Episode Listen Later Apr 25, 2025 31:21


In this episode, we sit down with Janet Xinyi Guo, a leading expert in data management and digital transformation at Lloyds. With over 7 years of experience driving one of the UK's largest digital transformation programs, Janet brings deep insights into the intersection of AI, data ethics, and responsible data usage.

Content:
00:00 – Introduction
03:15 – What is Data Ethics?
06:40 – The Role of AI
10:20 – Responsible AI
14:05 – Data Centralisation
17:30 – Data Storage
21:10 – Enhancing Customer Experience
24:45 – Surfacing the Right Data
28:20 – Navigating Regulations like GDPR

NYU Abu Dhabi Institute
The Future of AI Ethics: A Cross-Disciplinary Discussion

NYU Abu Dhabi Institute

Play Episode Listen Later Apr 25, 2025 73:19


This panel discussion considers how ethical decisions will be influenced in the future by the many applications of Artificial Intelligence. An ethicist and philosopher, an engineer who designs intelligent robots, and a computer scientist whose goal is to make "responsible AI" synonymous with "AI" each present a view of future AI ethics, then discuss where their views diverge. While each participant is a specialist conducting research into AI ethics, the discussion brings together scientific, technical, and humanistic issues under the broad category of responsibility.

Panel Members:
-- Ludovic Righetti, Electrical and Computer Engineer; Director of Machines in Motion Laboratory, Autonomous Machines in Motion
-- Jeff Sebo, Ethicist and Philosopher; Director of Center for Mind, Ethics and Policy, AI Moral Well Being
-- Julia Stoyanovich, Computer Scientist; Director of Center for Responsible AI, AI Governance

Moderated by Harold Sjursen, Professor Emeritus, NYU Tandon School of Engineering

The Best Podcast
Episode 882: Responsible AI Use

The Best Podcast

Play Episode Listen Later Apr 23, 2025 9:35


Today's podcast is a little more niche than usual, which oddly ends up being a message that I think all of us would benefit from hearing in one way or another. AI use is becoming more and more common in our lives, and it affects our brains in ways that we need to be thoughtful about. I describe that process here and send a message to younger people about the implications our use of AI has in our lives and personal development. Thanks for listening. As always, Much Love ❤️ and please take care.

A Few Things with Jim Barrood
#147 Responsible AI in housing, financial services, NFHA AI Conf. with Lisa Rice + Michael Akinwumi

A Few Things with Jim Barrood

Play Episode Listen Later Apr 17, 2025 42:28


We discussed a few things including:
1. Their career journeys
2. History of NFHA
3. Michael's impact on the organization; AI in housing/financial services
4. The April 28-30 Responsible AI Symposium: https://events.nationalfairhousing.org/2025AISymposium
5. Trends, challenges and opportunities re fair housing and technology

Lisa Rice is the President and CEO of the National Fair Housing Alliance (NFHA), the nation's only national civil rights agency solely dedicated to eliminating all forms of housing discrimination and ensuring equitable housing opportunities for all people and communities. Lisa has led her team in using civil rights principles to bring fairness and equity into the housing, lending, and technology sectors. She is a member of the Leadership Conference on Civil and Human Rights Board of Directors, Center for Responsible Lending Board of Directors, FinRegLab Board of Directors, JPMorgan Chase Consumer Advisory Council, Mortgage Bankers Association Consumer Advisory Council, Freddie Mac Affordable Housing Advisory Council, Fannie Mae Affordable Housing Advisory Council, Quicken Loans Advisory Forum, Bipartisan Policy Center's Housing Advisory Council, and Berkeley's Terner Center Advisory Council. She has received numerous awards, including the National Housing Conference's Housing Visionary Award, and was selected as one of TIME Magazine's 2024 'Closers.'

----

Dr. Michael Akinwumi translates values and principles into math and code. He ensures critical and emerging technologies like AI and blockchain enhance innovation, security, trust, and access in housing and financial systems, preventing historical injustices. As a senior leader, he collaborates with policymakers and industry to strengthen protections and advance innovation. A Rita Allen Civic Science Fellow at Rutgers, he developed an AI policy tool for state-level impact and co-developed an AI Readiness (AIR) Index to help state governments assess their AI maturity. Michael also advises AI companies on developing, deploying and adopting responsible innovations, driven by his belief that a life lived for others is most meaningful, aiming for lasting societal change through technology.

#podcast #afewthingspodcast

WorkLab
DJ Patil on Using AI to Move Fast and Fix Things

WorkLab

Play Episode Listen Later Apr 16, 2025 26:36


DJ Patil was the first-ever US Chief Data Scientist and has led some of the biggest data initiatives in government and business. He has also been at the forefront of leveraging AI to solve the thorniest problems companies face, as well as “stupid, boring problems in the back office.” He joins the WorkLab podcast to discuss the potential of AI to change business, how leaders can drive technological transformation, and why it's vital for data scientists to never lose sight of the human element. WorkLab  Subscribe to the WorkLab newsletter

Pondering AI
Regulating Addictive AI with Robert Mahari

Pondering AI

Play Episode Listen Later Apr 16, 2025 54:24


Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn't negate accountability; AI's negative impact on the data commons; economic disincentives; and interdisciplinary collaboration and future research.

Robert Mahari is a JD-PhD researcher at the MIT Media Lab and Harvard Law School, where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs. A transcript of this episode is here.

Additional Resources:
The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
Robert Mahari (website): https://robertmahari.com/

Track Changes
Scaling responsible AI: With Noelle Russell

Track Changes

Play Episode Listen Later Apr 15, 2025 43:16


Noelle Russell on harnessing the power of AI in a responsible and ethical way.

Noelle Russell compares AI to a baby tiger: it's super cute when it's small, but it can quickly grow into something huge and dangerous. As the CEO and founder of the AI Leadership Institute and as an early developer on Amazon Alexa, Noelle has a deep understanding of scaling and selling AI. This week Noelle joins Tammy to discuss why she's so passionate about teaching individuals and organizations about AI and how companies can leverage AI in the right way. It's time to learn how to tame the tiger!

Please note that the views expressed may not necessarily be those of NTT DATA.

Links:
Noelle Russell
Scaling Responsible AI
AI Leadership Institute
Learn more about Launch by NTT DATA

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Connected FM
A Strategic Guide to Transforming Facility Management with AI

Connected FM

Play Episode Listen Later Apr 15, 2025 16:08


Lorri Rowlandson, head of strategy and innovation at BGIS, outlines a three-layer approach to AI integration: individual-level desktop AI for job reengineering, operational-level AI for enhancing internal efficiencies, and client value-driven AI for service improvement. She emphasizes the importance of practical innovation, responsible AI use, employee engagement and continuous learning.

This episode is sponsored by Envoy.

Connect with Us:
LinkedIn: https://www.linkedin.com/company/ifma
Facebook: https://www.facebook.com/InternationalFacilityManagementAssociation/
Twitter: https://twitter.com/IFMA
Instagram: https://www.instagram.com/ifma_hq/
YouTube: https://youtube.com/ifmaglobal
Visit us at https://ifma.org

Science 4-Hire
Responsible AI In 2025 and Beyond – Three pillars of progress

Science 4-Hire

Play Episode Listen Later Apr 15, 2025 54:44


"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob Pulver

My guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices. Bob was my guest about a year ago, and in this episode he drops back in to discuss what has changed in the fast-paced world of AI across three pillars of responsible AI usage:

* Human-Centric AI
* AI Adoption and Readiness
* AI Regulation and Governance

These are the themes that we explore in our conversation, along with our thoughts on how the past year's progress has shaped each pillar of ethical AI.

1. Human-Centric AI

Change from Last Year:
* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.

Reasons for Change:
* Increasing comfort level with AI and experience with the benefits that it brings to our work
* Continued exploration and development of low-stakes, low-friction use cases
* AI continues to be seen as a partner and magnifier of human capabilities

What to Expect in the Next Year:
* Increased experience with human-machine partnerships
* Increased opportunities to build superpowers
* Increased adoption of human-centric tools by employers

2. AI Adoption and Readiness

Change from Last Year:
* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.
* Significant growth in AI educational resources and adoption within teams, rather than just individuals.

Reasons for Change:
* Improved understanding of AI's benefits and limitations, reducing fears and resistance.
* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.

What to Expect in the Next Year:
* More systematic frameworks for AI adoption across entire organizations.
* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.

3. AI Regulation and Governance

Change from Last Year:
* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).
* Momentum to hold vendors of AI increasingly accountable for ethical AI use.

Reasons for Change:
* Growing awareness of risks associated with unchecked AI deployment.
* Increased push to stay on the right side of AI via legislative activity at state and global levels addressing transparency, accountability, and fairness.

What to Expect in the Next Year:
* Implementation of stricter AI audits and compliance standards.
* Clearer responsibilities for vendors and organizations regarding ethical AI practices.
* Finally some concrete standards that will require fundamental changes in oversight and create messy situations.

Practical Takeaways:

What should we be doing to move the ball forward and realize AI's full potential while limiting collateral damage?

Prioritize Human-Centric AI Design
* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.
* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.

Build Robust AI Literacy and Education Programs
* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.
* Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.

Strengthen AI Governance and Oversight
* Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.
* Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.

Monitor AI Effectiveness and Impact
* Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.
* Evaluate Human Impact Regularly: Regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.

Email Bob: bob@cognitivepath.io
Listen to Bob's awesome podcast - Elevate Your AIQ

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

Data Transforming Business
Building Trust in Data: Transparency, Collaboration, and Governance for Successful AI

Data Transforming Business

Play Episode Listen Later Apr 14, 2025 22:59


"So you want trusted data, but you want it now? Building this trust really starts with transparency and collaboration. It's not just technology. It's about creating a single governed view of data that is consistent no matter who accesses it," says Errol Rodericks, Director of Product Marketing at Denodo.

In this episode of the 'Don't Panic, It's Just Data' podcast, Shawn Rogers, CEO at BARC US, speaks with Errol Rodericks from Denodo. They explore the crucial link between trusted data and successful AI initiatives, discussing key factors such as data orchestration, governance, and cost management within complex cloud environments.

We've all heard the horror stories: AI projects that fail spectacularly, delivering biased or inaccurate results. But what's the root cause of these failures? More often than not, it's a lack of focus on the data itself. Rodericks emphasises that "AI is only as good as the data it's trained on." This episode explores how organisations can avoid the "garbage in, garbage out" scenario by prioritising data quality, lineage, and responsible AI practices. Learn how to avoid AI failures and discover strategies for building an AI-ready data foundation that ensures trusted, reliable outcomes. Key topics include overcoming data bias, ETL processes, and improving data sharing practices.

Takeaways:
* Bad data leads to bad AI outputs.
* Trust in data is essential for effective AI.
* Organisations must prioritise data quality and orchestration.
* Transparency and collaboration are key to building trust in data.
* Compliance is a responsibility for the entire organisation, not just IT.
* Agility in accessing data is crucial for AI success.

Chapters:
00:00 The Importance of Data Quality in AI
02:57 Building Trust in Data Ecosystems
06:11 Navigating Complex Data Landscapes
09:11 Top-Down Pressure for AI Strategy
11:49 Responsible AI and Data Governance
15:08 Challenges in Personalisation and Compliance
17:47 The Role of Speed in Data Utilisation
20:47 Advice for CFOs on AI Investments

About Denodo
Denodo is a leader in data management. The award-winning Denodo Platform is the leading logical data management platform for transforming data into trustworthy insights and outcomes for all data-related initiatives across the enterprise, including AI and self-service. Denodo's customers in all industries all over the world have delivered trusted AI-ready and business-ready data in a third of the time and with 10x better performance than with lakehouses and other mainstream data platforms alone.

Blockchain DXB

Produced entirely using AI | Powered by Google Notebook LM

In this special edition of the AI Takeover Series—part of the Blockchain DXB main podcast—we dive deep into the Artificial Intelligence Index Report 2025, one of the most comprehensive and data-rich annual reports on global AI trends, curated by Stanford University's Human-Centered AI Institute.

HFS PODCASTS
Unfiltered Stories | Cracking the Code on AI Adoption: Eviden's, an Atos Group Company, perspective on the AADA Quadfecta

HFS PODCASTS

Play Episode Listen Later Apr 12, 2025 20:02


How are enterprises leveraging AI, analytics, data platforms, and automation to drive real business impact? In this exclusive conversation, Chetan Manjarekar, SVP and Global Head of Digital with Eviden, an Atos Group Company, joins HFS Research to break down the latest trends, challenges, and innovations shaping the generative enterprise. From customer experience to data readiness and co-innovation, discover how Eviden leads the charge in digital transformation.

The key points discussed include:
* AI & automation are reshaping enterprise transformation: GenAI and automation are now central to digital transformation, but success depends on aligning technology investments with clear business outcomes.
* Customer, employee, and partner experience are interlinked priorities: 75% of enterprises prioritize customer experience, but organizations are now equally focused on improving employee and partner experiences to drive holistic transformation.
* Data readiness is the biggest AI adoption challenge: 74% of organizations struggle with fragmented, inconsistent, and unstructured data, limiting AI's full potential. Eviden tackles this with its Data Readiness Assessment Framework to improve governance and accessibility.
* AI investments are surpassing traditional analytics: A major shift is happening; AI spending is set to grow from 19% to 31% in two years, overtaking analytics. AI is increasingly embedded in enterprise applications, automation, and smart platforms.
* Co-innovation & responsible AI will define industry leaders: Eviden emphasizes co-innovation with customers and partners to develop industry-specific AI solutions. Responsible AI, data governance, and security are critical for sustainable success.

Watch now to explore key insights and what's next for AI-powered business evolution!

Access the full report on AADA Quadfecta Services for the Generative Enterprise 2024: https://www.hfsresearch.com/research/hfs-horizons-aada-quadfecta-services-for-the-generative-enterprise-2024/

VUX World
Responsible AI, Real Results with Citizens Advice

VUX World

Play Episode Listen Later Apr 11, 2025 51:32


In this episode of the VUX World podcast, we chat all about the innovative AI journey of Citizens Advice with Stuart Pearson. We discover how a nonprofit organisation is revolutionising customer support through Caddy, an intelligent AI assistant that reduces average handle time by 50% while maintaining a rigorous ethical approach. Stuart shares the meticulous process of developing an internal AI tool that supports contact centre agents, highlighting the importance of responsible AI implementation. From initial challenges with chatbots to creating a sophisticated AI solution, this podcast reveals how organisations can leverage artificial intelligence to enhance productivity, improve service delivery, and ultimately help more people.
Subscribe to VUX World. Subscribe to The AI Ultimatum Substack. Hosted on Acast. See acast.com/privacy for more information.

AI and the Future of Work
World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder

AI and the Future of Work

Play Episode Listen Later Apr 10, 2025 20:35


To mark World Health Day, we're revisiting powerful conversations with innovators using AI to improve healthcare access, reduce costs, and return empathy to the patient experience. In this special compilation episode, you'll hear from five leaders at the intersection of healthcare and emerging technologies—sharing how AI is already reshaping how we deliver care and what's next for clinical innovation.

Leading Transformational Change with Tobias Sturesson
105. Jen Gennai: The Quest for Responsible AI

Leading Transformational Change with Tobias Sturesson

Play Episode Listen Later Apr 9, 2025 61:42


What does it really mean to build AI responsibly, at Google scale? In this episode, we sit down with Jen Gennai, the person Google turned to when it needed to build ethical AI from the ground up. Jen founded Google's Responsible Innovation team and has spent her career at the intersection of trust, safety, and emerging tech. From advising world leaders on AI governance to navigating internal pushback, Jen opens up about what it actually takes to embed ethics into one of the fastest-moving industries on the planet.
Tune in to this episode as we explore:
The importance of early ethical considerations
Trust as a core pillar of technology
Navigating AI's impact on fairness and discrimination
The future of AI-human relationships
Building AI literacy in organizations
The intersection of regulation and innovation
Links mentioned:
Connect with Jen Gennai on LinkedIn
T3 Website
'You Can Culture: Transformative Leadership Habits for a Thriving Workplace, Positive Impact and Lasting Success' is now available here.

On Tech Ethics with CITI Program
Fostering AI Literacy - On Tech Ethics

On Tech Ethics with CITI Program

Play Episode Listen Later Apr 8, 2025 25:31


Discusses the importance of fostering AI literacy in research and higher education. Our guest today is Sarah Florini, an Associate Director and Associate Professor in the Lincoln Center for Applied Ethics at Arizona State University. Sarah's work focuses on technology, social media, technology ethics, digital ethnography, and Black digital culture. Among other things, Sarah is dedicated to fostering critical AI literacy and ethical engagement with AI/ML technologies. She founded the AI and Ethics Workgroup to serve as a catalyst for critical conversations about the role of AI models in higher education.
Additional resources:
Distributed AI Research Institute: https://www.dair-institute.org/
Mystery AI Hype Theater 3000: https://www.dair-institute.org/maiht3k/
Tech Won't Save Us: https://techwontsave.us/
CITI Program's Essentials of Responsible AI course: https://about.citiprogram.org/course/essentials-of-responsible-ai/

Disruption Now
Disruption Now Episode 179: Breaking Into Tech Sales & AI Innovation with Jarrett Albritton

Disruption Now

Play Episode Listen Later Apr 4, 2025 49:36 Transcription Available


"I went from door-to-door sales to closing million-dollar tech deals — but the real secret? Asking boldly. Watch [12:33] to hear how Jarrett Albritton turned Clubhouse conversations into 80,000 followers and landed 40+ recruiters on stage. Spoiler: His AI platform is changing how careers are built!"
- Why tech sales is misunderstood (and why it's not what you think)
- The biggest misconception about tech sales that's costing you $$$
- 3 secrets to breaking into tech sales and dominating AI-driven careers
⏰ Timestamps:
0:00 – "Why Tech Sales Isn't Just for Geeks (Misconceptions)"
12:33 – "How Jarrett Albritton Leveraged Clubhouse to Empower 80K People"
25:47 – "The AI Tool Helping You Land a Job Faster (Why Right C is Different)"
38:55 – "Mastering Interviews with AI: The Future of Job Search"
45:00 – "3 P's to Succeed in Tech Sales (Persistence, Patience, Pivot)"
✅ Try Job Search Genius FREE for 7 days: jobsearchgenius.ai

Pondering AI
AI Literacy for All with Phaedra Boinodiris

Pondering AI

Play Episode Listen Later Apr 2, 2025 43:24


Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn't enough; and the hard work required to develop good AI.
Phaedra Boinodiris is IBM's Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. A transcript of this episode is here.
Additional Resources:
Phaedra's Website - https://phaedra.ai/
The Future World Alliance - https://futureworldalliance.org/

MLOps.community
Building Trust Through Technology: Responsible AI in Practice // Allegra Guinan // #298

MLOps.community

Play Episode Listen Later Mar 25, 2025 47:08


Building Trust Through Technology: Responsible AI in Practice // MLOps Podcast #298 with Allegra Guinan, Co-founder of Lumiera.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
Allegra joins the podcast to discuss how Responsible AI (RAI) extends beyond traditional pillars like transparency and privacy. While these foundational elements are crucial, true RAI success requires deeply embedding responsible practices into organizational culture and decision-making processes. Drawing from Lumiera's comprehensive approach, Allegra shares how organizations can move from checkbox compliance to genuine RAI integration that drives innovation and sustainable AI adoption.
// Bio
Allegra is a technical leader with a background in managing data and enterprise engineering portfolios. Having built her career bridging technical teams and business stakeholders, she's seen the ins and outs of how decisions are made across organizations. She combines her understanding of data value chains, passion for responsible technology, and practical experience guiding teams through complex implementations into her role as co-founder and CTO of Lumiera.
// Related Links
Website: https://www.lumiera.ai/
Weekly newsletter: https://lumiera.beehiiv.com/
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Allegra on LinkedIn: /allegraguinan
Timestamps:
[00:00] Allegra's preferred coffee
[00:14] Takeaways
[01:11] Responsible AI principles
[03:13] Shades of Transparency
[07:56] Effective questioning for clarity
[11:17] Managing stakeholder input effectively
[14:06] Business to Tech Translation
[19:30] Responsible AI challenges
[23:59] Successful plan vs Retroactive responsibility
[28:38] AI product robustness explained
[30:44] AI transparency vs Engagement
[34:10] Efficient interaction preferences
[37:57] Preserving human essence
[39:51] Conflict and growth in life
[46:02] Subscribe to Allegra's Weekly Newsletter!

Disruption Now
Disruption Now Episode 178: Entrepreneurship, Innovation & AI: A Conversation with Seema Alexander

Disruption Now

Play Episode Listen Later Mar 24, 2025 51:31


In this episode of the Disruption Now podcast, host Rob Richardson engages in a dynamic conversation with Seema Alexander, a seasoned entrepreneur and business strategist. They delve into Seema's rich background, from her upbringing in an entrepreneurial immigrant family to her current roles as co-founder of Virgent AI and DC Startup & Tech Week. The discussion traverses the evolution of entrepreneurship, the pivotal role of emerging technologies like AI, and the significance of community collaboration in fostering innovation.
Three Key Takeaways for Disruptors:
Embrace Emerging Technologies: Seema emphasizes entrepreneurs' need to stay abreast of technological advancements, particularly AI. She notes that the rapid pace of AI development means businesses must adapt swiftly or risk obsolescence.
Focus on Strategic Positioning: Drawing from her experience with the UNIQUE Method™, Seema highlights the importance of businesses identifying their unique value propositions to stand out in competitive markets.
Foster Community Engagement: Through her work with DC Startup & Tech Week, Seema illustrates how building and participating in entrepreneurial communities can lead to shared knowledge, resources, and opportunities for collaboration, driving collective growth and innovation.
This episode offers valuable insights into navigating the evolving landscape of entrepreneurship and the critical role of adaptability and community in achieving sustained success.
Seema Alexander:
Co-Chair, DC Startup & Tech Week
Co-Founder & President, Virgent AI
Creator, U.N.I.Q.U.E. Method™
Business Podcast Host, From Spaghetti to Growth
Connect: LinkedIn
Book a 15-Min Founder/CEO Meeting

HR Data Labs podcast
Beth White - The Evolution of AI in the Workplace

HR Data Labs podcast

Play Episode Listen Later Mar 20, 2025 35:42 Transcription Available


Send us a text
Beth White, Founder and CEO of MeBeBot, joins us this episode to discuss lessons learned from working with AI. She also shares concerns regarding the continued widespread integration of AI in organizations as well as hopes for the future of AI.
[0:00] Introduction
Welcome, Beth!
Today's Topic: The Evolution of AI in the Workplace
[5:21] What has Beth learned over the years working with AI?
How AI has evolved from basic if-then statements
The journey toward developing conversational AI
[10:20] What are Beth's concerns regarding AI?
Generative AI consumes an unprecedented amount of energy
AI regulation remains largely unstructured, especially within organizations
The importance of grounding the AI with human checks and balances
[26:23] What are Beth's hopes for the future of AI?
Educational programs developed by a diverse set of professionals
The natural integration of AI into HR practices
[34:56] Closing
Thanks for listening!
Quick Quote
"[If we] help educate and train people on AI today, we'll have a more employable workforce for the future."
Contact:
Beth's LinkedIn
David's LinkedIn
Podcast Manager: Karissa Harris
Email us!
Production by Affogato Media

Pondering AI
Auditing AI with Ryan Carrier

Pondering AI

Play Episode Listen Later Mar 19, 2025 52:31


Ryan Carrier trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective.  Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets.  A transcript of this episode is here.    Ryan Carrier is the Executive Director of ForHumanity, a non-profit organization improving AI outcomes through increased accountability and oversight. 

On the Brink with Andi Simon
AI Must Transform Our Communication Strategy. Just Ask Dan Nestle!

On the Brink with Andi Simon

Play Episode Listen Later Mar 16, 2025 36:00


In this thought-provoking episode of On the Brink with Andi Simon, we welcome Dan Nestle, a strategic communications expert and AI enthusiast, to explore the transformative role of artificial intelligence in marketing, branding, and storytelling. With over 20 years of corporate and agency experience, Dan has been at the forefront of digital and content innovation, helping businesses adapt to the rapidly evolving communications landscape. As AI tools become more sophisticated, many professionals are left wondering: Will AI replace human creativity? Can AI-generated content be authentic? How can businesses use AI without losing their unique voice? Dan tackles these pressing questions, offering real-world insights into how AI can serve as a powerful assistant—rather than a replacement—for communicators, marketers, and business leaders. During our conversation, Dan shares his fascinating career trajectory, from teaching English in Japan to leading global corporate communications teams. Now, as the founder of Inquisitive Communications, he helps organizations navigate AI's impact on content strategy, storytelling, and audience engagement. He also provides a step-by-step breakdown of the AI tools he uses daily to streamline content creation, repurpose valuable insights, and enhance branding efforts without sacrificing authenticity. We'll discuss the importance of curiosity in embracing new technologies, the fear and hesitation many professionals feel around AI, and why adopting AI-driven workflows can save time, increase efficiency, and improve creativity. Whether you're a seasoned marketer, an entrepreneur, or just starting to explore AI's potential, this episode is packed with actionable strategies to help you integrate AI into your communications and branding efforts. Get ready to rethink how you approach content in the age of AI, and learn why being human is still the most valuable differentiator in a tech-driven world. 
If you prefer to watch the video of our podcast, click here.

Down To Business
Scaling Responsible AI: From Enthusiasm to Execution

Down To Business

Play Episode Listen Later Mar 15, 2025 11:53


The AI Leadership Institute is an organization founded in 2015 to help businesses and individuals understand and implement artificial intelligence responsibly. That was nearly 10 years ago, and its founder, Noelle Russell, has spent most of her recent career advising businesses on how to use AI. She has distilled all of this into her latest book, Scaling Responsible AI: From Enthusiasm to Execution. And she joined Bobby earlier.

The MadTech Podcast
MadTech Daily: Latest DOJ Proposal Still Calls for Google Breakup; TikTok US Business Close to Sale?

The MadTech Podcast

Play Episode Listen Later Mar 11, 2025 1:50


Today Dot discusses the DOJ's revised proposed judgement to remedy Google's search dominance, Google removing diverse language from its Responsible AI team webpage, and a potential sale on the horizon for TikTok's US business. 

AI and the Future of Work
The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust

AI and the Future of Work

Play Episode Listen Later Mar 6, 2025 19:27


Coinciding with International Women's Day this week, this special episode of AI and the Future of Work highlights key conversations with women leading the way in AI ethics, governance, and accountability.In this curated compilation, we feature four remarkable experts working to create a more equitable, trustworthy, and responsible AI future:

HR Data Labs podcast
Martha Curioni - How to Responsibly Integrate AI into HR

HR Data Labs podcast

Play Episode Listen Later Mar 6, 2025 37:29 Transcription Available


Send us a text
Martha Curioni, People Analytics Consultant at Provisio Data Solutions, joins us this episode to discuss some best practices for responsible AI implementation in organizations. She also explains the importance of integrating AI models into existing processes and allowing users to submit feedback on the AI.
[00:00] Introduction
Welcome, Martha!
Today's Topic: How to Responsibly Integrate AI into HR
[05:33] What does "responsible AI implementation" mean?
Intersection with data security and privacy
Data governance regarding new AI processes
[14:30] What are the essential steps for responsible AI implementation?
Redesign HR processes around the newly implemented AI
Checking that the new AI is accurate and reliable
[26:59] How can organizations avoid training AI models on bad data?
Building unbiased AI systems
Implementing user feedback mechanisms
[35:33] Closing
Thanks for listening!
Quick Quote
"Companies need to ensure that the AI they're going to be using is implemented in a way that is transparent, minimizes the influence of bias, supports fairness, and empowers employees and managers to make better decisions."
Resources:
Martha's previous episode
Github account
Contact:
Martha's LinkedIn
David's LinkedIn
Dwight's LinkedIn
Podcast Manager: Karissa Harris
Email us!
Production by Affogato Media

Pondering AI
Ethical by Design with Olivia Gambelin

Pondering AI

Play Episode Listen Later Mar 5, 2025 51:26


Olivia Gambelin values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. Olivia and Kimberly discuss philogagging; us vs. "them" (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters. A transcript of this episode is here.
Olivia Gambelin is a renowned AI Ethicist and the Founder of Ethical Intelligence, the world's largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI.
Additional Resources:
Responsible AI: Implement an Ethical Approach in Your Organization - Book
Plato & a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes - Book
The Values Canvas - RAI Design Tool
Women Shaping the Future of Responsible AI - Organization
In Pursuit of Good Tech | Subscribe - Newsletter

Disruption Now
Disruption Now Episode 177: Disrupting Business with Gratitude with Rachel Desrochers

Disruption Now

Play Episode Listen Later Mar 4, 2025 36:38


In this episode of the Disruption Now Podcast, host Rob Richardson sits down with Rachel Desrochers, the founder of Gratitude Collective and a passionate advocate for gratitude, entrepreneurship, and community building. Rachel shares her journey of turning a simple idea into a thriving business while fostering a culture of kindness and connection. She discusses the power of gratitude in both personal and professional life, the challenges of entrepreneurship, and how she supports other business owners through her work with the Incubator Kitchen Collective. Tune in for an inspiring conversation on purpose-driven business, resilience, and the impact of gratitude on success.
Top 3 Things You'll Learn from This Episode:
1. The Power of Gratitude in Business & Life - Practicing gratitude can fuel success, strengthen leadership, and build meaningful connections.
2. Entrepreneurship with Purpose - Rachel Desrochers shares insights on growing a values-driven business while creating opportunities for others.
3. Building a Community-Driven Brand - Lessons from Rachel's journey in launching Gratitude Collective and supporting entrepreneurs through the Incubator Kitchen Collective.
Rachel's Social Media Pages:
LinkedIn: https://www.linkedin.com/in/rachel-desrochers-b2356760/
Websites:
https://www.thegratitudecollective.org/ (Company)
https://www.powertopursue.org/ (Company)
https://www.incubatorkitchencollective.org/ (Company)
Disruption Now: building a fair share for the Culture and Media. Join us and disrupt.
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
LinkedIn: https://www.linkedin.com/in/robrichardsonjr/
Instagram: https://www.instagram.com/robforohio/
Website: https://podcast.disruptionnow.com/

On Tech Ethics with CITI Program
Importance of Data Privacy Compliance and ESG Reporting - On Tech Ethics

On Tech Ethics with CITI Program

Play Episode Listen Later Mar 4, 2025 22:12


Discusses data privacy compliance and environmental, social, and governance (ESG) reporting. Our guest today is Katrina Destrée, a globally experienced privacy and sustainability professional. Katrina's work in privacy and sustainability focuses on privacy programs, ESG reporting, awareness and training, and strategic communications.
Additional resources:
International Association of Privacy Professionals (IAPP): https://iapp.org/
ISACA: https://www.isaca.org/
Global Enabling Sustainability Initiative (GeSI): https://www.gesi.org/
Agréa Privacy & ESG: https://agreaprivacyesg.com/
CITI Program's "GDPR for Research and Higher Ed" course: https://about.citiprogram.org/course/gdpr-for-research-and-higher-ed/
CITI Program's "Big Data and Data Science Research Ethics" course: https://about.citiprogram.org/course/big-data-and-data-science-research-ethics/
CITI Program's "Essentials of Responsible AI" course: https://about.citiprogram.org/course/essentials-of-responsible-ai/

AI in Education Podcast
From Butterflies to Bias: Dr. Nici Sweaney on Building Responsible AI in Schools

AI in Education Podcast

Play Episode Listen Later Feb 27, 2025 43:22


In this episode Dan and Ray sit down with Dr. Nici (Nikki) Sweaney—founder of “AI Her Way,” gender equality advocate, and globally recognized AI consultant. After a fascinating career journey from butterfly ecology to statistics to AI, Nici now helps schools and organizations create ethical, responsible, and effective AI strategies. She shares her insights on responsible AI adoption, gender equity, bias, and how schools can proactively plan for an AI-driven future.
Follow Nici on LinkedIn: https://www.linkedin.com/in/dr-nici-sweaney/
Nici's AI Her Way website
Book in a discovery call with Nici
Make an inquiry about PD for your school: https://www.aiherway.com.au/professional-development
Book Nici as a speaker: https://www.aiherway.com.au/speaking-engagements
Nici will be speaking at lots of events in March, if you'd like to hear more and ask your own questions!
March 5 - Australian Retirement Trust live podcast recording, Brisbane
March 6 - Canon, online event for IWD
March 6 - National Press Club, "Women in Media Canberra Networking Night: Democracy under threat? Unpacking mis/disinformation in Australia", Canberra
March 7 - Powerful Steps, IWD Event, 'Empowering Women, Changing Lives', Sydney
March 13 - Hoyts, IWD Event, Advancing Women in the AI Era, Sydney
March 17 - Women in Data and Digital (Australian Public Service), online event for IWD
March 25 - ARIA's Innovator Conference, Sydney
March 27 - Learning Environments Australia Conference, "Architecting Future Learning Environments with AI", Sydney
Key Topics & Takeaways
Nici's Career Path: Ecology, Academia & AI
How studying butterfly behavior and coding in R sparked her interest in data and algorithms.
Transition from 17 years in academia to AI consulting, focusing on statistics, data analytics, and ethical tech.
The 'Aha' Moment with ChatGPT
Discovering ChatGPT in late 2022 and realizing its massive potential for education—and society at large.
Balancing excitement and existential questions ("What is the point of humanity?") when first encountering generative AI.
Bias, Equity & Ethical AI
How AI systems inherit the biases found in the internet's data—and why that's problematic for underrepresented groups.
Why women's perspectives (and other minority voices) are crucial in AI development and how current data sets often perpetuate stereotypes.
Nici's passion for closing the AI gender gap and practical steps schools can take.
Building AI Literacy & Culture in Schools
The importance of having a written guideline (rather than a static policy) for AI use in schools.
How to engage teachers and staff: start with small wins—solving real pain points such as lesson planning or admin tasks.
Creating a "community of practice" so that staff share prompts, workflows, and best practices.
Parent and Community Involvement
Addressing misconceptions and fears by informing families about AI's capabilities, risks, and benefits.
Why schools must proactively explain why they're teaching AI skills—and how they're doing so responsibly.
Rethinking Assessment & Student Skills
Moving beyond "fact recall" to deeper inquiry, collaboration, ethical reflection, and higher-order thinking.
Teaching students how AI works (at a conceptual level) so they can use it responsibly and critically.
Encouraging leadership traits like empathy, diversity of thought, and clear communication—assets for both human collaboration and AI prompting.
Global Perspectives & Data Sovereignty
The challenges of relying on global tech giants for AI tools and data security.
Understanding AI as a "private commodity," not a universal public resource—what that means for Australian schools and beyond.
Practical Steps for School Leaders
Survey staff to find out their AI familiarity, pain points, and training needs.
Start with low-hanging fruit: use AI to streamline daily tasks, then build toward larger strategic workflows.
Sustain momentum via ongoing PD, regular sharing sessions, and involving all stakeholders, including parents.

AI, Government, and the Future by Alan Pentz
Balancing AI Governance and Innovation with Erica Werneman Root of EWR Consulting

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Feb 26, 2025 51:06


In this episode of AI, Government, and the Future, host Max Romanik is joined by Erica Werneman Root, Founder of EWR Consulting, to discuss the complex interplay between AI governance, regulation, and practical implementation. Drawing from her unique background in economics and law, Erica explores how organizations can navigate AI deployment while balancing innovation with responsible governance.

Disruption Now
Disruption Now Episode 176: Rethinking Innovation with Tremain Davis

Disruption Now

Play Episode Listen Later Feb 25, 2025 60:17


In this episode of Disruption Now, Tremain Davis shares forward-thinking insights on how innovation is upending traditional business models and reshaping entire industries. Here are three things you can learn from this episode:
Adaptive Leadership: Davis disrupts conventional norms by staying agile, empowering his team, and constantly reevaluating strategies to drive transformation.
Leveraging Disruptive Technologies: Learn how embracing new technologies can drive sustainable growth and create a competitive edge. Tremain's approach as a leader integrates cutting-edge digital solutions that challenge outdated business practices.
Strategic Risk-Taking: Understand the value of taking calculated risks and maintaining a proactive mindset to turn challenges into opportunities. As a leader, Davis exemplifies disruption by challenging the status quo and fostering a culture of innovation that inspires others to break away from traditional molds.
Davis's journey to becoming a disruptive leader in his community is rooted in his commitment to challenging outdated paradigms and championing local change. He transforms his business and drives community-wide innovation and resilience by empowering emerging entrepreneurs, mentoring future leaders, and building collaborative networks.
Tremain Davis's social media pages:
LinkedIn: https://www.linkedin.com/in/tremain-davis-348504a4/
Website: https://www.thinkpgc.org/
Instagram: https://www.instagram.com/iamtremain/
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
LinkedIn: https://www.linkedin.com/in/robrichardsonjr/
Instagram: https://www.instagram.com/robforohio/
Website: https://podcast.disruptionnow.com/

Trending In Education
Pursuing Equity and Responsible AI with Dr. Allison Scott

Trending In Education

Play Episode Listen Later Feb 24, 2025 25:52


Dr. Allison Scott, CEO of the Kapor Foundation, joins Mike Palmer on Trending in Education to discuss the crucial intersection of technology, education, and equity. The conversation explores the persistent lack of diversity in the tech industry and the urgent need to prepare students for the AI-driven future. Dr. Scott emphasizes the importance of critical thinking, ethical considerations, and creating a more inclusive tech ecosystem that benefits everyone. This episode offers valuable insights for educators, parents, and anyone interested in the transformative power of technology and its impact on society. We reference the WEF Future of Jobs Report and the Kapor Foundation's Guide to Responsible AI.
Key Takeaways:
The tech industry is not representative of the population. This lack of diversity limits innovation and economic opportunity.
AI is rapidly changing the job market. The fastest-growing jobs and skills are related to AI, big data, and cybersecurity.
Critical thinking and ethical considerations are essential in AI development and use. Students need to be prepared to analyze and evaluate AI technologies.
Diversity in tech is crucial for creating AI solutions that benefit everyone. A broader understanding of AI will be beneficial across various fields.
Educators have a vital role to play in preparing students for the age of AI. They need to foster critical thinking, curiosity, and a passion for learning.
Why You Should Listen:
Dr. Allison Scott provides a compelling vision for the future of tech education. She emphasizes the importance of diversity, critical thinking, and ethical considerations in the development and use of AI. This episode is a must-listen for anyone interested in the future of work, education, and technology. Subscribe to Trending in Education to stay informed about the latest trends and insights in the field.
Timestamps:
00:00 Introduction and Guest Welcome
00:43 Dr. Allison Scott's Origin Story
01:07 Understanding Inequality in Education
02:55 The Kapor Foundation's Mission
03:36 The Leaky Tech Pipeline Framework
05:40 Responsible AI and Diversity
06:52 Preparing the Next Generation for AI
10:38 Critical Thinking and AI Education
11:25 Future of Work and Skills
13:56 Encouraging Innovation and Problem Solving
22:04 Philanthropy and Nonprofits in Tech
23:19 Conclusion and Takeaways

Pondering AI
The Nature of Learning with Helen Beetham

Pondering AI

Play Episode Listen Later Feb 19, 2025 45:57


Helen Beetham isn't waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course. Helen and Kimberly discuss the purpose of higher education; the current two tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI's teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.
Helen Beetham is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include "Rethinking Pedagogy for a Digital Age". Her Substack, Imperfect Offerings, is recommended by the Guardian/Observer for its wise and thoughtful critique of generative AI.
Additional Resources:
Imperfect Offerings - https://helenbeetham.substack.com/
Audrey Watters - https://audreywatters.com/
Kathryn (Katie) Conrad - https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/
Anna Mills - https://www.linkedin.com/in/anna-mills-oer/
Dr. Maya Indira Ganesh - https://www.linkedin.com/in/dr-des-maya-indira-ganesh/
Tech(nically) Politics - https://www.technicallypolitics.org/
LOG OFF - logoffmovement.org/
Rest of World - www.restofworld.org/
Derechos Digitales - www.derechosdigitales.org
A transcript of this episode is here.

Alter Everything
178: From White House Advisory to AI Entrepreneurship

Alter Everything

Play Episode Listen Later Feb 12, 2025 25:56


In this episode of Alter Everything, we sit down with Eric Daimler, CEO and co-founder of Conexus and the first AI advisor to the White House under President Obama. Eric explores how AI-driven data consolidation is transforming industries, the critical role of neuro-symbolic AI, and the evolving landscape of AI regulation. He shares insights on AI's impact across sectors like healthcare and defense, highlighting the importance of inclusive discussions on AI safety and governance. Discover how responsible AI implementation can drive innovation while ensuring ethical considerations remain at the forefront.

Panelists:
Eric Daimler, Chair, CEO & Co-Founder @ Conexus - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Neuro-symbolic AI
Uber Data Consolidation

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!

This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.

Disruption Now
Disruption Now Episode 175: Redefining Style, Confidence, and Resilience

Disruption Now

Play Episode Listen Later Feb 10, 2025 19:44


Today's Disruption Now episode is with LaMar "The Coutureman" Wright. He is a trailblazer in curated lifestyle expertise and image consulting through his business, The Coutureman LLC. As a disruptor in his field, he challenges traditional notions of style and personal branding, helping individuals cultivate an elevated sense of confidence and self-expression through fashion, grooming, and lifestyle refinement. His approach goes beyond aesthetics: it's about empowerment, self-identity, and making a statement in both personal and professional spaces.

LaMar shared deep insights into his journey, highlighting pivotal life lessons that have shaped his career and mindset, and how life's challenges and setbacks can serve as catalysts for growth and transformation. He also spoke about the importance of authenticity, emphasizing that success isn't about conforming to societal expectations but about embracing and elevating one's unique identity. His presence in the image consulting space is about more than style; it's about redefining how individuals own their narratives, make bold statements, and craft legacies beyond clothing.

LaMar Wright social media links:
LinkedIn: https://www.linkedin.com/in/thecoutureman/
Website: https://thecoutureman.com/

Disruption Now:
YouTube: https://www.youtube.com/@UCWDYBJSzBoqgCd1ADPVttSw
LinkedIn Page: https://www.linkedin.com/in/robrichardsonjr/
Instagram: https://www.instagram.com/robforohio/
Apply to get on the podcast: https://form.typeform.com/to/Ir6Agmzr

Returns on Investment
Responsible AI in the Age of Trump with Ravit Dotan

Returns on Investment

Play Episode Listen Later Feb 5, 2025 18:31


In the latest Agents of Impact podcast, Ravit Dotan, an AI governance advisor and researcher in Pittsburgh, joins David Bank to share insights on the new state of play for responsible AI in the early days of the Trump administration and what DeepSeek, the Chinese competitor to OpenAI, means for intellectual property, open-source code and national security.

Pondering AI
Ethics for Engineers with Steven Kelts

Pondering AI

Play Episode Listen Later Feb 5, 2025 46:45


Steven Kelts engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care.

Steven and Kimberly discuss Ashley Casovan's inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices.

Steven Kelts is a lecturer in Princeton's University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human's Responsible University Network.

Additional Resources:
Princeton Agile Ethics Program: https://agile-ethics.princeton.edu
CITP Talk 11/19/24: Agile Ethics Theory and Evidence
Oktar, Lombrozo et al: Changing Moral Judgements
4-Stage Theory of Ethical Decision Making: An Introduction
Enabling Engineers through "Moral Imagination" (Google)

A transcript of this episode is here.

Returns on Investment
Mission driven funders react to feds funding freeze + responsible AI strategies.

Returns on Investment

Play Episode Listen Later Jan 31, 2025 19:20


Host Brian Walsh takes up ImpactAlpha's top stories with editor David Bank. Stories featured in this week's episode:
Mission-driven funders scramble to respond to federal funding freeze, by Amy Cortese and David Bank
Call roundup: https://impactalpha.com/calls/
How machine learning and AI can be harnessed for mission-based lending, by Mar Diteos Rendon, Nicole Jansma and Sachi Shenoy

The FOX News Rundown
Business Rundown: The "Creator Economy" Braces For A TikTok Ban

The FOX News Rundown

Play Episode Listen Later Jan 17, 2025 20:53


The TikTok ban has been upheld by the Supreme Court. If the Chinese-owned social media platform is not sold to a U.S. owner by Sunday, January 19th, millions of consumers and content creators on the app will need to pack up and find a new platform to post their videos. The "creator economy" is now a multi-billion dollar industry in America, so how will TikTok creators be impacted? Lydia Hu speaks with Brittany Magsig, an Airbnb entrepreneur with millions of likes and hundreds of thousands of followers on TikTok, about how the possible ban puts her livelihood in jeopardy. Later, Lydia speaks with Anjana Susarla, Omura-Saxena Professor of Responsible AI at Michigan State University's business school, about where creators may go after TikTok. Photo Credit: AP Learn more about your ad choices. Visit podcastchoices.com/adchoices