Podcasts about responsible AI

  • 577 podcasts
  • 1,005 episodes
  • 35m avg. duration
  • 1 new episode daily
  • Latest episode: Jul 16, 2025

POPULARITY: trend chart, 2017–2024



Latest podcast episodes about responsible AI

Disruption Now
Disruption Now Episode 183: What Social Media Algorithms Are Doing to Girls in 2025

Jul 16, 2025 · 28:57


She was told she wasn't tall enough, thin enough, pretty enough. Now she's rewriting the rules of beauty for everyone else. In episode 183 of the Disruption Now Podcast, Sian Bitner-Kearney, founder of Rock Your Beauty, reveals how perfectionism, filters, and social media lies damage self-worth—and what it takes to break free. Her nonprofit is helping women embrace their authentic selves, one fashion show and workshop at a time.

Disruption Now
Disruption Now Episode 182: 2025 Job Hunt Is Broken | 6 Rules That Still Work

Jul 16, 2025 · 66:28


Laid off or lost in the noise? You're not alone. In episode 182, Nicole Dunbar—Cornell-trained strategist and viral LinkedIn voice—joins Disruption Now to dissect the "white collar recession." From AI displacement to hiring freezes, she breaks down how even top-tier pros are invisible in today's job market. Learn:
- 6 proven job search strategies (only 1 involves job boards)
- Why resumes alone fail—and what recruiters actually trust
- The mindset shift every mid-career pro must make now
- How to "network" without small talk or selling out
- The emotional trap that sabotages high performers
The hiring game changed. Here's how to stay in it.

Disruption Now
Disruption Now Episode 184: This Founder Builds Black Wealth with AI

Jul 16, 2025 · 41:48


She's not waiting for permission. Flavilla Fongang went from the ghettos of Paris to building Black Rise—one of the UK's boldest tech tribes. She's mixing identity, data, and storytelling to scale Black power through business.

In this episode:
- Why storytelling beats pitching in tech
- How she turned oil & gas into a launchpad
- Her strategy for building community-first platforms
- Why AI is non-negotiable for Black excellence
- The real ROI of diverse ecosystems

Timestamps:
00:00 Intro — Paris to Power
01:42 Childhood in the Paris ghettos
03:55 Moving to London & early struggles
06:18 From oil & gas to fashion to tech
09:44 Founding 3 Colours Rule
12:30 How storytelling became her weapon
15:02 Why she built GTA Black Women in Tech
17:50 Launching Black Rise — the AI-powered tribe
21:33 Scaling community and credibility
25:01 Diversity with data, not feelings
28:40 What leadership looks like in 2025
31:09 Her advice to future founders
34:00 Closing thoughts + how to connect

About Flavilla Fongang: Multi-award-winning entrepreneur. Founder of 3 Colours Rule, GTA Black Women in Tech, and now Black Rise. Former oil & gas exec turned tech community builder. UN Brand Partner. Named UK's Most Influential Woman in Tech (Computer Weekly), Global Top 100 MIPAD Innovator, and Entrepreneur of the Year (BTA 2023). She also serves as an entrepreneurship expert at Oxford University Saïd Business School. Watch this if you lead with identity and build with vision.

Follow Flavilla Fongang:
LinkedIn: https://www.linkedin.com/in/flavillafongang
Twitter (X): https://x.com/FlavillaFongang
Instagram: https://www.instagram.com/flavillafongang
TikTok: https://www.tiktok.com/@flavillaf
Website: https://www.flavillafongang.com
Black Rise: https://www.theblackrise.com
GTA Black Women in Tech: https://theblackwomenintech.com
3 Colours Rule: https://www.3coloursrule.com

#blackfounder #techstorytelling #communitytech

Disruption Now: Disrupting the status quo, making emerging tech human-centric and accessible to all.
Website: https://disruptionnow.com/
Apply to get on the podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
Music: Powerful Beat - Oleksandr Stepanov

CanadianSME Small Business Podcast
SalesChoice: Advancing Responsible AI for Sales Professionals

Jul 16, 2025 · 30:36


Welcome to the CanadianSME Small Business Podcast, hosted by Kripa Anand, where we explore cutting-edge technologies and ethical considerations shaping the future of business. Today, we're focusing on the transformative power of Artificial Intelligence in sales and the critical role of ethics and regulations in its application. As AI drives sales productivity and revenue growth, its responsible and ethical use becomes crucial for businesses seeking to leverage its full potential.

Joining us today is Dr. Cindy Gordon, CEO & Founder of SalesChoice, a SaaS and Data Sciences company focused on enabling human advantage through Trusted AI Methods. Dr. Gordon is a leading voice in AI and ethics, and today we'll discuss the intersection of AI, ethics, and regulations in sales. Let's dive in!

Key Highlights:
1. AI, Ethics, Regulations, and the Application of All 3 in Sales: Dr. Gordon will explain how AI, ethics, and regulations are converging in the context of sales, and share the key considerations businesses need to ensure they're using AI responsibly and ethically in their sales processes.
2. AI Applications in Sales: Dr. Gordon will discuss some of the most effective ways businesses can leverage AI to improve sales performance, increase revenue, and enhance sales team productivity.
3. SalesChoice's AI Platform: InsightEngine™: Dr. Gordon will talk about SalesChoice's AI platform, including SalesInsights™ and MoodInsights™, and how they address different aspects of sales and employee productivity.
4. Trusted AI Methods: Dr. Gordon will explain what “Trusted AI Methods” means in practice and why trust is so important in the adoption of AI solutions.
5. AI Enablement Advisory and Strategy Solutions: Dr. Gordon will outline the services SalesChoice provides and how they help organizations across diverse AI use cases.

Special Thanks to Our Partners:
RBC: https://www.rbcroyalbank.com/dms/business/accounts/beyond-banking/index.html
UPS: https://solutions.ups.com/ca-beunstoppable.html?WT.mc_id=BUSMEWA
Google: https://www.google.ca/

For more expert insights, visit www.canadiansme.ca and subscribe to the CanadianSME Small Business Magazine. Stay innovative, stay informed, and thrive in the digital age!

Disclaimer: The information shared in this podcast is for general informational purposes only and should not be considered as direct financial or business advice. Always consult with a qualified professional for advice specific to your situation.

CPO PLAYBOOK
76 How PepsiCo Uses Responsible AI to Boost Employee Productivity

Jul 15, 2025 · 31:47


How can companies harness Responsible AI without losing the human touch? PepsiCo's VP of People Solutions, Mark Sankarsingh, shares how the company boosts productivity and streamlines HR—while safeguarding trust, ethics, and human judgment. Learn how to lead with integrity as you scale teams and adopt AI in a digital-first world.

Disruption Now
Disruption Now Episode 181: Wall Street Fund Manager Reveals AI's Next $Trillion Opportunity

Jul 14, 2025 · 33:03


In this episode of the Disruption Now Podcast, host Rob Richardson sits down with Jacob D. Frankel (Kobi), the visionary Founder and CEO of Beyond Alpha Ventures. Jacob shares his journey from managing over $355 million in assets on Wall Street to leading a multi-strategy family office and hedge fund that invests in transformative technologies like AI, quantum computing, and cybersecurity. He discusses the importance of investing with purpose, the future of venture capital, and how Beyond Alpha Ventures is shaping the infrastructure of an AI-driven future. Tune in for an insightful conversation on strategic investing, innovation, and building a legacy.

Top 3 Things You'll Learn from This Episode:
- Investing with Purpose – Jacob emphasizes the significance of deploying capital in ways that make a tangible difference, focusing on ventures that combine financial opportunity with real-world impact.
- Navigating Market Volatility – Learn how Beyond Alpha Ventures capitalizes on market dislocations and geopolitical shifts to identify high-conviction investment opportunities.
- The Future of Venture Capital – Discover Jacob's insights on the evolving landscape of venture capital, including the rise of AI, quantum computing, and the importance of ethical foresight in investment strategies.

Jacob's Social Media Pages:
LinkedIn: https://www.linkedin.com/in/jacobfrankelprivateequity
Website: https://www.beyondalphaventures.com/

Disruption Now: Building a Fair Share for Culture and Media. Join us and disrupt.
Website: https://bit.ly/2VUO9sf
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
Facebook: https://bit.ly/303IU8j
Instagram: https://bit.ly/2YOLl26
Twitter: https://bit.ly/2KfLaTf

The Family History AI Show
EP27: AI Image Restoration Concerns, Perplexity's Future, Copyright Cases Are Shaping The Future of AI, Project Workspaces Help You Stay Organized

Jul 14, 2025 · 73:23


Co-hosts Mark Thompson and Steve Little examine the controversial rise of AI image "restoration" and discuss how entirely new images are being generated, rather than the original photos being restored. This is raising concerns about the preservation of authentic family photos. They discuss Mark's reconsideration of canceling his Perplexity subscription after rediscovering its unique strengths for supporting research. The hosts analyze recent court rulings that permit AI training on legally acquired content, plus Disney's ongoing case against Midjourney. This week's Tip of the Week explores how project workspaces in ChatGPT and Claude can greatly simplify your genealogical research. In RapidFire, the hosts cover Meta's aggressive AI hiring spree, the proliferation of AI tools in everyday software, including a new genealogy transcription tool from Dan Maloney, and the importance of reading AI news critically.

Timestamps:
In the News:
06:50 The Pros and Cons of "Restoring" Family Photos with AI
23:58 Mark is Cancelling Perplexity... Maybe
32:33 AI Copyright Cases Are Starting to Work Their Way Through the Courts
Tip of the Week:
40:09 How Project Workspaces Help Genealogists Stay Organized
RapidFire:
48:51 Meta Goes on a Hiring Spree
56:09 AI Is Everywhere!
01:06:00 Reading AI News Responsibly

Resource Links:
OpenAI: Introducing 4o Image Generation: https://openai.com/index/introducing-4o-image-generation/
Perplexity: https://www.perplexity.ai/
How does Perplexity work?: https://www.perplexity.ai/help-center/en/articles/10352895-how-does-perplexity-work
Anthropic wins key US ruling on AI training in authors' copyright lawsuit: https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
Meta wins AI copyright lawsuit as US judge rules against authors: https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
Disney, Universal sue image creator Midjourney for copyright infringement: https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
Disney and Universal Sue A.I. Firm for Copyright Infringement: https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html
Projects in ChatGPT: https://help.openai.com/en/articles/10169521-projects-in-chatgpt
Meta shares hit all-time high as Mark Zuckerberg goes on AI hiring blitz: https://www.cnbc.com/2025/06/30/meta-hits-all-time-mark-zuckerberg-ai-blitz.html
Here's What Mark Zuckerberg Is Offering Top AI Talent: https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million/
Genealogy Assistant AI Handwritten Text Recognition Tool: https://www.genea.ca/htr-tool/
Borland Genetics: https://borlandgenetics.com/
Illusion of Thinking: https://machinelearning.apple.com/research/illusion-of-thinking
Simon Willison: Seven replies to the viral Apple reasoning paper -- and why they fall short: https://simonwillison.net/2025/Jun/15/viral-apple-reasoning-paper/
MIT: Your Brain on ChatGPT: https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated: https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450
Guiding Principles for Responsible AI in Genealogy: https://craigen.org/

Tags: Artificial Intelligence, Genealogy, Family History, AI Tools, Image Generation, AI Ethics, Perplexity, ChatGPT, Claude, Meta, Copyright Law, AI Training, Photo Restoration, Project Management, AI Development, Research Tools, Responsible AI Use, GRIP, AI News Analysis, Vibe Coding, Coalition for Responsible AI in Genealogy, AI Hiring, Dan Maloney, Handwritten Text Recognition

The Greener Way
The AI governance risk keeping executives up at night: A conversation with Elfreda Jonker

Jul 14, 2025 · 15:49


In this episode of The Greener Way, host Michelle Baltazar discusses the governance risks posed by AI with Elfreda Jonker from Alphinity Investment Management. They explore the impact of AI on cybersecurity and data privacy, as highlighted in Alphinity's latest sustainability report. The conversation covers the importance of a Responsible AI framework, how companies including Netflix and Wesfarmers address these risks, and the need for better investor disclosures by fund managers on how they tackle AI risks.

01:38 Overview of Alphinity's Investment Management
02:54 Highlights from the Sustainability Report
04:20 What did Netflix do?
08:35 AI as a governance risk
11:09 Opportunities and challenges
13:54 Conclusion

Link: https://www.alphinity.com.au/

This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

Data Culture Podcast
How to build an AI strategy with business buy-in – with Emma Verhagen, Ideal Shift AI

Jul 14, 2025 · 37:22


“You don't build an AI strategy in a vacuum – it only works if it's co-created with the business from day one.”

Create with Franz
Shape your AI future

Jul 13, 2025 · 30:36


Are we on the brink of an AI revolution that could reshape our lives in unimaginable ways? Are we worried about losing our jobs and our usual ways of doing things? This is a very real concern that can affect our emotional well-being. This week, we sit down with Kristof Horompoly, Head of AI Risk Management at ValidMind and former Head of Responsible AI for JP Morgan Chase, to tackle the biggest questions surrounding artificial intelligence. Kristof, with his deep expertise in the field, helps us navigate the promises and perils of AI. We explore a profound paradox: what if AI could unlock new realms of time and creativity, and even reignite our humanity, allowing us to focus on what truly matters? But conversely, what happens when we hand the steering wheel over to intelligent machines and they take us somewhere entirely unintended? In a world where machines can think, write, and create with increasing sophistication, we wonder: what is left for us to do? Should we be worried, or is there a path to embrace this future? Kristof provides thoughtful insights on how we can prepare for this evolving landscape, offering a grounded perspective on responsible AI development and what it means for our collective future. Tune in for an essential conversation on understanding, harnessing, and preparing for the age of AI.

Topics covered: AI, artificial intelligence, Kristof Horompoly, ValidMind, JP Morgan Chase, AI risk management, responsible AI, future of AI, AI ethics, human-AI interaction, AI impact, technology, innovation, podcast, digital transformation, AI challenges, AI opportunities

Video link: https://youtu.be/MGELXPkYMUU

Did you enjoy this episode and would like to share some love?

ASCO Daily News
From Clinic to Clinical Trials: Responsible AI Integration in Oncology

Jul 10, 2025 · 24:01


Dr. Paul Hanona and Dr. Arturo Loaiza-Bonilla discuss how to safely and smartly integrate AI into the clinical workflow and tap its potential to improve patient-centered care, drug development, and access to clinical trials.

TRANSCRIPT

Dr. Paul Hanona: Hello, I'm Dr. Paul Hanona, your guest host of the ASCO Daily News Podcast today. I am a medical oncologist as well as a content creator @DoctorDiscover, and I'm delighted to be joined today by Dr. Arturo Loaiza-Bonilla, the chief of hematology and oncology at St. Luke's University Health Network. Dr. Bonilla is also the co-founder and chief medical officer at Massive Bio, an AI-driven platform that matches patients with clinical trials and novel therapies. Dr. Loaiza-Bonilla will share his unique perspective on the potential of artificial intelligence to advance precision oncology, especially through clinical trials and research, and other key advancements in AI that are transforming the oncology field. Our full disclosures are available in the transcript of the episode. Dr. Bonilla, it's great to be speaking with you today. Thanks for being here.

Dr. Arturo Loaiza-Bonilla: Oh, thank you so much, Dr. Hanona. Paul, it's always great to have a conversation. Looking forward to a great one today.

Dr. Paul Hanona: Absolutely. Let's just jump right into it. Let's talk about the way that we see AI being embedded in our clinical workflow as oncologists. What are some practical ways to use AI?

Dr. Arturo Loaiza-Bonilla: To me, responsible AI integration in oncology is focused on one principle: clinical purpose comes first, instead of the algorithm or whatever technology we're going to be using. If we look at the best models in the world, they're really irrelevant unless we really solve a real day-to-day challenge, either when we're talking to patients in the clinic or in the infusion chair or making decision support. Currently, what I'm doing the most is focusing on solutions that are saving us time to be more productive and spend more time with our patients. So, for example, we're using ambient AI for appropriate documentation in real time with our patients. We're leveraging certain tools to assess for potential admission or readmission of patients who have certain conditions as well. And it's all about combining the listening of physicians like ourselves who are end users, those who create those algorithms, data scientists, and patient advocates, and even regulators, before they even write any single line of code. I felt that on my own, you know, entrepreneurial aspects, but I think it's an ethos that we should all follow. And I think that AI shouldn't be just bolted on later. We always have to look at workflows and try to look, for example, at clinical trial matching, which is something I'm very passionate about. We need to make sure that first, it's easier to access for patients, that oncologists like myself can go into the interface and be able to pull the data in real time when you really need it, and you don't get all this alert fatigue. To me, that's the responsible way of doing so. Those are like the opportunities, right? So, the challenge is how we can make this happen in a meaningful way – we're just not reacting to like a black box suggestion or something that we have no idea why it came up to be. So, in terms of success – and I can tell you probably two stories of things that we know we're seeing successful – we all work closely with radiation oncologists, right? So, there are now these tools, for example, of automated contouring in radiation oncology, and some of these solutions were brought up in different meetings, including the last ASCO meeting. But overall, we know that transformer-based segmentation tools (transformer is just the specific architecture of the machine learning algorithm) have been able to dramatically reduce the time colleagues spend allotting targets for radiation oncology. So, comparing the target versus the normal tissue, which sometimes takes many hours, now we can optimize things over 60%, sometimes even in minutes. So, this is not just responsible, but it's also an efficiency win, it's a precision win, and we're using it to adapt even mid-course in response to tumor shrinkage. Another success that I think is relevant is, for example, on the clinical trial matching side. We've been working on that and, you know, I don't want to preach to the choir here, but having the ability for us to structure data in real time using these tools, being able to extract information on biomarkers, and then show that multi-agentic AI is superior to what we call zero-shot or just throwing it into ChatGPT or any other algorithm, but using the same tools but just fine-tuned to the point that we can be very efficient and actually reliable to the level of almost like a research coordinator, is not just theory. Now, it can change lives because we can get patients enrolled in clinical trials and be activated in different places wherever the patient may be. I know it's like a long answer on that, but, you know, as we talk about responsible AI, that's important. And in terms of what keeps me up at night on this: data drift and biases, right? So, imaging protocols, all these things change, labs switch between different vendors, or a patient has issues with new emerging data points. And health systems serve vastly different populations. So, if our models are trained in one context and deployed in another, then the output can be really inaccurate. So, the idea is to take a collaborative approach where we can use federated learning and patient-centricity so we can be much more efficient in developing those models that account for all the populations, and any retraining that is used based on data can be diverse enough that it represents all of us and we can be treated in a very good, appropriate way. So, if a clinician doesn't understand why a recommendation is made, as you probably know, you probably don't trust it, and we shouldn't expect them to. So, I think this is the next wave of the future. We need to make sure that we account for all those things.

Dr. Paul Hanona: Absolutely. And even the part about the clinical trials, I want to dive a little bit more into in a few questions. I just kind of wanted to make a quick comment. Like you said, some of the prevalent things that I see are the ambient scribes. It seems like that's really taken off in the last year, and it seems like it's improving at a pretty dramatic speed as well. I wonder how quickly that'll get adopted by the majority of physicians or practitioners in general throughout the country. And you also mentioned things with AI tools regarding helping regulators move things quicker, even the radiation oncologists, helping them in their workflow with contouring and what else they might have to do. And again, the clinical trials thing will be quite interesting to get into. The first question I had subsequent to that is just more so when you have large datasets. And this pertains to two things: the paper that you published recently regarding different ways to use AI in the space of oncology referred to drug development. The way that we look at how we design drugs, specifically anticancer drugs, is pretty cumbersome. The steps that you have to take to design something, to make sure that one chemical will fit into the right chemical or the structure of the molecule, that takes a lot of time to tinker with. What are your thoughts on AI tools to help accelerate drug development?

Dr. Arturo Loaiza-Bonilla: Yes, that's the Holy Grail and something that I feel we should dedicate as much time and effort as possible because it relies on multimodality. It cannot be solved by just looking at patient histories. It cannot be solved by just looking at the tissue alone. It's combining all these different datasets and being able to understand the microenvironment, the patient condition and prior treatments, and how dynamic changes that we do through interventions and also exposome – the things that happen outside of the patient's own control – can be leveraged to determine what's the best next step in terms of drugs. So, the one we heard about in the news the most is, for example, the Nobel Prize-winning AlphaFold [the Nobel Prize in Chemistry was awarded to Demis Hassabis and John Jumper], an AI system that predicts protein structures, right? So, we solved this very interesting concept of protein folding where, in the past, it would take the history of the known universe, basically – what's called Levinthal's paradox – to be able to just predict on amino acid structure alone, or the sequence alone, the way that three-dimensionally the proteins will fold. So, with that problem being solved and the Nobel Prize being won, the next step is, “Okay, now we know how this protein is there and just by sequence, how can we really understand any new drug that can be used as a candidate and leverage all the data that has been done for many years of testing against a specific protein or a specific gene or knockouts and whatnot?” So, this is the future of oncology and where we're probably seeing a lot of investments on that. The key challenge here is mostly working on the side of not just looking at pathology, but leveraging this digital pathology with whole slide imaging and identifying the microenvironment of that specific tissue. There's a number of efforts currently being done. One isn't just H&E (hematoxylin and eosin) slides alone; with whole slide imaging, now we can use expression profiles, spatial transcriptomics, and whole exome sequencing in the same space and use this transformer technology in a multimodality approach where we know already the slide or the pathology, but can we use that to understand, like, if I knock out this gene, how is the microenvironment going to change, to see if an immunotherapy may work better, right? If we can make a microenvironment more reactive towards a cytotoxic T cell profile, for example. So, that is the way we're really seeing the field moving forward, using multimodality for drug discovery. So, the FDA now seems to be very eager to support those initiatives, so that's of course welcome. And now the key thing is the investment to do this in a meaningful way so we can see those candidates that we're seeing from different companies now being leveraged for rare disease, for things that are going to be almost impossible to collect enough data, and make it efficient by using these algorithms that sometimes work just with multiple masking – basically, what they do is they mask all the features and force the algorithm to find solutions based on the specific inputs or prompts we're doing. So, I'm very excited about that, and I think we're going to be seeing that in the future.

Dr. Paul Hanona: So, essentially, in a nutshell, we're saying we have the cancer, which is maybe a dandelion in a field of grass, and we want to see the grass that's surrounding the dandelion, which is the pathology slides. The problem is, to the human eye, it's almost impossible to look at every single piece of grass that's surrounding the dandelion. And so, with tools like AI, we can greatly accelerate our study of the microenvironment or the grass that's surrounding the dandelion and better tailor therapy, come up with therapy. Otherwise, like you said, to truly generate a drug, this would take years and years. We just don't have the throughput to get to answers like that unless we have something like AI to help us.

Dr. Arturo Loaiza-Bonilla: Correct.

Dr. Paul Hanona: And then, clinical trials. Now, this is an interesting conversation because if you ever look up our national guidelines as oncologists, there's always a mention of, if treatment fails, consider clinical trials. Or in the really aggressive cancers, sometimes you might just start out with clinical trials. You don't even give the standard first-line therapy because of how ineffective it is. There are a few issues with clinical trials that people might not be aware of, chief among them the fact that the majority of patients who should be on clinical trials are never given the chance to be on clinical trials, whether that's because of proximity, right, they might live somewhere that's far from the institution, or for whatever reason they don't qualify for the clinical trial, they don't meet the strict inclusion criteria. But a reason you mentioned early on is that it's simply impossible for someone to be aware of every single clinical trial that's out there. And then even if you are aware of those clinical trials, to actually find the sites and put in the time could take hours. And so, how is AI going to revolutionize that? Because in my mind, it's not that we're inventing a new tool. Clinical trials have always been available. We just can't access them. So, if we have a tool that helps with access, wouldn't that be huge?

Dr. Arturo Loaiza-Bonilla: Correct. And that has been one of my passions. And for those who know me and follow me and have spoken about it with me in different settings, that's something that I think we can solve: this other paradox, which is the clinical trial enrollment paradox, right? We have tens of thousands of clinical trials available, with millions of patients eager to learn about trials, but we don't enroll enough, and many trials fail to complete accrual because of lack of enrollment. It is completely paradoxical, and it's because of that misalignment: patients don't know where to go for trials, and sites don't know what patients they can help because they haven't reached their doors yet. So, the solution has to be patient-centric, right? We have to put the patient at the center of the equation. And that was precisely what we had been discussing during the ASCO meeting. There was an ASCO Education Session where we talked about digital prescreening hubs where, in a patient-centric manner (the same way we use Uber, Instacart, or any other real-time solution you can think of), we can use these real-world data streams from the patient directly, from hospitals, from pathology labs, from genomics companies, to continuously screen patients who can match to the inclusion/exclusion criteria of unique trials. So, when the patient walks into the clinic, the system already knows if there's a trial and alerts the site proactively. The patient can actually also do decentralization. So, there's a number of decentralized clinical trial solutions that are using what I call the “click and mortar” approach, which is basically the patient is checking digitally and then goes to the site to activate. We can also have the click and mortar in the bidirectional way, where the patient is engaged in person and then you give the solution, like the ones that are being offered on things that we're doing at Massive Bio and beyond, which is having the patient access all that information, and then they make decisions and enroll when the time is right. As I mentioned earlier, there is this concept drift where clinical trials open and close, the patient line of therapy changes, new approvals come in and out, and sites may not be available at a given time but may be later. So, having real-time alerts, using tools that are already able to extract data from summarization that we already have in different settings and doing this natural language ingestion, we can move not only past this issue with manual chart review, which is extremely cumbersome, takes forever, and leads to a lot of one-time assessments with very high screen failures, but to a real-time dynamic approach where the patient, as they get closer to that eligibility criteria, gets engaged. And those tools can be built to activate trials, audit trials, and make them better and accessible to patients. And something that we know is, for example, 91%-plus of Americans live close to either a pharmacy or an imaging center. So, imagine that we can potentially activate certain of those trials in those locations. So, there's a number of pharmacies, specialty pharmacies, Walgreens, and sometimes CVS trying to do some of those efforts. So, I think the sky's the limit in terms of us working together. And we've been talking with cooperative groups, they're all interested in those efforts as well, to get patients digitally enabled and then activate the same way we activate the NCTN network of the cooperative groups, which is almost just-in-time. You can activate a trial the patient is eligible for, and we get all these breakthroughs from the NIH and NCI, just activate it in my site within a week or so, as long as we have the understanding of the protocol. So, using clinical trial matching in a digitally enabled way and then activating in that same fashion, but not only for NCTN studies but all the studies that we have available, will be the key of the future through those prescreening hubs. So, I think now we're at this very important time where collaboration is the important part, and having this silo-breaking approach with interoperability, where we can leverage data from any data source and from any electronic medical record and whatnot, is going to be essential for us to move forward, because now we have the tools to do so with our phones, with our interests, and with the multiple clinical trials that are coming into the pipelines.

Dr. Paul Hanona: I just want to point out that the way you described the process involves several variables that practitioners often don't think about. We don't realize the 15 steps that are happening in the background. But just as a clarifier, how much time is it taking now to get one patient enrolled on a clinical trial? Is it on the order of maybe 5 to 10 hours for one patient by the time the manual chart review happens, by the time the matching happens, the calls go out, the sign-up, all this? And how much time do you think a tool that could match those trials quicker and get you enrolled quicker could save? Would it be maybe an hour instead of 15 hours? What's your thought process on that?

Dr. Arturo Loaiza-Bonilla: Yeah, exactly. So one is the matching, the other one is the enrollment, which, as you mentioned, is very important. So, it can take, as you said, probably between 4 days and sometimes 30 days. Sometimes that's how long it takes for all the things to be parsed out in terms of logistics and things that could be done now agentically. So, we can use agents to solve those different steps that may take multiple individuals. We can just do it as a supply chain approach where all those different steps can be done by a single agent in a simultaneous fashion, and then we can get things much faster. With an AI-based solution using these frontier models and multi-agentic AI – and we presented some of this data at ASCO as well – you can do 5,000 patients in an hour, right? So, just enrolling is going to be between an hour and, at maximum enrollment, it could be 7 days for those 5,000 patients if it was done at scale in a multi-level approach where we have all the trials available.

Dr. Paul Hanona: No, definitely a very exciting aspect of our future as oncologists. It's one thing to have really neat, novel mechanisms of treatment, but what good is it if we can't actually get it to people who need it? I'm very much looking forward to the future of that. One of the last questions I want to ask you is about another prevalent way that people use AI: simply looking up questions, right? So, traditionally, the workflow for oncologists is maybe going on national guidelines and looking up the stage of the cancer and seeing what treatments are available and then referencing the papers and looking at who was included, who wasn't included, the side effects to be aware of, and sort of coming up with a decision as to how to treat a cancer patient. But now, just in the last few years, we've had several tools become available that make getting questions easier, make getting answers easier, whether that's something like OpenAI's tools or Perplexity or Doximity or OpenEvidence, or even ASCO's Guidelines Assistant, which draws from their own guidelines as to how to treat different cancers. Do you see these replacing traditional sources? Do you see them saving us a lot more time so that we can be more productive in clinic? What do you think is the role that they're going to play with patient care?

Dr. Arturo Loaiza-Bonilla: Such a relevant question, particularly at this time, because these AI-enabled query tools, they're coming left and right and becoming increasingly common in our daily workflows and things that we're doing. So, traditionally, when we go and we look for national guidelines, we try to understand the context ourselves and then we make treatment decisions accordingly. But that is a lot of process that AI is now helping us solve. So, at face value, it seems like an efficiency win, but in many cases (I personally evaluate platforms as the chief of hem/onc at St. Luke's, and also having led the digital engagement things through Massive Bio and trying to put things together), I can tell you this: not all tools are created equal. In cancer care, each data point can mean the difference between cure and progression, so we cannot really take a lot of shortcuts in this case or have unverified output. So, the tools are helpful, but they have to be grounded in truth, in trusted data sources, and they need to be continuously updated with, like, ASCO and NCCN and others. So, the reason why the ASCO Guidelines Assistant, for instance, works is because it builds on all these recommendations and is assessed by end users like ourselves. So, that kind of verification is critical, right? We're entering a phase where even the source material may be AI-generated. So, the role of human expert validation is actually more important, not less important. You know, generalist LLMs, even when fine-tuned, may not be enough. You can pull a few API calls from PubMed, etc., but what we need now is specialized, context-aware, agentic tools that can interpret multimodal and real-time clinical inputs. So, it is something that we are continuing to check on, and it is very relevant to have entities and bodies like ASCO looking into this so they can help us to be really efficient and really help our patients.

Dr. Paul Hanona: Dr. Bonilla, what do you want to leave the listener with in terms of the future direction of AI, things that we should be cautious about, and things that we should be optimistic about?

Dr. Arturo Loaiza-Bonilla: Looking 5 years ahead, I think there's enormous promise. As you know, I'm an AI enthusiast, but there are a few priorities, 3 of them, I think, that we need to tackle head-on. First is algorithmic equity. So, most AI tools today are trained on data from academic medical centers but not necessarily from community practices or underrepresented populations, particularly when you're looking at radiology, pathology, and whatnot. So, those blind spots need to be filled, and we can eliminate a lot of disparities in cancer care. So, those frameworks to incentivize while keeping the data sharing, using federated models and things that we can optimize, are key. The second one is governance of the lifecycle. So, you know, AI is not really static. So, unlike a drug that is approved and just, you know, always works, AI changes. So, we need to make sure that we have tools that are able to retrain and recall when things degrade or models drift. So, we need to use up-to-date AI for clinical practice, so we are going to be in constant revalidation, and we need to make it really easy to do. And lastly, the human-AI interface. You know, clinicians don't need more noise, and we don't need more black boxes. We need decision support that is clear, that we can interpret, and that is actionable. “Why are you using this? Why did we choose this drug? Why this dose? Why now?” So, all these things are going to help us, and that allows us to trace evidence with a single click. So, I always call back to Moravec's paradox, where, you know, evolution gave us so much capacity in the sensory and dexterity realms, and that's what we use when taking care of patients. We can use AI to really be a force to help us be better clinicians and not to really replace us. So, if we get this right and we decide for transparency, with trust, inclusion, etc., it will never replace any of our work, which is so important; we can actually take care of patients and be personalized, timely, and equitable. So, all those things are what get me excited every single day about these conversations on AI.

Dr. Paul Hanona: All great thoughts, Dr. Bonilla. I'm very excited to see how this field evolves. I'm excited to see how oncologists really come to this field. I think with technology, there's always a bit of a lag in adopting it, but I think if we jump on board and grow with it, we can do amazing things for the field of oncology in general. Thank you for the advancements that you've made in your own career in the field of AI and oncology, and just ultimately with the hopeful outcomes of improving patient care, especially cancer patients.

Dr. Arturo Loaiza-Bonilla: Thank you so much, Dr. Hanona.

Dr. Paul Hanona: Thanks to our listeners for your time today. If you value the insights that you hear on the ASCO Daily News Podcast, please take a moment to rate, review, and subscribe wherever you get your podcasts.

Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement.

More on today's speakers:
Dr. Arturo Loaiza-Bonilla @DrBonillaOnc
Dr. Paul Hanona @DoctorDiscover on YouTube

Follow ASCO on social media:
@ASCO on Twitter
ASCO on Facebook
ASCO on LinkedIn
ASCO on BlueSky

Disclosures:
Paul Hanona: No relationships to disclose.
Dr. Arturo Loaiza-Bonilla:
Leadership: Massive Bio
Stock & Other Ownership Interests: Massive Bio
Consulting or Advisory Role: Massive Bio, Bayer, PSI, BrightInsight, CardinalHealth, Pfizer, AstraZeneca, Medscape
Speakers' Bureau: Guardant Health, Ipsen, AstraZeneca/Daiichi Sankyo, Natera

The Broadband Bunch
Episode 446: Chris Draper with SafetAI on Responsible AI in Broadband and Utilities

Jul 10, 2025 · 47:16


In this episode of The Broadband Bunch, host Brad Hine sits down with Chris Draper, Board Chair at SafetAI, recorded live on day two of the Community Broadband Action Network (CBAN) conference in Ames, Iowa. With broadband providers increasingly overwhelmed by promises of “AI-infused” solutions, Chris brings clarity and expertise to the conversation around artificial intelligence in utilities and broadband. Drawing from his experience in high-risk technology environments—from rocket science to compliance in legal and government tech—Chris discusses the need for intentional, ethical AI implementation. He explores the "art of the possible" while highlighting the real-world risks of AI systems operating faster than human oversight can manage. Listeners will gain insight into how AI can amplify human action when deployed responsibly—especially in rural broadband and utility environments—and why now is the time to establish ethical frameworks before regulatory mandates catch up. Chris also shares his philosophy on data responsibility, automation pitfalls, the importance of transparency, and how SafetAI is helping organizations make informed decisions about AI adoption.

HR on the Offensive
How can you build trust in AI and future-proof your HR strategy?

Jul 10, 2025 · 27:34


AI is transforming the world of work at speed—but with great potential comes a need for clarity, accountability, and smart, human-centred decisions. That's why we developed the TRUSTED AI framework at LACE Partners: a practical, values-led model to help organisations adopt AI in ways that are responsible, secure, and truly useful. In this episode of the HR on the Offensive podcast, Chris Howard sits down with LACE AI experts Martin Colyer and Charlie Frost to unpack the framework and what it means for forward-thinking HR teams wanting to trust in AI.

Introducing the TRUSTED AI Framework

We've also published two companion blogs that go deeper into each element of the model—but here's a quick guide to what TRUSTED stands for:

T — Technology: Make sure your AI tools and infrastructure match the needs of your people and business—not the other way around.
R — Regulation: Stay compliant with evolving laws and ethical expectations. Responsible AI starts with knowing the rules.
U — Usability: If AI isn't intuitive, it won't get used. Prioritise design that blends seamlessly into everyday workflows.
S — Security: Keep systems and data safe. Robust protection is essential when dealing with sensitive HR information.
T — Transparency: Build trust by making it clear how AI decisions are made, and how tools are trained and evaluated.
E — Ethics: AI must be fair and accountable. Bias and unintended consequences must be actively addressed—not left to chance.
D — Data: High-quality, well-governed data is the fuel for reliable AI. Without it, everything else breaks down.
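To make the framework concrete, here is a minimal self-assessment sketch that scores a candidate AI tool against the seven TRUSTED dimensions. The dimensions come from the episode; the 1-5 scale and the pass threshold are invented for illustration and are not LACE Partners' actual methodology.

```python
# Hypothetical TRUSTED self-assessment: score each dimension 1-5 and flag
# weak spots. The scale and threshold are illustrative only.
TRUSTED_DIMENSIONS = [
    "Technology", "Regulation", "Usability", "Security",
    "Transparency", "Ethics", "Data",
]

def assess(scores: dict[str, int], threshold: int = 3) -> None:
    missing = [d for d in TRUSTED_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Unscored dimensions: {missing}")
    for dim in TRUSTED_DIMENSIONS:
        flag = "OK " if scores[dim] >= threshold else "FIX"
        print(f"[{flag}] {dim}: {scores[dim]}/5")

# Example: a tool that is secure but weak on usability and data governance.
assess({
    "Technology": 4, "Regulation": 3, "Usability": 2, "Security": 5,
    "Transparency": 3, "Ethics": 4, "Data": 2,
})
```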

Pondering AI
A Question of Humanity with Pia Lauritzen, PhD

Jul 9, 2025 · 55:48


Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, whether humans can be optimized, and why thinking is required. Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger's eerily precise predictions, the skill of critical thinking, and why it's not really about the questions at all.

Pia Lauritzen, PhD, is a philosopher, author, and tech inventor asking big questions about tech and transformation. As the CEO and Founder of Qvest and a Thinkers50 Radar Member, Pia is on a mission to democratize the power of questions.

Related Resources:
Questions (Book): https://www.press.jhu.edu/books/title/23069/questions
TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions
Question Jam: www.questionjam.com
Forbes Column: forbes.com/sites/pialauritzen
LinkedIn Learning: www.Linkedin.com/learning/pialauritzen
Personal Website: pialauritzen.dk

A transcript of this episode is here.

Analyse Asia with Bernard Leong
The Future of AI Trust: Why Guardrails Actually Accelerate Innovation with Sabastian Niles

Jul 8, 2025 · 52:31


"You can try to develop self-awareness and take a beginner's mind in all things. This includes being open to feedback and truly listening, even when it might be hard to receive. I think that's been something I've really tried to practice. The other area is recognizing that just like a company or country, as humans we have many stakeholders. You may wear many hats in different ways. So as we think of the totality of your life over time, what's your portfolio of passions? How do you choose—as individuals, as society, as organizations, as humans and families with our loved ones and friends—to not just spend your time and resources, but really invest your time, resources, and spirit into areas, people, and contexts that bring you meaning and where you can build a legacy? So it's not so much advice, but more like a north star." - Sabastian V. Niles Fresh out of the studio, Sabastian Niles, President and Chief Legal Officer at Salesforce Global, joins us to explore how trust and responsibility shape the future of enterprise AI. He shares his journey from being a high-tech corporate lawyer and trusted advisor to leading AI governance at a company whose number one value is trust, reflecting on the evolution from automation to agentic AI that can reason, plan, and execute tasks alongside humans. Sabastian explains how Agentforce 3.0 enables agent-to-agent interactions and human-AI collaboration through command centers and robust guardrails. He highlights how organizations are leveraging trusted AI for personalized customer experiences, while Salesforce's Office of Ethical and Humane Use operationalizes trust through transparency, explainability, and auditability. Addressing the black box problem in AI, he emphasizes that guardrails provide confidence to move faster rather than creating barriers. Closing the conversation, Sabastian shares his vision on what great looks like for trusted agentic AI at scale. Episode Highlights [00:00] Quote of the Day by Sabastian Niles: "Portfolio of passions - invest your spirit into areas that bring meaning" [01:02] Introduction: Sabastian Niles, President and Chief Legal Officer of Salesforce Global [02:29] Sabastian's Career Journey [04:50] From Trusted Advisor to SalesForce whose number one value is trust [08:09] Salesforce's 5 core values: Trust, Customer Success, Innovation, Equality, Sustainability [10:25] Defining Agentic AI: humans with AI agents driving stakeholder success together [13:13] Trust paradigm shift: trusted approaches become an accelerant, not obstacle [17:33] Agent interactions: not just human-to-agent, but agent-to-agent-to-agent handoffs [23:35] Enterprise AI requires transparency, explainability, and auditability [28:00] Trust philosophy: "begins long before prompt, continues after output" [34:06] Office of Ethical and Humane Use operationalizes trust values [40:00] Future vision: AI helps us spend time on uniquely human work [45:17] Governance philosophy: Guardrails provide confidence to move faster [48:24] What does great look like for Salesorce for Trust & Responsibility in the Era of AI? [50:16] Closing Profile: Sabastian V. Niles, President & Chief Legal Officer, LinkedIn: https://www.linkedin.com/in/sabastian-v-niles-b0175b2/ Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast. 
Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/

The Family History AI Show
EP26: Gemini and Claude Updates, RootsTech Panel on Responsible AI, Interview with Jessica Taylor of Legacy Tree Genealogists, ChatGPT 5 Announcement

Jul 7, 2025 · 62:12


Co-hosts Mark Thompson and Steve Little discuss recent updates from Google Gemini and Anthropic Claude that are reshaping AI capabilities for genealogists: Google's Gemini 2.5 Pro, with its massive context window, and Claude 4's hybrid reasoning models, which excel at both writing and document analysis. They share insights from the RootsTech panel on responsible AI use in genealogy and introduce the Coalition's five core principles for the responsible use of AI. The episode features an interview with Jessica Taylor, president of Legacy Tree Genealogists, who discusses how her company is thoughtfully experimenting with AI tools. In RapidFire, they preview ChatGPT 5's anticipated summer release, Meta's $14.8 billion acquisition to stay competitive, and Adobe Acrobat AI's new multi-document capabilities.

Timestamps:
In the News:
03:45 Google Gemini 2.5 Pro: Massive Context Windows Transform Document Analysis
15:09 Claude 4 Opus and Sonnet: Hybrid Reasoning Models for Writing and Research
26:30 RootsTech Panel: Coalition for Responsible AI in Genealogy
Interview:
31:28 Jessica Taylor, CEO of Legacy Tree Genealogists, on her cautious approach to AI Adoption
RapidFire:
45:07 ChatGPT 5 Coming Soon: One Model to Rule Them All
51:08 Meta's $14.8 Billion Scale AI Acquisition
56:42 Adobe Acrobat AI Assistant Adds Multi-Document Analysis

Resource Links:
Google I/O Conference Highlights: https://blog.google/technology/ai/google-io-2025-all-our-announcements/
Anthropic Announces Claude 4: https://www.anthropic.com/news/claude-4
Anthropic's new Claude 4 AI models can reason over many steps: https://techcrunch.com/2025/05/22/anthropics-new-claude-4-ai-models-can-reason-over-many-steps/
Coalition for Responsible AI in Genealogy: https://craigen.org/
Jessica M. Taylor: https://www.apgen.org/users/jessica-m-taylor
Legacy Tree Genealogists: https://www.legacytree.com/
RootsTech: https://www.familysearch.org/en/rootstech/
ChatGPT 5 is Coming Soon: https://www.tomsguide.com/ai/chatgpt/chatgpt-5-is-coming-soon-heres-what-we-know
Meta's $14.8 billion Scale AI deal latest test of AI partnerships: https://www.reuters.com/sustainability/boards-policy-regulation/metas-148-billion-scale-ai-deal-latest-test-ai-partnerships-2025-06-13/
A frustrated Zuckerberg makes his biggest AI bet: https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html
Adobe upgrades Acrobat AI chatbot to add multi-document analysis: https://www.androidauthority.com/adobe-ai-assistant-acrobat-3451988/

Tags: Artificial Intelligence, Genealogy, Family History, AI Tools, Google Gemini, Claude AI, OpenAI, ChatGPT, Meta AI, Adobe Acrobat, Responsible AI, Coalition for Responsible AI in Genealogy, RootsTech, AI Ethics, Document Analysis, AI Writing Tools, Hybrid Reasoning Models, Context Windows, Professional Genealogy, Legacy Tree Genealogists, Jessica Taylor, AI Integration, Multi-Document Analysis, AI Acquisitions

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 560: Inside Multi-Agentic AI: 3 Critical Risks and How to Navigate Them

Jul 3, 2025 · 32:49


Multi-agentic AI is rewriting the future of work... but are we racing ahead without checking for warning signs? Microsoft's new agent systems can split up work, make choices, and act on their own. The possibilities? Massive. But it's not without risks, which is why you NEED to listen to Sarah Bird. She's the Chief Product Officer of Responsible AI at Microsoft and is constantly building out safer agentic AI. So what's really at stake when AIs start making decisions together? And how do you actually stay in control? We're pulling back the curtain on the 3 critical risks of multi-agentic AI and unveiling the playbook to navigate them safely.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Responsible AI: Evolution and Challenges
Agentic AI's Ethical Implications
Multi-Agentic AI Responsibility Shift
Microsoft's AI Governance Strategies
Testing Multi-Agentic Risks and Patterns
Agentic AI: Future Workforce Skills
Observability in Multi-Agentic Systems
Three Risk Categories in AI Implementation

Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs

Keywords: Agentic AI, multi agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
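One mitigation pattern the episode keeps returning to is keeping a human in the loop for consequential agent actions. The sketch below shows that idea at its simplest; the risk tiers and action names are hypothetical, and this is not Microsoft's Foundry observability or Entra API, which layer identity, auditing, and monitoring around the same core idea.

```python
# Minimal human-in-the-loop gate for agent actions (hypothetical risk tiers).
from typing import Callable

LOW_RISK = {"summarize_document", "draft_reply"}
HIGH_RISK = {"send_external_email", "execute_payment", "delete_records"}

Approver = Callable[[str, dict], bool]

def run_action(action: str, payload: dict, approve: Approver) -> str:
    if action in LOW_RISK:
        return f"auto-executed {action}"
    if action in HIGH_RISK:
        # Block until a human reviews the proposed action.
        if approve(action, payload):
            return f"executed {action} with human approval"
        return f"blocked {action}"
    return f"refused unknown action {action}"  # default-deny posture

def console_approver(action: str, payload: dict) -> bool:
    """Stand-in for a real review UI or agent command center."""
    return input(f"Approve {action} with {payload}? [y/N] ").strip().lower() == "y"

print(run_action("draft_reply", {"to": "team"}, console_approver))
```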

The TribalHub Podcast
SoCal Regional Spotlight: Cheryl Goodman on Building Responsible AI

The TribalHub Podcast

Play Episode Listen Later Jul 3, 2025 31:50


In this episode of the TribalHub Podcast, we're joined by SoCal Regional speaker Cheryl Goodman, author of How to Win Friends and Influence Robots. Cheryl shares insights from her session, AI Roadmap: Building Blocks and Best Practices, diving into what it really means to develop a thoughtful, sustainable AI strategy. We explore her professional journey, the evolving definition of intelligence in the age of AI, and what “responsible AI” looks like in real-world practice. Purchase How to Win Friends and Influence Robots and connect with Cheryl on LinkedIn.

Process Transformers
Episode 26: Noisy Funnels, Real Signals – AI's True Impact on B2B Sales and Marketing

Process Transformers

Play Episode Listen Later Jul 2, 2025 36:23


In this episode, Matt Heinz demystifies the real-world applications of AI, emphasizing its role in augmenting human effort rather than replacing it. Discover how AI is used today to streamline operations and boost productivity in marketing and sales, and explore what the future holds as AI technologies evolve. Perfect for professionals seeking to integrate AI into their strategies effectively. Let's dive right in!

Microsoft Research Podcast
AI Testing and Evaluation: Learnings from genome editing

Microsoft Research Podcast

Play Episode Listen Later Jun 30, 2025 34:30 Transcription Available


In this episode, Alta Charo, emerita professor of law and bioethics at the University of Wisconsin–Madison, joins Sullivan for a conversation on the evolving landscape of genome editing and its regulatory implications. Drawing on decades of experience in biotechnology policy, Charo emphasizes the importance of distinguishing between hazards and risks and describes the field's approach to regulating applications of technology rather than the technology itself. The discussion also explores opportunities and challenges in biotech's multi-agency oversight model and the role of international coordination. Later, Daniel Kluttz, a partner general manager in Microsoft's Office of Responsible AI, joins Sullivan to discuss how insights from genome editing could inform more nuanced and robust governance frameworks for emerging technologies like AI.

The Big Unlock
Designing AI-Native Healthcare with Innovation, Automation, and Responsible AI.

The Big Unlock

Play Episode Listen Later Jun 30, 2025 27:29


In this episode, Sara Vaezy, Chief Transformation Officer at Providence, discusses Providence's strategic approach to digital transformation, consumer engagement, and responsible AI adoption to improve both patient and caregiver experiences. Sara highlights the importance of delivering personalized, frictionless, and proactive healthcare experiences across digital touchpoints. At Providence, a standout initiative is the use of conversational AI to enable 'message deflection,' which reduces the volume of patient messages sent to physicians by helping patients resolve queries instantly through intelligent chatbots. Sara emphasizes building a digital workforce not just to automate routine tasks, but to rethink and redesign workflows creatively. With foundational investments in cloud infrastructure, unified data systems, and interoperability, Providence is well positioned to scale AI use cases like ambient documentation and care navigation. Sara also shares how Providence has incubated and spun off innovative startups like DexCare and Praia Health to address critical gaps in supply-demand matching and patient personalization. She advocates for ethical AI governance, better observability tools, and designing AI-native healthcare processes that go beyond simply replacing human tasks. Take a listen.

The Financial Executive Podcast
From Forecasts to Finance Bots: EY's Sam Peterson on the AI-Powered Future of Accounting

The Financial Executive Podcast

Play Episode Listen Later Jun 30, 2025 27:38


In this episode of the FEI Podcast, Sam Peterson, EY's Global Innovation Leader for Financial Accounting Advisory Services, explores the transformative impact of generative AI on the finance function. From the evolution of AI since its breakout moment to real-world use cases in FP&A, treasury, and financial reporting, Sam shares insights on how finance leaders can harness AI to drive efficiency, accuracy, and strategic value. The conversation also dives into emerging trends like agentic AI and reasoning models, and how professionals can prepare their teams for the future of work. Special Guest: Sam Peterson.

AI and the Future of Work
AI and Safety: How Responsible Tech Leaders Build Trustworthy Systems (National Safety Month Special)

AI and the Future of Work

Play Episode Listen Later Jun 26, 2025 31:08


In honor of National Safety Month, this special compilation episode of AI and the Future of Work brings together powerful conversations with four thought leaders focused on designing AI systems that protect users, prevent harm, and promote trust.

Featuring past guests:
Silvio Savarese (Executive Vice President and Chief Scientist, Salesforce) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/15548310
Navindra Yadav (Co-founder & CEO, Theom) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/12370356
Eric Siegel (CEO, Gooder AI & Author) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14464391
Ben Kus (CTO, Box) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14789034

✅ What You'll Learn:
What it means to design AI with safety, transparency, and human oversight in mind
How leading enterprises approach responsible AI development at scale
Why data privacy and permissions are critical to safe AI deployment
How to detect and mitigate bias in predictive models
Why responsible AI requires balancing speed with long-term impact
How trust, explainability, and compliance shape the future of enterprise AI

Resources
Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe

Other special compilation episodes
Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)
Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust
World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder

Environment Variables
Environment Variables Year Three Roundup

Environment Variables

Play Episode Listen Later Jun 26, 2025 25:20


It's been three years of Environment Variables! What a landmark year for the Green Software Foundation. From launching behind-the-scenes Backstage episodes, to covering the explosive impact of AI on software emissions, to broadening our audience through beginner-friendly conversations, this retrospective showcases our mission to create a trusted ecosystem for sustainable software. Here's to many more years of EV!

Pondering AI
A Healthier AI Narrative with Michael Strange

Pondering AI

Play Episode Listen Later Jun 25, 2025 59:51


Michael Strange has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation, and prescribes an interdisciplinary approach to making AI well. Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system.

Michael Strange is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health & Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures).

Related Resources
If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/
Beyond 'Our product is trusted!' – A processual approach to trust in AI healthcare (paper): https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539
Michael Strange (website): https://mau.se/en/persons/michael.strange/

A transcript of this episode is here.

Faces of Digital Health
Global AI Regulation: Health AI, Global Network, and Early Warning System

Faces of Digital Health

Play Episode Listen Later Jun 24, 2025 43:02


The integration of Artificial Intelligence (AI) in healthcare presents both opportunities and challenges that demand careful consideration. The complex interplay between innovation, regulation, and ethical governance is a central theme at the heart of global discussions on health AI. This dialogue was brought to the forefront in a recent conversation with Ricardo Baptista Leite, CEO of Health AI - Global Agency for Responsible AI in Healthcare.

Understanding Health AI and Its Mission
Health AI, the global agency for responsible AI in health, is at the forefront of steering the development and adoption of AI solutions through collaborative regulatory mechanisms and global standards.

www.facesofdigitalhealth.com
https://fodh.substack.com/

Green IO
#60 Why Tech companies should not deprioritize future readiness with Rainer Karcher

Green IO

Play Episode Listen Later Jun 24, 2025 49:37


“Climate activist in a suit”. This is how Rainer Karcher describes himself. It is an endless debate between people advocating for the system to change from the outside and those willing to change it from the inside. In this episode, Gaël Duez welcomes a strong advocate of moving the corporate world in the right direction from within. Having spent two decades in companies such as Siemens and Allianz, Rainer Karcher knows the corporate world well, and he now advises it on sustainability. In this Green IO episode, they analyse the current backlash against ESG in the corporate world and what can be done to keep big companies aligned with the Paris Agreement, but also caring about biodiversity and human rights across their supply chains. Many topics were covered, such as:
- Why ESG has nothing to do with “saving the planet”
- 3 tips to tackle the end-of-the-month vs end-of-the-world dilemma
- Embracing a global perspective on ESG and why the current backlash is a Western-world-only issue
- Knowing the price we pay for AI and how to avoid the rebound effect
- The challenge with shadow AI and why training is pivotal
And yes, they talked about whales too, and many more things!

The Wisdom Of... with Simon Bowen
Dr. Catriona Wallace: The Seven Generations Principle and the Future of Human-AI Leadership

The Wisdom Of... with Simon Bowen

Play Episode Listen Later Jun 23, 2025 63:47


In this episode of The Wisdom Of... Show, host Simon Bowen speaks with Dr. Catriona Wallace, a world-renowned AI pioneer, founder of the Responsible Metaverse Alliance, and one of the most influential voices in ethical technology development. With over two decades in AI, long before most people knew it existed, Catriona brings a unique perspective that bridges cutting-edge technology with ancient indigenous wisdom.

As Chair of Boab AI, co-author of Checkmate Humanity, and a former Shark on Shark Tank Australia, Catriona has consistently been at the forefront of responsible technology development. But what makes this conversation extraordinary is her integration of plant medicine practices, indigenous community wisdom, and the "Seven Generations Principle" into the most advanced AI discussions of our time.

Ready to transform your leadership approach? Join Simon's exclusive masterclass on The Models Method. Learn how to articulate your unique value and create scalable impact: https://thesimonbowen.com/masterclass

Episode Breakdown
00:00 Introduction and Catriona's journey from wanting to be a farmer to becoming an AI pioneer
05:45 Why Australia risks becoming an "AI backwater" and the urgent need for responsible AI adoption
12:30 The difference between AI ethics and responsible AI and why most leaders get this wrong
18:15 The "evolutionary tipping point" toward transhumanism and what it means for business
25:20 Plant medicine journeys and their impact on tech leaders' understanding of regenerative economics
32:45 The Seven Generations Principle: How indigenous wisdom guides AI decision-making
38:30 From extraction to regeneration: Why business models must fundamentally transform
44:15 The eight principles of responsible AI and how to implement them in organizations
50:30 "Rapid Transformation" and the five-step process for evolving leadership consciousness
56:45 The intersection of technology love and nature love in shaping the future of humanity

About Dr. Catriona Wallace
Dr. Catriona Wallace has been recognized as a Top Global Power Woman by the Centre of Economic & Leadership Development and as the Most Influential Woman in Business & Entrepreneurship by the Australian Financial Review. In 2023, she was a Shark on the hit TV series Shark Tank Australia.

Catriona is the founder of the Responsible Metaverse Alliance and Chair of Boab AI, Artesian Capital's AI Accelerator and VC fund. She was also the founder of Ethical AI Advisory (now part of the Gradient Institute) and co-author of Checkmate Humanity: The How and Why of Responsible AI.

As founder of AI company Flamingo AI (which exited in 2020), Catriona led only the second woman-led business ever to list on the Australian Stock Exchange. She's an international keynote speaker, one of the world's most cited experts on AI and the Metaverse, and has been recognized by Onalytica as one of the world's top AI speakers.

With a PhD in Organizational Behaviour: Technology Substituting for Human Leaders and an Honorary Doctorate in Business, Dr. Wallace was inducted into the Royal Institution of Australia as one of Australia's most pre-eminent scientists. She is also a human rights activist, mother of five, trained Plant Medicine Guide, and strong advocate of the Psychedelic Renaissance.

Connect with Dr. Catriona Wallace
LinkedIn: Dr. Catriona Wallace
Website: Responsible Metaverse Alliance
Personal Website:

The Road to Accountable AI
Dale Cendali: How Courts (and Maybe Congress!) Will Determine AI's Copyright Fate

The Road to Accountable AI

Play Episode Listen Later Jun 19, 2025 39:33 Transcription Available


Kevin Werbach interviews Dale Cendali, one of the country's leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution.

Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on generative AI training, what counts as infringement in AI outputs, and what constitutes sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution.

Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm's nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute's Copyright Restatement project and sits on the Board of the International Trademark Association.

Transcript
Thomson Reuters Wins Key Fair Use Fight With AI Startup
Dale Cendali - 2024 Law360 MVP
Copyright Office Report on Generative AI Training

The Marketing AI Show
#154: AI Answers: The Future of AI Agents at Work, Building an AI Roadmap, Choosing the Right Tools, & Responsible AI Use

The Marketing AI Show

Play Episode Listen Later Jun 19, 2025 68:16


In this episode of AI Answers, Paul Roetzer and Cathy McPhillips tackle 20 of the most pressing questions from our 48th Intro to AI class—covering everything from building effective AI roadmaps and selecting the right tools to using GPTs, navigating AI ethics, understanding great prompting, and more.

Access the show notes and show links here

Timestamps:
00:00:00 — Intro
00:08:46 — Question #1: How do you define a “human-first” approach to AI?
00:11:33 — Question #2: What uniquely human qualities do you believe we must preserve in an AI-driven world?
00:15:55 — Question #3: Where do we currently stand with AGI—and how close are OpenAI, Anthropic, Google, and Meta to making it real?
00:17:53 — Question #4: If AI becomes smarter, faster, and more accessible to all—how do individuals or companies stand out?
00:23:17 — Question #5: Do you see a future where AI agents can collaborate like human teams?
00:28:40 — Question #6: For those working with sensitive data, when does it make sense to use a local LLM over a cloud-based one?
00:30:50 — Question #7: What's the difference between ChatGPT Projects and Custom GPTs?
00:32:36 — Question #8: If an agency or consultant is managing dozens of GPTs, what are your best tips for organizing workflows, versioning, and staying sane at scale?
00:36:12 — Question #9: How do you personally decide which AI tools to use—and do you see a winner emerging?
00:38:53 — Question #10: What tools or platforms in the agent space are actually ready for production today?
00:43:10 — Question #11: For companies just getting started, how do you recommend they identify the right pain points and build their AI roadmap?
00:45:34 — Question #12: What AI tools do you believe deliver the most value to marketing leaders right now?
00:46:20 — Question #13: How is AI forcing agencies and consultants to rethink their models, especially with rising efficiency and lower costs?
00:51:14 — Question #14: What does great prompting actually look like? And how should employers think about evaluating that skill in job candidates?
00:54:40 — Question #15: As AI reshapes roles, does age or experience become a liability—or can being the most informed person in the room still win out?
00:56:52 — Question #16: What kind of changes should leaders expect in workplace culture as AI adoption grows?
01:00:54 — Question #17: What is ChatGPT really storing in its “memory,” and how persistent is user data across sessions?
01:02:11 — Question #18: How can businesses safely use LLMs while protecting personal or proprietary information?
01:02:55 — Question #19: Why do you think some companies still ban AI tools internally—and what will it take for those policies to shift?
01:04:13 — Question #20: If AI tools are free or low-cost, does that make us the product? Or is there a more optimistic future where creators and users both win?

This week's episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types. For more information on MAICON and to register for this year's conference, visit www.MAICON.ai.

Visit our website
Receive our weekly newsletter
Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook
Looking for content and resources?
Register for a free webinar
Come to our next Marketing AI Conference
Enroll in our AI Academy

AI and the Future of Work
340: Critical Thinking over Code: Tess Posner, AI4ALL CEO, on Raising Responsible AI Leaders

AI and the Future of Work

Play Episode Listen Later Jun 16, 2025 43:06


Tess Posner is the CEO and founding leader of AI4ALL, a nonprofit that works to ensure the next generation of AI leaders is diverse and well-equipped to innovate. Since joining in 2017, she has focused on embedding ethics, responsibility, and real-world impact into AI education. Her work connects students from underrepresented backgrounds to hands-on projects and mentorships that prepare them to lead in tech. Beyond her role at AI4ALL, Tess is a musician whose 2023 EP Alchemy has over 600,000 streams on Spotify. She was named a 2020 Brilliant Woman in AI Ethics Hall of Fame Honoree and holds degrees from St. John's University and Columbia University.

In this conversation, we discuss:
Why AI literacy is becoming essential for everyone, from casual users to future developers
The role of project-based learning in helping students see the real-world impact of AI
What it takes to expand AI access for underrepresented communities
How AI can either reinforce bias or drive real change, depending on who's leading its development
Why schools should stop penalizing AI use and instead teach students to use it with curiosity and responsibility
Tess's views on balancing optimism and caution in the development of AI tools

Resources:
Subscribe to the AI & The Future of Work Newsletter
Connect with Tess on LinkedIn or learn more about AI4ALL
AI fun fact article
On How To Build and Activate a Powerful Network

Past episodes mentioned in this conversation:
[With Tess in 2020] - About what leaders do in a crisis
[With Tess in 2019] - About how to mitigate AI bias and hiring best practices
[With Chris Caren, Turnitin CEO] - On Using AI to Prevent Students from Cheating
[With Marcus "Bellringer" Bell] - On Creating North America's First AI Artist

The Road to Accountable AI
Brenda Leong: Building AI Law Amid Legal Uncertainty

The Road to Accountable AI

Play Episode Listen Later Jun 12, 2025 36:52 Transcription Available


Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

Transcript
AI Audits: Who, When, How...Or Even If?
Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda

Pondering AI
LLMs Are Useful Liars with Andriy Burkov

Pondering AI

Play Episode Listen Later Jun 11, 2025 47:00


Andriy Burkov talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents. Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word).

Andriy Burkov holds a PhD in Artificial Intelligence and is the author of The Hundred-Page Machine Learning and Language Models books. His Artificial Intelligence Newsletter reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer.

Related Resources
The Hundred-Page Language Models Book: https://thelmbook.com/
The Hundred-Page Machine Learning Book: https://themlbook.com/
True Positive Weekly (newsletter): https://aiweekly.substack.com/

A transcript of this episode is here.

The Guy Gordon Show
Fake Images Spreading Misinformation Following LA Protests

The Guy Gordon Show

Play Episode Listen Later Jun 11, 2025 7:55


June 11, 2025 ~ Misleading photographs, videos and text have spread widely on social media as protests against immigrant raids have unfolded in Los Angeles. Anjana Susarla, professor of Responsible AI at Michigan State University, talks with Chris and Lloyd about verifying sources and authenticity, government oversight, and much more.

Smart City
Artificial intelligence: here's where it can hit hard

Smart City

Play Episode Listen Later Jun 11, 2025 5:19


Imagine having to color a map based on the social impacts of artificial intelligence, shading the areas most at risk of being hit. Which would you expect them to be? The cities or the countryside? City centers or the outskirts? Mid-sized cities or large urban centers? A study conducted by Nokia Bell Labs in Cambridge and the Politecnico di Torino examined the likely impact of AI on work in the major urban areas of the United States. The result is a map that offers several points for reflection and flags a risk: that of deepening the country's social divides, hitting in particular mid-sized cities strongly tied to a single type of industry. Daniele Quercia, Director of Responsible AI at Nokia Bell Labs Cambridge, tells us about it.

Green IO
#59 Debriefing Qcon Sustainability track with Erica Pisani

Green IO

Play Episode Listen Later Jun 10, 2025 39:32


How is sustainability covered in mainstream tech conferences? Sure, cybersecurity, DevOps, and anything related to SRE are covered at length. Not to mention AI… But what room is left for the environmental impact of our job? And which trends make their way from specialized Green IT conferences such as Green IO, GreenTech Forum, or eco-compute into generic tech conferences? To talk about it, Gaël Duez sat down in this latest Green IO episode with Erica Pisani, who was the MC of the Performance and Sustainability track at QCon London this year. Together they discussed:
- The inspiring speakers in the track
- Why QCon didn't become AIcon
- How to get C-level buy-in by highlighting the new environmental risk
- The limits of efficiency: finding the fine balance between hardware stress and usage optimization
- Why performance and sustainability are tightly linked in technology
- Why assessing Edge computing's positive and negative impact is tricky
And much more!

❤️ Subscribe, follow, like, ... stay connected the way you want to never miss an episode, twice a month, on Tuesday!

Disruption Now
He Was 15, Broke, and Brilliant. Now He Coaches the Fortune 500

Disruption Now

Play Episode Listen Later Jun 4, 2025 36:43


Eric Brown Jr. is the founder of ELVTE Coaching and Consulting and a Generative AI innovation lead at Microsoft. In this powerful conversation with Rob Richardson, he unpacks how early adversity became fuel for legacy. From mentoring underserved youth to helping enterprise teams align tech with purpose, Eric proves that impact isn't just about innovation — it's about elevation.

Disruption Now Episode 180

Inside This Episode:
-Life Hacker Mindset: How reframing pain unlocks potential
-AI with Empathy: Why tech that doesn't center people fails
-The Power of Context: Making technology relatable and actionable for all

Connect with Eric Brown Jr.:
LinkedIn: www.linkedin.com/in/ericbrownjr
Forbes Council: councils.forbes.com/profile/Eric-Brown-Jr-Founder-%7C-Chief-Transformation-Officer-ELVTE-Coaching-and-Consulting/440ec31a-0e0d-4650-ae7c-a2b401148572
Thought Leadership: linkedin.com/pulse/empowering-dreams-lessons-learned-from-any-fellow-eric-brown-jr

Disruption Now
Apply to be a guest: form.typeform.com/to/Ir6Agmzr
Watch more episodes: podcast.disruptionnow.com

Disruption Now: Building a fair share for the Culture and Media. Join us and disrupt.
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
LinkedIn: https://www.linkedin.com/in/robrichardsonjr/
Instagram: https://www.instagram.com/robforohio/
Website: https://podcast.disruptionnow.com/

The 10 Minute Teacher Podcast
Stop, Think, Question: A Teacher's Guide to Responsible AI with Audrey Watters

The 10 Minute Teacher Podcast

Play Episode Listen Later Jun 3, 2025 12:54


with Audrey Watters | Episode 903 | Tech Tool Tuesday

Are we racing toward an AI future without asking the right questions? Author and ed-tech critic Audrey Watters joins me to show teachers how to hit pause, get thoughtful, and keep classroom relationships at the center.

Sponsored by Rise Vision
Did you know the same solution that powers my AI classroom also drives campus-wide emergency alerts and digital signage? See how Rise Vision can save your school thousands: RiseVision.com/10MinuteTeacher

Highlights Include
Why "human first" still beats the newest AI tool: Audrey explains how relationships drive real learning.
Personalized learning myths busted: How algorithmic "solutions" can isolate students.
Practical guardrails for AI: Three reflection questions every teacher should ask before hitting "assign."

AI Uncovered
Karla Childers - Ethical Guardrails for AI in Pharma

AI Uncovered

Play Episode Listen Later Jun 3, 2025 29:27


We welcome Karla Childers to AI Uncovered. Karla is a long-standing leader in bioethics and data transparency in the pharmaceutical industry. As part of the Office of the Chief Medical Officer at Johnson & Johnson, she brings deep expertise in navigating the ethical implications of emerging technologies, especially artificial intelligence, in medicine and drug development.In this episode, Tim and Karla explore the intersection of AI, bioethics and patient-centered development. They discuss how existing ethical frameworks are being challenged by the rise of generative AI and why maintaining human oversight is critical—especially in high-context areas like clinical trial design, consent and medical communications. Karla also shares her views on the future of data privacy, the complexity of patient agency and how to avoid losing trust in the race for efficiency.Karla is a strong advocate for using innovation responsibly. From her work with internal bioethics committees to her perspective on evolving regulatory expectations, she offers bold insights into how the industry can modernize without compromising ethics or equity.Welcome to AI Uncovered, a podcast for technology enthusiasts that explores the intersection of generative AI, machine learning, and innovation across regulated industries. With the AI software market projected to reach $14 trillion by 2030, each episode features compelling conversations with an innovator exploring the impact of generative AI, LLMs, and other rapidly evolving technologies across their organization. Hosted by Executive VP of Product at Yseop, Tim Martin leads a global team and uses his expertise to manage the wonderful world of product.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 536: Agentic AI - The risks and how to tackle them responsibly

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later May 30, 2025 31:45


We only talk about the upside of agentic AI. But why don't we talk about the risks? As AI agents grow exponentially more capable, so too does the likelihood of something going wrong. So how can we take advantage of agentic AI while also addressing the risks head-on? Join us to learn from a global leader on Responsible AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Responsible AI: Evolution and Challenges
Agentic AI's Ethical Implications
Multi-Agentic AI Responsibility Shift
Microsoft's AI Governance Strategies
Testing Multi-Agentic Risks and Patterns
Agentic AI: Future Workforce Skills
Observability in Multi-Agentic Systems
Three Risk Categories in AI Implementation

Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs

Keywords: Agentic AI, multi agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner

The Collective Voice of Health IT, A WEDI Podcast
Episode 204 - Artificial Intelligence's Current and Future State in Health Care

The Collective Voice of Health IT, A WEDI Podcast

Play Episode Listen Later May 30, 2025 56:43


From WEDI's March 27th virtual spotlight on Artificial Intelligence, former WEDI Board Chair Ed Hafner chats with an impressive group of health care professionals about the benefits, challenges, and future of artificial intelligence in health care.

The panel:
Robert Laumeyer, CTO, Availity
Nick Marzotto, Product Informaticist, Epic
Andy Chu, SVP of Product and Technology Incubation, Providence
Peter Clardy, MD, Senior Staff Clinical Specialist, Google Health
Merage Ghane, PhD, Director of Responsible AI in Health, Coalition for Health AI (CHAI)

The Road to Accountable AI
Uthman Ali: Responsible AI in a Safety Culture

The Road to Accountable AI

Play Episode Listen Later May 29, 2025 32:44 Transcription Available


Host Kevin Werbach interviews Uthman Ali, Global Responsible AI Officer at BP, to delve into the complexities of implementing responsible AI practices within a global energy company. Ali emphasizes how the industry's safety culture shapes BP's willingness to engage in AI governance. He discusses the necessity of embedding ethical AI principles across all levels of the organization, with tailored training programs for various employee roles—from casual AI users to data scientists—to ensure a comprehensive understanding of AI's ethical implications. He also highlights the importance of proactive governance, advocating for the development of ethical policies and procedures that address emerging technologies such as robotics and wearables. Ali's approach underscores the balance between innovation and ethical responsibility, aiming to foster an environment where AI advancements align with societal values and regulatory standards.

Uthman Ali is BP's first Global Responsible AI Officer, and has been instrumental in establishing the company's Digital Ethics Center of Excellence. He advises prominent organizations such as the World Economic Forum and the British Standards Institute on AI governance and ethics. Additionally, Ali contributes to research and policy discussions as an advisor to Oxford University's Oxethica spinout and various AI safety institutes.

Transcript
Prioritizing People and Planet as the Metrics for Responsible AI (IEEE Standards Association)
Robocops and Superhumans: Dilemmas of Frontier Technology (2024 podcast interview)

Pondering AI
Reframing Responsible AI with Ravit Dotan

Pondering AI

Play Episode Listen Later May 28, 2025 59:55


Ravit Dotan, PhD asserts that beneficial AI adoption requires clarity of purpose, good judgment, ethical leadership, and making responsibility integral to innovation. Ravit and Kimberly discuss the philosophy of science; why all algorithms incorporate values; how technical judgements centralize power; not exempting AI from established norms; when lists of risks lead us astray; wasting water, eating meat, and using AI responsibly; corporate ethics washing; patterns of ethical decoupling; reframing the relationship between responsibility and innovation; measuring what matters; and the next phase of ethical innovation in practice.

Ravit Dotan, PhD is an AI ethics researcher and governance advisor on a mission to enable everyone to adopt AI the right way. The Founder and CEO of TechBetter, Ravit holds a PhD in Philosophy from UC Berkeley and is a sought-after advisor on the topic of responsible innovation.

Related Resources
The AI Treasure Chest (Substack): https://techbetter.substack.com/
The Values Embedded in Machine Learning Research (Paper): https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083

A transcript of this episode is here.

NN/g UX Podcast
50. Responsible AI Use for Research Analysis (feat. Alexander Knoll, Co-Founder of Condens.io)

NN/g UX Podcast

Play Episode Listen Later May 23, 2025 41:15


AI can do more than it's ever done… but there's a lot of unfounded hype, especially when it comes to user research. When should you delegate tasks to AI? And when should you insist on keeping the human in the loop? In this episode, Therese Fessenden sits down with Alexander Knoll, co-founder of Condens, to discuss the strengths and limitations of AI tools for research, and the evolving role of the user researcher.

About Alexander & Condens: LinkedIn | Condens.io
Research Repository Guide: https://condens.io/guides/research-repository-guide/
On-Demand Recording of Condens Event: Making an Impact with User Research: How to Drive Change and Get Noticed
Alex's Article with NN/g: Common-Sense AI Integration: Lessons from the Cofounder of Condens

Other Related NN/g Articles & Courses:
Free Articles about AI & UX
Course: UX Basic Training
Course: Accelerating Research with AI
Course: AI for Design Workflows
Course: Designing AI Experiences

AI and the Future of Work
Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)

AI and the Future of Work

Play Episode Listen Later May 22, 2025 30:03


To coincide with International Human Resources Day (May 20th), this special compilation episode of AI and the Future of Work explores the promises and pitfalls of AI in hiring.

HR leaders are under pressure to innovate—but how can we automate hiring ethically, avoid bias, and stay compliant with evolving laws and expectations?

In this episode, we revisit key moments from past interviews with four top voices shaping the future of ethical workforce automation:

HLTH Matters
AI @ ViVE: Beyond Hype: Responsible AI Leadership in Healthcare

HLTH Matters

Play Episode Listen Later May 19, 2025 23:17


In this episode of The Beat, host Sandy Vance sits down with Dr. Heather Bassett, Chief Medical Officer at Xsolis and creator of the proprietary Care Level Score. Together, they explore the future of AI in healthcare and how real-world AI applications are already driving improved operational efficiency, reducing clinician burnout, and enhancing payer-provider collaboration. Dr. Bassett also shares insights from her recent involvement with CHAI.org, emphasizing why healthcare leaders must take initiative in developing responsible AI—without waiting for government mandates. Tune in to hear how Xsolis is helping health systems move from spreadsheets to smart automation, making data more actionable, and building a more transparent, interoperable ecosystem.

In this episode, they talk about:
How Xsolis is working toward creating a frictionless healthcare system
How Xsolis reduces manual tasks, decreases clinician burnout, and boosts productivity
Xsolis' use of data aggregation to minimize redundancy in the healthcare industry
Moving healthcare teams off spreadsheets and into AI-driven solutions
How client collaboration helps maximize the value Xsolis delivers
CMS recognition of the need to eliminate unnecessary steps to accelerate patient care
The role of interoperability in standardizing data exchange and enhancing context
Why transparency is critical when vendors integrate artificial intelligence
Evaluating whether vendors have the people and processes to support AI change management

A Little About Heather:
Dr. Heather Bassett is the Chief Medical Officer at Xsolis, an AI-driven health technology company transforming healthcare through a human-centered approach. With over 20 years of experience in clinical care and health IT, she leads Xsolis' medical and data science teams and co-developed the company's signature innovation—the Care Level Score, which blends clinical expertise with AI and machine learning to assess patient status in real time.

A board-certified internist and former hospitalist, Dr. Bassett oversees Xsolis' award-winning physician advisor program, denials management, and AI model development. She's a frequent speaker at national healthcare conferences, including ACMA and HFMA, and has been featured in Becker's, MedCity News, and Medical Economics. Recognized as CMO of the Year by the Nashville Business Journal and named one of Becker's Women in Health IT to Know (2023, 2024), Dr. Bassett is also a member of CHAI.org, advocating for responsible AI in healthcare.

Numbers and Narratives
Responsible AI: Balancing Efficiency and Ethics in Business

Numbers and Narratives

Play Episode Listen Later May 19, 2025 26:39


In this episode of Numbers and Narratives, Sean and Ibby dive deep into the world of responsible AI with guest Sarah Payne, AI Strategy and Program Lead at Coinbase. Sarah shares her expertise on implementing AI across workflows while prioritizing ethics and user trust. The conversation explores the challenges of developing AI systems that are not just efficient, but also ethically sound and safe for users.Sarah discusses the importance of having humans in the loop during AI development, gradually reducing human involvement as systems are validated over time. The hosts and guest also delve into the complexities of designing guardrails for AI, especially when dealing with non-declarative systems like large language models. Sarah provides valuable insights on using multiple models to cross-check responses and flag potential issues, as well as leveraging real customer interactions to test and improve AI workflows. Tune in to gain a deeper understanding of responsible AI practices and the challenges facing companies as they navigate this rapidly evolving landscape.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 524: Agentic AI Done Right - How to avoid missing out or messing up.

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later May 13, 2025 18:33


Agentic AI is equally as daunting as it is dynamic. So… how do you not screw it up? After all, the more robust and complex agentic AI becomes, the more room there is for error. Luckily, we've got Dr. Maryam Ashoori to guide our agentic ways. Maryam is the Senior Director of Product Management of watsonx at IBM. She joined us at IBM Think 2025 to break down agentic AI done right.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Agentic AI Benefits for Enterprises
watsonx's New Features & Announcements
AI-Powered Enterprise Solutions at IBM
Responsible Implementation of Agentic AI
LLMs in Enterprise Cost Optimization
Deployment and Scalability Enhancements
AI's Impact on Developer Productivity
Problem-Solving with Agentic AI

Timestamps:
00:00 AI Agents: A Business Imperative
06:14 "Optimizing Enterprise Agent Strategy"
09:15 Enterprise Leaders' AI Mindset Shift
09:58 Focus on Problem-Solving with Technology
13:34 "Boost Business with LLMs"
16:48 "Understanding and Managing AI Risks"

Keywords: Agentic AI, AI agents, Agent lifecycle, LLMs taking actions, watsonx.ai, Product management, IBM Think conference, Business leaders, Enterprise productivity, watsonx platform, Custom AI solutions, Environmental Intelligence Suite, Granite Code models, AI-powered code assistant, Customer challenges, Responsible AI implementation, Transparency and traceability, Observability, Optimization, Larger compute, Cost performance optimization, Chain of thought reasoning, Inference time scaling, Deployment service, Scalability of enterprise, Access control, Security requirements, Non-technical users, AI-assisted coding, Developer time-saving, Function calling, Tool calling, Enterprise data integration, Solving enterprise problems, Responsible implementation, Human in the loop, Automation, IBM savings, Risk assessment, Empowering workforce.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Risk Management: Brick by Brick
Responsible AI Isn't Optional: New Strategies for Risk Management Success with Rohan Sen

Risk Management: Brick by Brick

Play Episode Listen Later May 7, 2025 25:06


In this episode of Risk Management Brick by Brick, The Power of AI in Risk - Episode 7, host Jason Reichl sits down with Rohan Sen, Principal in Data Risk and Privacy Practice at PwC, to explore the critical intersection of AI innovation and risk management. They dive into how organizations can implement responsible AI practices while maintaining technological progress.

Digital HR Leaders with David Green
How Human Intelligence Can Guide Responsible AI in the Workplace (an Interview with Kevin Heinzelman)

Digital HR Leaders with David Green

Play Episode Listen Later May 6, 2025 52:37


With AI tools becoming more common across HR and people functions, HR leaders across the globe are asking the same question: how do we use AI without compromising on empathy, ethics, and culture? So, in this special bonus episode of the Digital HR Leaders podcast, host David Green welcomes Kevin Heinzelman, SVP of Product at Workhuman, to discuss this very critical topic.

David and Kevin share a core belief: that technology should support people, not replace them, and in this conversation, they explore what that means in practice.

Tune in as they discuss:
Why now is a critical moment for HR to lead with a human-first mindset
How HR can retain control and oversight over AI-driven processes
The unique value of human intelligence, and how it complements AI
How recognition can support skills-based transformation and company culture during times of radical transformation
What ethical, responsible AI looks like in day-to-day HR practice
How to avoid common pitfalls like bias and data misuse
Practical ways to integrate AI without losing sight of culture and care

Whether you're early in your AI journey or looking to scale responsibly, this episode, sponsored by Workhuman, offers clear, grounded insight to help HR lead the way - with purpose and with people in mind.

Workhuman is on a mission to help organisations build more human-centred workplaces through the power of recognition, connection, and Human Intelligence. By combining AI with the rich data from their #1 rated employee recognition platform, Workhuman delivers the insights HR leaders need to drive engagement, culture, and meaningful change at scale. To learn more, visit Workhuman.com and discover how Human Intelligence can help your organisation lead with purpose.

Hosted on Acast. See acast.com/privacy for more information.