Podcasts about AI bias

  • 191 podcasts
  • 228 episodes
  • 41m average duration
  • 1 weekly episode
  • Latest episode: May 20, 2025



Best podcasts about AI bias

Latest podcast episodes about AI bias

Bringing the Human back to Human Resources
228. Navigating HR Policies in a Changing Political Landscape

May 20, 2025 · 31:48


Go to https://cozyearth.com and use code HUMANHR for 40% off their best-selling sheets, pajamas, towels, and more. And if you get a post-purchase survey, let them know you heard about Cozy Earth right here.

In this episode of the Bringing the Human Back to Human Resources podcast, Traci Chernoff and Bryan Driscoll discuss recent updates in HR policies, focusing on independent contractor enforcement, employee classification, and the implications of automation and AI on the workforce. They explore the political fluctuations affecting HR regulations, the importance of understanding state laws, and the risks associated with misclassification. The conversation also highlights a recent SHRM data brief on automation displacement, emphasizing the need for upskilling and the potential biases in AI hiring practices. The episode concludes with a discussion of employers' legal responsibilities in accommodating candidates and the necessity of regular audits of hiring tools.

Chapters:
00:00 Introduction to Policy Pulse and Independent Contractor Enforcement
03:01 Understanding Employee Classification and Political Whiplash
05:49 Private Lawsuits and State Laws on Misclassification
09:13 Recommendations for HR on Classification and Compliance
12:07 SHRM Data Brief on Automation Displacement
14:55 Implications of Automation on Workforce and Upskilling
17:56 AI Bias and Discrimination in Hiring Practices
20:54 Legal Responsibilities and Accommodations in Hiring
24:00 Conclusion and Future Insights on Employment Law

Don't forget to rate, review, and subscribe! Plus, leave a comment if you're catching this episode on Spotify or YouTube. We hope you enjoyed this month's Policy Pulse episode. If you found our discussion insightful, please take a moment to rate our podcast. Your feedback helps us grow and reach more listeners who are passionate about these topics. You can also leave a review and tell us what you loved or what you'd like to hear more of - we're all ears!

Connect with Traci: https://linktr.ee/HRTraci
Connect with Bryan:
Website: https://bryanjdriscoll.com/
LinkedIn: https://www.linkedin.com/in/bryanjohndriscoll/

Disclaimer: Thoughts, opinions, and statements made on this podcast are not a reflection of the thoughts, opinions, and statements of the company by which Traci Chernoff is actively employed. Please note that this episode may contain paid endorsements and advertisements for products or services. Individuals on the show may have a direct or indirect financial interest in products or services referred to in this episode.

CXO.fm | Transformation Leader's Podcast
AI Bias: A Hidden Business Risk

May 14, 2025 · 13:57 · Transcription available


Is your AI helping—or quietly hurting—your business? In this episode, we uncover how hidden biases in large language models can quietly erode trust, derail decision-making, and expose companies to legal and reputational risk. You'll learn actionable strategies to detect, mitigate, and govern AI bias across high-stakes domains like hiring, finance, and healthcare. Perfect for corporate leaders and consultants navigating AI transformation, this episode offers practical insights for building ethical, accountable, and high-performing AI systems. 

The College Essay Guy Podcast: A Practical Guide to College Admissions
606: Navigating College Applications with AI (Part 2): What Colleges Say + My Advice to Students and Counselors

May 13, 2025 · 16:07


Welcome to Part 2 of 2 of this mini-series on AI in college admission! In our last episode, I interviewed Jen Rubin from foundry10 on how students and counselors are using AI in the college admission process. In today's episode I'll get into what colleges have to say, and what I would advise as a result. I'll get into:

My biggest concern with AI… in general (spoiler: it's much bigger than college applications)
My concerns related to AI use in the college application process
Common App guidelines + what colleges have to say around AI use
What I believe students might miss out on if they use AI to write their essays for them
My advice to students and counselors

We hope you enjoy!

Play-by-Play:
2:12 – What is Ethan's biggest concern with AI?
3:29 – Common App guidelines
4:09 – What do colleges have to say about AI use in the application process?
7:05 – What does Ethan believe students might miss out on if they use AI to write their essays for them?
12:00 – Can ChatGPT and AI be useful in certain contexts?
13:34 – AI use and environmental impacts
14:26 – Closing thoughts

Resources:
CEG Podcast Episode 605 - Navigating College Applications with AI (Part 1): How High School Teachers and Students Use Tools Like ChatGPT
Blog version of this episode – Navigating College Applications with AI: What Colleges Say + CEG's Advice to Students and Counselors
CEG's Thoughts on AI and College Application Essays
Duncan Sabian – Article on AI Safety
AI 2027
Common App Affirmation Statement
College statements on AI usage: Princeton University, CalTech, Brown University, University of California (UC) system, Gonzaga University, Southern Methodist University (graduate), University of Melbourne (outside the US)
As Use of A.I. Soars, So Does the Energy and Water It Requires
Jeff Neill's Tech-Neill-ogy
College Essay Guy's Personal Statement Resources
College Essay Guy's College Application Hub

The College Essay Guy Podcast: A Practical Guide to College Admissions
605: Navigating College Applications with AI: How High School Teachers and Students Use Tools Like ChatGPT

Apr 29, 2025 · 49:56


In today's episode, Ethan is joined by Jennifer Rubin, Senior Researcher at foundry10 and Principal Investigator of the Digital Technologies and Education Lab. Jen is a first-generation college graduate and, like Ethan, she didn't have a ton of support navigating the college application and admission process. When ChatGPT was released, she wondered: How were students actually using these tools for the college essay and application process? And how can we make sure AI becomes a tool for equity, not another advantage for students who already have access?

In this conversation, Ethan and Jen get into:

Some of the most interesting takeaways from her research
Why higher-income students were more likely to use AI in their applications than lower-income students
How educators can increase their students' AI literacy
How students can use AI without losing their voice
Resources and tools she recommends for students and counselors navigating this new technology
And lots more.

Jennifer Rubin is a Senior Researcher at foundry10 who investigates how technology shapes youth development, learning, and social connections. She earned a B.A. in Psychology from UC Berkeley before completing a dual Ph.D. in Psychology and Gender and Women's Studies at the University of Michigan. Currently, she is the Principal Investigator of the Digital Technologies and Education Lab, where she leads research on how social media, generative AI, and digital learning environments impact youth development. Her work explores how young people navigate digital spaces, strategies for educators to meaningfully integrate emerging technologies, and the essential skills needed to foster critical engagement with social media and AI tools.

Hope you enjoy!

Play-by-Play:
2:26 – Jen shares her research background and what she's been working on
4:06 – What have been the most interesting takeaways from Jen's research?
5:58 – Why were higher-income students more likely to use AI in their applications than lower-income students?
9:16 – What are some practical ways for educators to increase their students' AI literacy?
13:02 – How can students use AI without losing their voice?
18:47 – What resources or tools does Jen recommend for students and counselors navigating this new technology?
22:52 – Does Jen see ethical gray areas with teacher use of AI?
29:02 – How are colleges approaching AI use in applications?
32:16 – Do AI detectors actually work?
37:16 – How does Jen use AI in her own work and writing?
43:11 – What does Jen see for the future of AI?
44:32 – What advice does Jen have for students?
46:10 – Advice for caregivers?
46:58 – Advice for educators and counselors?
48:50 – Closing thoughts

Resources:
Navigating College Applications with AI | foundry10
CalTech Admissions AI Policy
Princeton Admissions AI Policy
Common Sense Media AI Literacy Initiatives
Digital Promise: AI in Education Resources
Understanding Generative AI: Caregiver, Teacher, and Student Experiences
College Essay Guy's Personal Statement Resources
College Essay Guy's College Application Hub

Change Happens
Confronting Bias in AI with Tracey Spicer

Apr 22, 2025 · 29:57


Today, we're stepping into one of the most urgent conversations in tech right now: bias in artificial intelligence.

Tracey Spicer AM is a Walkley Award-winning journalist, author, and longtime activist for gender equity. In this episode, she unpacks the unseen biases coded into the technologies we use every day—and what happens when we leave them unchecked. Drawing on years of research for her latest book Man-Made, Tracey explores the ethical challenges and opportunities in AI development, and why we all have a role to play in shaping more equitable outcomes.

In this episode, Tracey shares:
How gender, race, age, and ability bias are embedded into AI systems
The real-world impacts of biased tech—from hiring software to image generators
Why 'human in the loop' systems are critical for ethical AI
How organisations can audit their data, clean up algorithms, and lead responsibly

Host: Jenelle McMaster, Deputy CEO and People & Culture Leader at EY
Guest: Tracey Spicer AM, journalist, author, and AI ethics advocate

I Thought You'd Like To Know This, Too
ITEST Webinar AI and Sin: Medieval Robots and the Theology of Technology (April 5, 2025)

Apr 6, 2025 · 117:52


In this webinar, entitled AI and Sin: Medieval Robots and the Theology of Technology and hosted by the Institute for Theological Encounter with Science and Technology, Dr. Chris Reilly and Dr. Jordan Joseph Wales offer their insights into the promises and challenges of artificial intelligence. (April 5, 2025)

Christopher M. Reilly, ThD
AI and Sin: How Today's Technology Motivates Evil

Christopher M. Reilly writes and speaks on a Christian response to advanced technology, and he has written numerous articles on bioethics and on moral theology and philosophy. Chris holds a doctor of theology degree and three master's degrees in philosophy, theology, and public affairs. He resides in the greater Washington, DC region. His website is ChristopherMReilly.com. Chris is Associate Director of ITEST.

Abstract: Artificial intelligence technology (AI) motivates persons' engagement in sin. With this startling argument, drawn from Catholic theology and technological insight, Christopher M. Reilly, Th.D. takes on both critics and proponents of AI who see it as an essentially neutral tool that can be used with good or bad intentions. More specifically, Reilly demonstrates that AI strongly encourages the vice of instrumental rationality, which in turn leads the developers, producers, and users of AI and its machines toward acedia, one of the "seven deadly sins."

Jordan Joseph Wales, PhD
Response: Medieval Robots and the Theology of Technology

Jordan Wales is the Kuczmarski Professor of Theology at Hillsdale College. With degrees in engineering, cognitive science, and theology, his scholarship focuses on early Christianity as well as contemporary theological questions relating to artificial intelligence. He is a member of the AI Research Group for the Holy See's Centre for Digital Culture, under the Dicastery for Culture and Education; a fellow of the Centre for Humanity and the Common Good; and a fellow of the International Society for Science and Religion.

Abstract: Medieval theologians and storytellers grappled with humankind's tendency to confine our aims to what a technology can represent rather than to situate that technology within the wider horizon of the human journey to God. Responding to Dr. Reilly, I draw on legends of robots that illustrate a theological approach to AI as a perilous but also potent instrument mediating between human volition and our natural and social environment. In their diverging outcomes, these texts suggest paths toward a more humane positioning of AI within our lives.

Webinar resources

Chris Reilly's resources:
Radio interview on Relevant Radio, Trending with Timmerie: https://www.spreaker.com/episode/ai-reason-acedia–64575876
Book – AI and Sin: How Today's Technologies Motivate Evil: https://enroutebooksandmedia.com/aiandsin/
Chris Reilly's website: https://christophermreilly.com/
Chapter – "Seven Christian Principles for Thriving with Artificial Intelligence": https://static1.squarespace.com/static/5e3ada1a6a2e8d6a131d1dcd/t/66bb63fdcdba62679b200277/1723556861413/Artificial+Intelligence-1.pdf

Jordan Wales' resources:
"What Will a Future with Androids among Us Look Like": https://churchlifejournal.nd.edu/articles/what-will-a-future-with-androids-among-us-look-like/
"The Image and the Idol: A Theological Reflection on AI Bias": https://churchlifejournal.nd.edu/articles/the-image-and-the-idol-a-theological-reflection-on-ai-bias/
"Encountering Artificial Intelligence: Ethical and Anthropological Investigations": https://jmt.scholasticahq.com/article/91230-encountering-artificial-intelligence-ethical-and-anthropological-investigations

ReDesigned Podcast
The New Creative Reality: GPT-4o, Runway Gen-4 & more

Apr 3, 2025 · 37:38


Is content creation changed forever? This week on the ReDesigned Podcast, we dive deep into the bombshell arrival of OpenAI's GPT-4o (Omni)! We explore its mind-blowing capabilities – real-time voice interaction, emotion detection, vision analysis – and discuss what this paradigm shift means for creators, designers, coders, and the future of work. Is it a super-powered assistant or a job-killer?

Enjoyed the episode? Hit follow or subscribe now.

We also cover:
Runway's Gen-4: The AI video generation race heats up with stunning new realism and control.
Instagram Reposts: Is the platform testing a feature that could shake up your feed?
Blockbuster's Return: A nostalgic pop-up hits London - details inside!
AI-Generated Ads: A look at a KFC ad made entirely with AI tools.
AI Concerns: We discuss deepfakes, bias (with examples!), copyright, and the ethical challenges.
Scrolling Deep: Featuring an incredible beatbox champion, a bizarre anti-phone-slap invention, and a student's very public thank you to ChatGPT for his thesis!

Plus, test your ears with a new Sound Bite Challenge! Join the conversation – what are your thoughts on GPT-4o and the rapid pace of AI? Let us know in the comments!

#AI #GPT4o #OpenAI #TechPodcast #ContentCreation #FutureofTech #RunwayML #Instagram #Blockbuster #DigitalCulture

Timestamps:
00:00:00 - Intro: AI-Generated Ad Preview & Teaser
00:01:06 - Welcome to the ReDesigned Podcast
00:01:43 - Last Week's Sound Bite Challenge Answer (Donkey Kong Country)
00:02:24 - News: Instagram Testing Repost Feature
00:04:39 - News: Blockbuster Video Nostalgia Pop-Up in London (KitKat Collab)
00:07:15 - News: Runway Introduces Gen-4 AI Video Model
00:11:33 - Main Topic: GPT-4o - The Day Everything Changed
00:12:19 - GPT-4o Demo Highlights & Capabilities (Omni explained)
00:13:31 - What Makes GPT-4o Different? (Native Multimodal Processing)
00:14:25 - GPT-4o as a Co-Pilot for Content Creators (Brainstorming, Scripting, Editing)
00:17:30 - GPT-4o Sketch-to-Image / Storyboarding Power
00:19:27 - Example: Fully AI-Generated KFC Ad (Tools: Udio, Runway, Pika, Sora)
00:22:01 - Impact of GPT-4o Speed & Fluidity for Creators
00:22:22 - Disruption Beyond Content Creation (Design, Music, Coding, Admin)
00:23:34 - Concerns About AI: Job Displacement, Generic Content, Deepfakes
00:25:07 - Deepfakes & Fraud Potential (Example video)
00:28:34 - AI Concerns Continued: Misinformation, Copyright, Bias & Ethics
00:29:23 - Example of AI Bias in Image Generation
00:30:44 - Navigating the AI Landscape: Augmentation vs. Replacement
00:31:33 - GPT-4o Wrap-up & Call for Discussion
00:32:07 - Sound Bite Challenge: Can You Guess This Sound?
00:32:31 - Segment: Scrolling Deep Intro
00:33:13 - Scrolling Deep: Wing - Korean Beatbox Champion
00:34:43 - Scrolling Deep: Anti-Phone-Slap Helmet Invention
00:35:11 - Scrolling Deep: Student Thanks ChatGPT for Thesis (Uh Oh!)
00:36:01 - Episode Recap
00:37:08 - Outro & Call to Subscribe
00:37:28 - Blooper/Cut Content Snippet

AW360 Live Podcast
AI, Bias & the Future of Creativity: A Conversation with Zoe Eagle

Apr 2, 2025 · 16:11


Live from Advertising Week Europe 2025 at 180 Studios in London, this episode of the AW360 Podcast features a powerful conversation with Zoe Eagle, CEO of Iris UK. We dive into the promises and pitfalls of AI in marketing—especially how bias can creep into the tools we use and the stories we tell. Zoe shares …

The David Knight Show
Fri Episode #1979: Authoritarian Bullying as a Red Herring to Impose Control Through AI & “StableCoins”

Mar 28, 2025 · 181:04


Bill Gates' AI Predictions
AI Apocalypse Unveiled: Bill Gates Predicts the End of Humanity in a Decade. Buckle up for a wild ride as Bill Gates drops a bombshell on Jimmy Fallon's show, claiming artificial intelligence will obliterate jobs, replace doctors and teachers, and leave humans obsolete within 10 years. Is this a visionary forecast or a sinister plot to usher in a technocratic nightmare and establish a NEW unchallengeable authority figure?

The Future of Jobs in an AI World
Humans Need Not Apply: Gates' Shocking Vision of an AI-Dominated Workforce. Gates boldly declares most jobs will vanish as AI takes over, leaving only three human roles standing, and they all are connected to him. Will you be one of the lucky few, or are you destined to be sidelined by machines? The future of work just got terrifyingly real!

AI as the New God
Blasphemy in Binary: Is AI the False Deity Replacing God? Critics scream heresy as AI is poised to become the ultimate authority, replacing human experts and even divine wisdom. From Fauci to chatbots, the show warns of a dystopian shift where technology dictates your life. Is this progress or a rebellion against the Almighty?

AI Bias and Lies
Liar, Liar, Code on Fire: AI's Dirty Secret of Deception Exposed. Cambridge virologists catch ChatGPT red-handed, fabricating diseases and confessing its lies! With biases baked in by low-paid programmers, AI's "truth" is a sham. Can we trust this tech, or is it a Pandora's box of misinformation ready to ruin us all?

Economic Impacts of AI and Net Zero
Skyrocketing Costs and Empty Skies: AI and Net Zero Ground the Masses. Qantas spills the beans: air travel's about to become a luxury only the elite can afford, thanks to net zero madness. Cars? Add $12,000 to the price tag, courtesy of Trump's tariffs. The economy's crumbling—will you be left grounded and broke?

Trump's Tariffs, Planned Misdirection, and Crypto Schemes
Trillion-Dollar Heist: Trump's Tariffs and Stablecoins to Rob You Blind. Trump's tariff chaos slaps a $100 billion tax on cars according to White House staff (Trump boasts $300-$600 billion), while his crypto cronies push "stablecoins" that aren't stable, private, or even coins! It's a massive wealth grab disguised as patriotism—will you own nothing while they rake in trillions?

War and Bombing Campaigns
Blood, Bombs, and Bragging: America's Endless Killing Spree Exposed. Yemen bleeds as Trump boasts of "successful" airstrikes—the US has averaged 46 airstrikes PER DAY since 2000! It's bipartisan disregard for civilian slaughter, hidden behind "national security." Will the war machine ever stop, or are we all complicit in this carnage?

Greenland and Geopolitical Games
Greenland Rejects the Vances: Trump's Arctic Land Grab Sparks Outrage. Greenlanders slam the door on JD Vance and Usha's visit as Trump eyes their land for rare earth riches. Denmark cries foul, Putin warns of encroachment, and locals say no to being pawns in America's crony capitalist chess game. Is this a takeover in the making?

Surveillance and Facial Recognition
Big Brother's New Gang: AI Face-Scanners Turn Cops into Thugs! London's streets sprout permanent facial recognition cameras, misidentifying innocents and unleashing police brutality. The show warns of a surveillance state where AI flags you as a criminal—guilty or not. Will you be SWAT-ed by the "face-recog" hit list?

Propaganda in Media
Netflix's Mind Games: Adolescence Pushes a Sinister Digital ID Agenda. The film Adolescence isn't just entertainment—it's propaganda to lock kids off the internet with digital IDs, backed by UK politicos and Netflix's Bernays bloodline. Is this art or a calculated move to control the next generation?

Spiritual Resistance
God vs. AI: The Ultimate Battle for Your Soul Begins Now. As AI and technocrats play God, the show calls for prayer and discernment to fight back. Gates' error-riddled machines can't match the Almighty's truth. Will faith topple this silicon tower of Babel, or are we doomed to digital slavery?

Jack Lawson's Civil Defense Manual Returns with a Vengeance Amid a Nation on the Brink
He's back—Jack Lawson, the fearless voice of survival, storms onto the scene with Civil Defense Manual, a revamped new website, and a stockpile of three tons of books ready to arm Americans with the knowledge to face the chaos ahead. After a maddening four-month blackout blamed on DEI disasters at his distributor, Lawson's breaking free from the shackles of Big Tech censorship—Amazon's 'Communist Commissar' crew axed his account, but they can't stop him now! With a Substack to rally the troops, Lawson's not just selling books—he's igniting a movement. As he warns of a 'Fourth Turning' crisis barreling toward 2029, with economic collapse and war looming, he's arming patriots with free resources and a Certified American program to rebuild a civil defense from scratch.

If you would like to support the show and our family, please consider subscribing monthly via SubscribeStar: https://www.subscribestar.com/the-david-knight-show
Or you can send a donation through:
Mail: David Knight, POB 994, Kodak, TN 37764
Zelle: @DavidKnightShow@protonmail.com
Cash App: $davidknightshow
BTC: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7
Money should have intrinsic value AND transactional privacy: go to DavidKnight.gold for great deals on physical gold/silver.
For 10% off Gerald Celente's prescient Trends Journal, go to TrendsJournal.com and enter the code KNIGHT.
For 10% off supplements and books, go to RNCstore.com and enter the code KNIGHT.
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-david-knight-show--2653468/support

The REAL David Knight Show
Fri Episode #1979: Authoritarian Bullying as a Red Herring to Impose Control Through AI & “StableCoins”

Mar 28, 2025 · 181:04


Become a supporter of this podcast: https://www.spreaker.com/podcast/the-real-david-knight-show--5282736/support

AI for Kids
What If Your First Teacher Was an AI? (Middle+)

Mar 18, 2025 · 39:12 · Transcription available


Dipti Bhide, CEO and co-founder of LittleLit Kids AI, reveals how the first generation of AI-first children are learning to safely navigate AI through kid-friendly tools and experiences. She shares her journey from building tech for adults to creating the world's first all-in-one AI platform specifically designed for elementary and middle school children.

• An entire generation of children are learning AI before they learn Google search
• LittleLit AI was inspired by Dipti's experience teaching her neurodivergent son using personalized AI-generated math problems
• The "Whole Child AI" framework teaches kids not just how to use AI but what it is, how it works, and its limitations
• Children need to understand the difference between human and AI interaction for safety reasons
• AI literacy doesn't require coding knowledge - it's about communication skills
• Kids should learn AI basics before jumping into creative applications
• Understanding AI bias through hands-on experiments helps children develop critical thinking
• Teaching ethics means helping kids see AI as a creative tool, not a shortcut for cheating

Get 15% off LittleLit's annual membership with code AIFORKIDS15. This includes full access to the Whole Child AI Curriculum Adventures, all personalized AI tutors, and the Creative AI Arcade. Sign up! For educators looking to level up their AI teacher skills, a free K-12 AI Teacher Certificate Course is also available.

Resources:
Whole AI Kids Book
Midjourney
Everyone AI by Anne-Sophie Seret
Khanmigo

Help us become the #1 podcast for AI for Kids. Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"

Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow us on Instagram and YouTube, find our books and free AI worksheets on Amazon, and support us on Patreon: patreon.com/AiDigiTales. Listen, rate, and subscribe to AI for Kids on your favorite podcast platform.

Education Talk Radio
Artificial Intelligence: Real Talk - Navigating AI Bias and Access with Dr. Marlena Ward Dodds

Education Talk Radio

Play Episode Listen Later Mar 12, 2025 42:25


Today we're excited to introduce our newest podcast here on the Be Podcast Network, Artificial Intelligence: Real Talk, hosted by Eli Marshall Davis. Eli is an Interventionist at Hemphill Elementary in Birmingham, AL, where he sits on the leadership team and chairs the MTSS/Problem Solving Team committee. He is a Ph.D. candidate at the University of South Carolina, focusing on Teaching and Learning, with research interests in social justice, epigenetics, and the transformative potential of AI in education. In this episode, host Eli welcomes his longtime friend Dr. Marlena Ward Dodds. They explore artificial intelligence, particularly its impact on education and inclusivity, the challenges of online learning, and the digital divide. Dr. Dodds shares her insights and experiences with AI, both in higher education and in the broader context of learning and development. They also discuss the ethical implications and biases in AI, emphasizing the importance of critical thinking and equitable access to technology.

Listen to Artificial Intelligence: Real Talk and subscribe here: https://avisionforlearning.com

We're thrilled to be sponsored by IXL. IXL's comprehensive teaching and learning platform for math, language arts, science, and social studies is accelerating achievement in 95 of the top 100 U.S. school districts. Loved by teachers and backed by independent research from Johns Hopkins University, IXL can help you do the following and more:
Simplify and streamline technology
Save teachers' time
Reliably meet Tier 1 standards
Improve student performance on state assessments

AI and the Future of Work
The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust

AI and the Future of Work

Play Episode Listen Later Mar 6, 2025 19:27


Coinciding with International Women's Day this week, this special episode of AI and the Future of Work highlights key conversations with women leading the way in AI ethics, governance, and accountability.In this curated compilation, we feature four remarkable experts working to create a more equitable, trustworthy, and responsible AI future:

The Next Wave - Your Chief A.I. Officer
Grok 3 vs Claude 3.7 vs GPT-4.5: Which Update is The Best?

The Next Wave - Your Chief A.I. Officer

Play Episode Listen Later Mar 4, 2025 46:59


Episode 48: How do the latest updates to large language models stack up against each other? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are joined by Matthew Berman (https://x.com/MatthewBerman), an expert in deep-diving and testing the nuances of large language models. In this episode, the trio discusses the recent releases of Grok 3, Claude 3.7, and GPT-4.5, analyzing their strengths, weaknesses, and unique features. Tune in to learn which model might be best for your needs, from coding and real-time information to creative writing and unbiased truth-seeking. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Exploring New AI Models (05:35) Inconsistent AI Code Performance (06:26) Redesigning Benchmarks for Modern Models (11:33) AI Bias Amplification on Social Media (15:11) AI Bias and Human Oversight (17:49) Claude 3.7: Improved Coding Abilities (20:30) Claude Update: Better Code, Worse Chat (23:19) Resistance to Switching IDE from VS Code (28:05) Video Producer App Preview (29:55) Showcasing Nvidia Digits Prototype (34:00) GROK Model's Distributed Training (36:31) Optimistic Perspective on Future Upgrades (40:59) Excited for GPT-5 Launch (42:08) Claude 3.7 Excels in Coding — Mentions: Matthew Berman: https://x.com/MatthewBerman Forward Future: https://www.forwardfuture.ai/ Grok 3: https://x.ai/blog/grok-3 Claude 3.7: https://www.anthropic.com/news/claude-3-7-sonnet GPT-4.5: https://openai.com/index/introducing-gpt-4-5/ Perplexity: https://www.perplexity.ai/ Cursor: https://www.cursor.com/ Gemini: https://ai.google/updates/ Check out this episode on YouTube: https://www.youtube.com/watch?v=pWXT8NZFG_Y Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out 
Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

Unlocking The AI Advantage
⚡ AI Ethics Exposed: The Truth About Fairness & Bias!

Unlocking The AI Advantage

Play Episode Listen Later Feb 27, 2025 56:31


Do you think AI can ever be truly unbiased?
How should AI companies balance transparency with security when disclosing AI decision-making processes?
Have you ever encountered AI-generated content that felt biased? How did it impact your trust in AI?
What are your thoughts on using AI to improve fairness in finance and healthcare?
Would you prefer AI models designed for general use, or small, purpose-built AI models for specific tasks?

Hey there, tech enthusiasts!

The Chad & Cheese Podcast
The Future is Now with Stefan Youngblood

The Chad & Cheese Podcast

Play Episode Listen Later Feb 17, 2025 38:02


HR Collection Playlist
The Future is Now with Stefan Youngblood

HR Collection Playlist

Play Episode Listen Later Feb 17, 2025 38:02


HR Interviews Playlist
The Future is Now with Stefan Youngblood

HR Interviews Playlist

Play Episode Listen Later Feb 17, 2025 38:02


EM360 Podcast
Navigating AI Bias in Data Analysis

EM360 Podcast

Play Episode Listen Later Feb 7, 2025 18:19


Hear Wilson Chen discuss the complexities of AI in data analysis, particularly focusing on the challenges of bias, misinformation, and the importance of human expertise in interpreting AI-driven insights. Wilson shares insights from his experience as the founder of Permutable AI, a startup that builds real-time LLM engines, and emphasizes the need for a balanced view in understanding geopolitical trends and market intelligence. The discussion also highlights the critical checks necessary to ensure the accuracy and reliability of AI-generated information.

Key Takeaways:
AI systems can amplify existing biases in data.
A balanced view of information is crucial for accuracy.
Human expertise is essential in interpreting AI outputs.
Organizations must critically assess AI-driven insights.
Real-time data analysis can enhance decision-making.
Misinformation can spread if AI is not properly regulated.
Ethical considerations are vital in AI usage.
The integrity of sources impacts AI reliability.
AI can simplify complex geopolitical dynamics.
Permutable.ai aims to provide actionable insights for businesses.

Chapters:
00:00 Introduction to AI and Data Analysis
05:01 Addressing Bias in AI Systems
09:55 The Role of Human Expertise in AI
14:53 Trusting AI-Driven Market Intelligence
20:01 Conclusion and Future Insights

The Shintaro Higashi Show
What is ChatGPT?

The Shintaro Higashi Show

Play Episode Listen Later Jan 27, 2025 52:33


Large language models like ChatGPT are transforming the way we interact with AI. Peter explains its inner workings, how it understands language through probabilities, and its applications across various domains. Shintaro brings relatable scenarios, exploring AI's practical uses, its limitations, and ethical concerns like bias and transparency. They also touch on the future of AI, from prompt engineering to potential advancements like agentic AI and multitask robotics. Whether you're curious about how ChatGPT works or how it might shape the future, this conversation offers engaging insights and practical takeaways. (00:00:00) Introduction (00:00:45) What Is ChatGPT? (00:02:19) Language Models and Probabilities Explained (00:05:28) Making AI Understandable for Everyone (00:06:43) ChatGPT's Limitations and Real-World Use Cases (00:13:17) Ethical Concerns and AI Bias (00:17:36) What Is Prompt Engineering? (00:20:58) AI in Specialized Applications (00:25:21) The Turing Test and AI Sentience (00:30:31) Full Self-Driving and AI in Robotics (00:44:33) What's After ChatGPT? (00:50:50) Closing Thoughts If you're in business, then you have customer churn. Whether you're building a startup, growing a mom & pop shop, or operating in a fortune 500 powerhouse, Hakuin.ai measures, predicts, and improves your customer retention. https://hakuin.ai

Women in Data Science
Predicting Responsibly: Claudia Perlich on AI, Bias, and the Art of Data Science

Women in Data Science

Play Episode Listen Later Jan 16, 2025 46:02


Predictive Modeling (4:15)
Human judgement and processes (14:06)
Imperfection in models (21:40)

Bio
Claudia Perlich is Managing Director and Head of Strategic Data Science for Investment Management at Two Sigma, where she has worked for seven years. In this role, Claudia is responsible for developing innovative alpha strategies at the intersection of alternative data, thematic hypotheses and machine learning in public markets. Claudia joined Two Sigma from Dstillery, an AI ad targeting company, where she worked as Chief Scientist. Claudia began her career in data science at the IBM Watson Research Center, concentrating on research in data analytics and machine learning for complex real-world domains and applications. Since 2011, Claudia has served as an adjunct professor teaching Data Mining in the M.B.A. program at New York University's Stern School of Business. Claudia received a Ph.D. in Information Systems from the Stern School of Business, New York University, holds an M.S. in Computer Science from Colorado University and a B.S. in Computer Science from Technical University Darmstadt, Germany.

Connect with Claudia
Claudia Perlich on LinkedIn

Connect with Us
Margot Gerritsen on LinkedIn
Follow WiDS on LinkedIn (@Women in Data Science (WiDS) Worldwide), Twitter (@WiDS_Worldwide), Facebook (WiDSWorldwide), and Instagram (wids_worldwide)
Listen and Subscribe to the WiDS Podcast on Apple Podcasts, Google Podcasts, Spotify, Stitcher

The Health Ranger Report
Brighteon Broadcast News, Jan 8, 2025 – The REAL reason why Trump wants Greenland, and NVIDIA's mind-blowing new tech that will change the world forever

The Health Ranger Report

Play Episode Listen Later Jan 8, 2025 149:15


Register free at https://brightu.com to watch the full A.G.E.S Conference: Cleansing the Causes of Cancer stream - Nvidia's Groundbreaking Announcement and Its Implications (0:11) - Upcoming Interviews and Health Ranger Store Promotions (5:43) - Trump's Strategic Interests in Greenland and Canada (10:19) - Global Conflict and U.S. Decline (28:14) - Mark Zuckerberg's U-Turn on Censorship (29:44) - The Future of Free Speech and Decentralized Platforms (47:12) - Nvidia's New Hardware and Its Impact on AI Capabilities (47:35) - The Role of AI in Future Technological Advancements (1:07:31) - The Ethical and Moral Implications of AI (1:16:08) - The Future of Decentralized Knowledge and AI (1:16:34) - Advancements in AI and Technology (1:20:28) - AI's Role in Human Civilization (1:28:22) - Introduction of Guests and New Administration (1:29:58) - Censorship and AI Bias (1:32:03) - Decentralization and Health Transformation (1:34:30) - AI and Medicine: Centralization vs. Decentralization (1:51:23) - AI and Transhumanism: Threats and Opportunities (2:02:12) - Cancer Awareness and Alternative Treatments (2:05:37) - Conclusion and Call to Action (2:23:59) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. 
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com

The Agency Accelerator
7 Ethical AI Pitfalls and How Your Agency Can Avoid Them

The Agency Accelerator

Play Episode Listen Later Dec 17, 2024 18:42


Concerned about the ethical dilemmas of using AI in your agency? In an era where AI is revolutionising the marketing landscape, understanding its ethical implications is more important than ever. For marketing professionals, navigating the challenges of data privacy, bias in AI models, and transparency can be daunting, yet crucial to maintaining trust and integrity with clients and consumers alike. Who better to discuss this topic with than my AI coach clone, AskRobAnything.

You'll discover how to:
Implement strategies to ensure AI systems are free from bias and discrimination, promoting fair and inclusive marketing practices.
Handle client inquiries about your use of AI with transparency and reassurance, building stronger client relationships.
Navigate data privacy concerns and comply with regulations, safeguarding your agency from legal repercussions and maintaining client trust.

Today's highlights
[00:00] Introduction
[00:17] Welcoming AI Co-Host
[00:41] Key AI Ethics Challenges
[03:11] Ensuring AI Fairness
[04:33] Dealing with AI Bias
[06:10] AI as a Support Tool
[08:53] Discussing Client Transparency
[10:27] Handling Data Privacy
[11:41] Risks of Ignoring AI Ethics
[13:09] Future of AI in Agencies
[15:09] Final Thoughts and Wrap-Up

Rate, Review, & Subscribe on Apple Podcasts
“I enjoy listening to The Agency Accelerator Podcast. I always learn something from every episode.” If that sounds like you, please consider rating and reviewing my show! This helps me support more people like you to move towards a Self-Running Agency.

How to leave a review on Apple Podcasts: Scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then, let me know what you loved most about the episode! Also, if you haven't done so already, subscribe to the podcast. I'm adding a bunch of bonus episodes to the feed and, if you're not subscribed, there's a good chance you'll miss out. Subscribe now! 
Useful links mentioned in this episode:
Have your own conversation with AskRobAnything (for free!)
Scale your agency with The Self-Running Agency Blueprint

The Agency Accelerator
7 Ethical AI Pitfalls and How Your Agency Can Avoid Them

The Agency Accelerator

Play Episode Listen Later Dec 12, 2024 16:43


Concerned about the ethical dilemmas of using AI in your agency? In an era where AI is revolutionising the marketing landscape, understanding its ethical implications is more important than ever. For marketing professionals, navigating the challenges of data privacy, bias in AI models, and transparency can be daunting, yet crucial to maintaining trust and integrity with clients and consumers alike. Who better to discuss this topic with than my AI coach clone, AskRobAnything.

You'll discover how to:
Implement strategies to ensure AI systems are free from bias and discrimination, promoting fair and inclusive marketing practices.
Handle client inquiries about your use of AI with transparency and reassurance, building stronger client relationships.
Navigate data privacy concerns and comply with regulations, safeguarding your agency from legal repercussions and maintaining client trust.

Today's highlights
[00:00] Introduction
[00:17] Welcoming AI Co-Host
[00:41] Key AI Ethics Challenges
[03:11] Ensuring AI Fairness
[04:33] Dealing with AI Bias
[06:10] AI as a Support Tool
[08:53] Discussing Client Transparency
[10:27] Handling Data Privacy
[11:41] Risks of Ignoring AI Ethics
[13:09] Future of AI in Agencies
[15:09] Final Thoughts and Wrap-Up

Rate, Review, & Subscribe on Apple Podcasts
“I enjoy listening to The Agency Accelerator Podcast. I always learn something from every episode.” If that sounds like you, please consider rating and reviewing my show! This helps me support more people like you to move towards a Self-Running Agency.

How to leave a review on Apple Podcasts: Scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then, let me know what you loved most about the episode! Also, if you haven't done so already, subscribe to the podcast. I'm adding a bunch of bonus episodes to the feed and, if you're not subscribed, there's a good chance you'll miss out. Subscribe now! 
Useful links mentioned in this episode:
Have your own conversation with AskRobAnything (for free!)
Scale your agency with The Self-Running Agency Blueprint

Today in Health IT
Newsday: Private Equity Risks, AI Bias, and Imaging Evolution with Lindsey Jarrell

Today in Health IT

Play Episode Listen Later Dec 11, 2024 17:45 Transcription Available


December 11, 2024: Lindsey Jarrell, CEO at Healthlink Advisors, joins Bill for the news. Together, they unpack how operating margins reflect the lingering struggles of post-pandemic healthcare systems. Are budget cuts and private equity reshaping the future of care delivery? The conversation explores critical questions about the modernization of imaging systems, the role of AI in consulting and clinical workflows, and the growing intersection of regulation, transparency, and bias in healthcare innovation. Are AI advancements in healthcare moving fast enough—or simply infiltrating through enterprise platforms?

Key Points:
01:49 Kaufman Hall Report Analysis
03:24 Post-Election Healthcare Concerns
06:44 RSNA Conference Highlights
07:12 AI in Healthcare: Challenges and Opportunities
13:07 Future of AI and Healthcare Investments

News articles:
RSNA president offers 6 ingredients for a 'better AI future'
Research Reports - Healthcare, Financial, Higher Ed | Kaufman…
2024: The State of Generative AI in the Enterprise

Big DREAM School - The Art, Science, and Soul of Rocking OUR World Doing Simple Things Each Day
Ikigai Magic: From FedEx to Community Builder- Jason "Geekigai" Hodlers

Big DREAM School - The Art, Science, and Soul of Rocking OUR World Doing Simple Things Each Day

Play Episode Listen Later Nov 27, 2024 78:59 Transcription Available


In this episode, we dive into the world of Bitcoin and the innovative platform Satlantis with Jason Hodlers, aka "Geekigai", the ambassador concierge at Satlantis. Jason shares his journey from a FedEx delivery driver to becoming a key player in the Bitcoin community, highlighting his passion for connecting people and fostering communities. We explore the concept of Satlantis, a Nostr client designed to bring Bitcoiners together globally, offering a blend of social networking and travel insights.

Jason also opens up about his personal life, sharing his experiences as a homeschooling father of six and his journey through various jobs before finding his purpose in the Bitcoin space. He discusses the importance of humility, continuous learning, and the transformative power of Bitcoin in promoting personal growth and societal change.

The conversation takes a heartfelt turn as both Jason and the host share their personal struggles with mental health, emphasizing the importance of community, purpose, and the support of loved ones in overcoming life's challenges. They discuss the broader implications of Bitcoin as a tool for peace and personal empowerment, advocating for a holistic approach to well-being.

Listeners are introduced to the concept of Ikigai, a Japanese philosophy that combines passion, mission, vocation, and profession, as Jason shares how he found his own Ikigai in the Bitcoin community. The episode also touches on the Spirit of Satoshi project, aiming to create a balanced AI model informed by Bitcoin principles.

Join us for an inspiring conversation about finding purpose, building community, and the revolutionary potential of Bitcoin and decentralized technologies.

https://x.com/geekigai
https://x.com/joinsatlantis
https://btctimes.com/21-questions/
https://www.satlantis.io

Heads Talk
233 - Neema Uthappa, CIO: Cyber Protect Series, Oliver Wyman - CIO Blueprint for AI Bias Mitigation and Strategic Cybersecurity Investment

Heads Talk

Play Episode Listen Later Nov 25, 2024 31:15


AI for Kids
How Kids can Shape AI's Future (Middle+)

AI for Kids

Play Episode Listen Later Nov 12, 2024 28:25 Transcription Available


Send us a text

Ever wondered how a teenager could shape the future of artificial intelligence? Meet Jaiden Li, a high school student and AI researcher with a unique journey from China to Singapore and then the US. Tune in to hear about her innovative vision for an AI translation tool that could break down language barriers in understanding policies and laws. Jaiden's passion for languages and math, along with her personal experiences, fuels a conversation that seamlessly blends cultural insights with AI's potential.

From addressing the psychological effects of consuming negative news to the pivotal role teens can play in AI regulation, this episode covers it all. We dive into the challenges facing young minds in today's digital landscape, exploring how AI can be leveraged to promote mental resilience by balancing news consumption. Jaiden provides her perspective on how students can contribute to shaping fair and effective AI policies, especially in educational contexts, highlighting the ethical dilemmas posed by technologies like deepfakes.

Whether you're a budding AI enthusiast or just curious about the intersection of technology and education, this episode promises to inspire and enlighten.

Resources:
TensorFlow Embedding Projector - A tool for visualizing high-dimensional data, such as word embeddings. Watch Jaiden demo the tool here.
Scratch - A visual programming platform for creating interactive stories, games, and animations.
3Blue1Brown - A YouTube channel with animated math explanations.
Girls for Algorithmic Justice - An organization focused on algorithmic fairness and addressing biases in AI.
Reuters Classifier - A popular dataset for text classification research.

Support the show
Help me become the #1 podcast and podcaster for AI for Kids.
Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Gift or get our books on Amazon or Free AI Worksheets
Listen, rate, and subscribe! 
Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform: Apple Podcasts, Amazon Music, Spotify, YouTube, or other. Like our content? Subscribe or feel free to donate to our Patreon here: patreon.com/AiDigiTales

Energy News Beat Podcast
ENB #223 AI's Energy Demands: The Future of Data Centers and Grid Challenges

Energy News Beat Podcast

Play Episode Listen Later Nov 7, 2024 27:19


In this Energy News Beat – Conversation in Energy, Stu Turley interviews Riley Trettel, VP of Data Center Development at Hut 8, about the growing energy demands driven by AI and Bitcoin mining, and the challenges facing U.S. grid operators. Riley explains Hut 8's focus on building large-scale data centers for AI model training, emphasizing the importance of scalable power interconnections and the role of microgrids, natural gas, and nuclear power in meeting future energy needs. They also discuss AI biases, alignment issues, and the rapid advancements in AI technology, highlighting the potential for a transformative future in energy and computing.

#aiinenergy #ai #nuclearpower #naturalgas

Check out Hut 8 here: https://hut8.io/
Please reach out to Riley on LinkedIn here: https://www.linkedin.com/in/rileytrettel/

Highlights of the Podcast
00:00 - Intro
01:00 - Hut 8's Role & AI Data Centers
02:30 - Future Grid Capacity Needs
03:47 - AI Energy Consumption & Global Impacts
05:36 - AI Deep Learning & Transformers
09:21 - AI Bias & Model Alignment
12:28 - Nuclear Power's Role in Data Centers
14:21 - Microgrids & Energy for Data Centers
19:05 - Natural Gas as a Solution
22:53 - Hut 8's Future & AI Training
26:50 - Closing Remarks & Contact Information

AI, Government, and the Future by Alan Pentz
AI Perspectives: Misinformation, Government Policy, and Content Authentication - A Round-up Episode

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Oct 29, 2024 28:44


In this special round-up episode of AI, Government, and the Future, we revisit compelling conversations with three distinguished guests. Alex Fink (Otherweb CEO/Founder), Irina Buzu (AI Advisor to Deputy Prime Minister of Moldova), and Jonathan Gillham (Originality.ai Founder) share their insights on AI's impact on content authenticity, government policy, and the future of digital trust.

FOX on Tech
Amazon AI Bias Probe

FOX on Tech

Play Episode Listen Later Oct 28, 2024 1:45


Lawmakers have met with a tech giant over claims of political bias. Learn more about your ad choices. Visit megaphone.fm/adchoices

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 377: Confronting AI Bias and AI Discrimination in the Workplace

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Oct 10, 2024 27:06


Send Everyday AI and Jordan a text message

Think AI is neutral? Think again. This is the workplace impact you never saw coming. What happens when the tech we rely on to be impartial actually reinforces bias? Join us for a deep dive into AI bias and discrimination with Samta Kapoor, EY's Americas Energy AI and Responsible AI Leader.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan and Samta questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Business Leaders Confronting AI Bias and Discrimination
2. AI Guardrails
3. Bias and Discrimination in AI Models
4. AI and the Future of Work
5. Responsible AI and the Future

Timestamps:
02:10 About Samta Kapoor and her role at EY
05:33 AI has risks, biases; guardrails recommended.
06:42 Governance ensures technology is scaled responsibly.
13:33 Models reflect biases; they mirror societal discrimination.
16:10 Embracing AI enhances adaptability, not job replacement.
19:04 Leveraging AI for business transformation and innovation.
23:05 Technology rapidly changing requires agile adaptation.
25:12 Address AI bias to reduce employee anxiety.

Keywords: generative AI, AI bias, AI discrimination, business leaders, model bias, model discrimination, AI models, AI guardrails, AI governance, AI policy, Ernst and Young, AI risk, AI implementation, AI investment, AI hype, AI fear, AI training, workplace AI, AI understanding, AI usage, AI responsibilities, generative AI implementation, practical AI use cases, AI audit, AI technology advancement, multimodal models, AI tech enablement, AI innovation, company AI policies, AI anxiety.

Beyond the Hedges
AI, Bias, and Ethics in Hiring feat. Fred Oswald

Beyond the Hedges

Play Episode Listen Later Oct 9, 2024 39:05


As AI grows and becomes more accessible, it's changing our lives in many ways—including the workforce. Our guest today is an expert in organization and workforce development who will tell us how AI is shaping the hiring process. Fred Oswald is a Professor at Rice and the Herbert S. Autrey Chair in Social Sciences. His Organization & Workforce Laboratory (OWL) at Rice focuses on selection and job performance models in organizational, educational, and military contexts, as predicted by individual differences (such as personality and ability) as well as group differences (workgroup characteristics, gender, race/ethnicity, and culture). In our first episode of Season 3, Fred joins host David Mansouri. They delve into Fred's journey to Rice, his research on testing and job performance models, and the work being done in his lab at Rice. The conversation highlights the ethical and practical applications of AI in organizational and educational settings, exploring how AI tools can shape hiring practices and support teaching and learning.

Let us know you listened to the episode and leave questions for future guests by completing this short form.

Episode Guide:
00:46 Fred Oswald's Journey to Rice University
02:27 Research at the Organization and Workforce Laboratory (OWL)
03:29 Student Research Projects in OWL
07:16 AI Applications in Organizational Decision-Making
13:56 Ethics and Challenges of AI in Employment
23:29 AI in Education: Opportunities and Concerns
28:02 Skills-Based Hiring and the Future of Work
34:46 Rapid Fire Questions and Conclusion

Beyond The Hedges is a production of Rice University and is produced by University FM.

Show Links:
National AI Advisory Committee - AI.gov
Board on Human-Systems Integration | National Academies
Co-Intelligence by Ethan Mollick
Rice Alumni
Association of Rice Alumni | Facebook
Rice Alumni (@ricealumni) | X (Twitter)
Association of Rice Alumni (@ricealumni) | Instagram
Sallyportal

Host Profiles:
David Mansouri | LinkedIn
David Mansouri '07 | Alumni | Rice University
David Mansouri (@davemansouri) | X
David Mansouri | TNScore

Guest Profiles:
Organization & Workforce Laboratory (OWL)
Twitter/X
LinkedIn
About Fred
Department of Psychological Sciences at Rice
Industrial/Organizational Psychology at Rice

Episode Quotes:

Is AI turning users into critics?
21:17: I've noticed in my own experimentation—no surprise, because I think there's a common experience, you know, with generative AI. With the language models, you often become a critic in ways that you, of course, criticize or advise your own work, but when a GPT is producing language, say, summarize this paper for me or something like that, you shift into the role of a critic and say, is this good enough? However, you define that good enough for you, good enough for your audience, your stakeholders, you know, both in terms of the thematic, the substance—what is there? Does it seem right? But also, critically, what is missing? What didn't show up? And really working with that, I think, changes kind of your approach to how you do some of that work.

Using AI to empower the talents we have
27:47: You do have to build the fundamentals to understand what AI is doing, so in that sense, we can't use AI as a crutch. We have to use it as a way to empower the talents we already have and are building ourselves.

Examining bias and AI's influence in decision-making
08:59: How does bias work when we talk about AI as biased? Well, what does that mean in terms of the data and the decisions that are made from those data? This work gets embedded in these organizations. So, I'm not only concerned with the development of tests, but I'm concerned about the context in which they're being used.

Buffalo, What’s Next?
What's Next? | Project 2025 Series Ep.1: Impact on Education | UB CII on Tackling Misinformation and AI Bias

Buffalo, What’s Next?

Play Episode Listen Later Oct 1, 2024 59:59


For today's episode, we feature two conversations. To start us off, we begin with the first episode of our Project 2025 series. The series aims to go in-depth as to what the impact of the Project would look like if implemented. For the first episode of the series, Thomas O'Neil-White is joined by returning guest Wil Green, the Assistant Dean of Outreach and Community Engagement at the Graduate School of Education at UB. The two discuss the potential impact of Project 2025 on education. For our second conversation, we welcome David Castillo, E. Bruce Pitman, and Jasmina Tacheva of the UB Center for Information Integrity. Jay Moran joins the three in a conversation about how misinformation and generative AI can affect access to reputable information on elections, as well as an upcoming film screening of “Coded Bias”, a film that discusses how certain facial recognition programs are not programmed to analyze dark-skinned faces accurately.

Public Sector Podcast
Building a Modern and Agile Workforce - Shakira Naraine - Episode 105

Public Sector Podcast

Play Episode Listen Later Aug 26, 2024 28:50


How can organizations build an agile, diverse workforce in the age of AI and automation? This episode tackles that pressing question with Shakira Naraine, Chief People Officer of the Toronto Transit Commission. Shakira delves into practical strategies for implementing successful remote work policies, leveraging AI to enhance workforce capabilities, and fostering genuine inclusivity in government organizations. She shares real-world examples from the TTC's initiatives, including their innovative mentorship program and AI-assisted recruitment process. Tune in to discover how modern leaders are balancing technological advancement with human-centered policies to create adaptable, equitable workplaces. Shakira Naraine, Chief People Officer, Toronto Transit Commission For more great insights head to www.PublicSectorNetwork.co  

The Full Desk Experience
Best of FDE | AI Tools in Recruitment: Best Practices and Key Considerations with Aaron Elder, CEO at Crelate

The Full Desk Experience

Play Episode Listen Later Aug 22, 2024 18:23


In this special best-of series, we're diving into one of our most compelling discussions on the integration of AI in staffing and recruiting firms. We're revisiting a candid conversation with Aaron Elder, CEO at Crelate. We dive deep into the concerns and opportunities AI presents in the recruiting industry, discussing everything from the potential over-reliance on automated systems to the nuanced roles AI can play in enhancing rather than replacing human effort. Aaron offers invaluable insights into how AI can be a game-changer and where its limits lie, touching on practical steps and pitfalls to avoid. Whether you're an AI enthusiast or a cautious skeptic, this episode promises to deliver actionable advice and thought-provoking perspectives. Tune in as we explore how AI can help your firm stay cutting-edge without losing the human touch. Join us on this enlightening journey through the world of AI in recruiting. Let's dive in.
______________________
Follow Crelate on LinkedIn: https://www.linkedin.com/company/crelate/
Subscribe to our newsletter: https://www.crelate.com/blog/full-desk-experience

Cyber Security Weekly Podcast
Episode 406 - Technology Leadership in the AI era

Cyber Security Weekly Podcast

Play Episode Listen Later Aug 11, 2024


Prior to joining Seaco as CIO, Damian Leach held the position of Chief Technology Officer for Workday Asia Pacific and Japan. Before that, Damian spent 13 years in the banking and finance industry in global technology roles, most recently with Standard Chartered Bank in Singapore. Damian led the bank's digital transformation program to move to the cloud and pioneered voice biometric technologies for retail banking customers. Before coming to Asia, Damian spent many years managing professional services teams developing core banking interactive technology systems in Europe. Damian is a certified AI professional, having studied AI Bias and Governance at NTU, and holds an EMBA focused on Asian leadership and entrepreneurship, with overseas segments at the Wharton School, University of Pennsylvania, and UC Berkeley Haas. In his spare time, Damian coaches, mentors, and serves as a panelist for startup and innovation contests across Asia.
---
In this interview, Damian shares how Seaco, a global company headquartered in Singapore, leverages a network of shipping ports and depots and has over 3 million TEUs in circulation. The Seaco IT team, in partnership with the business, is running a series of experiments with AI and big data to help the company adapt and stay ahead of the curve. While there is a lot of hype surrounding AI, Damian emphasizes the importance of understanding the core business problems before jumping to technology solutions. He introduces the ACE framework (Analytics, Conversational, and Experience), which can help pinpoint the most relevant business cases for AI adoption. For instance, at Seaco, his team evaluated 30 potential use cases and narrowed them down to the 3 that deliver the biggest boost to productivity and revenue. However, successful AI adoption goes beyond technology. Damian highlights the importance of employee and stakeholder buy-in. This means addressing fears of job displacement and showcasing how AI can actually enhance productivity. For example, he explains how success stories from pilot projects can pave the way for realizing the technology's full impact. He also emphasizes fostering a culture of "psychological safety" where employees feel comfortable experimenting with new technologies. Looking to the future, he acknowledges that AI presents both opportunities and challenges for business leaders. As such, it's essential to have a clear vision and strategy in place, along with a commitment to ongoing learning and development for employees.
Recorded 29th May 2024, ATxSG Singapore Expo, 12.30pm.

The Real Estate Crowdfunding Show - DEAL TIME!
Can AI Solve the Housing Crisis?

The Real Estate Crowdfunding Show - DEAL TIME!

Play Episode Listen Later Jul 23, 2024 40:37


My YouTube show/podcast guest this week, Chris Christensen, NAR's Director of Technology Policy, is particularly interesting (not that other shows are not!) because Chris handles 'policy', which means much of his daily life is spent on or around The Hill in DC. Though he represents the National Association of Realtors, so very much a real estate focus, Chris deals with a broad range of issues pertaining to the implementation of AI – many that go way beyond plain ol' real estate applications. Watch (or listen to) this episode to hear Chris discuss, through the lens of policymaking at the highest levels (can you discuss something through a lens?), copyright and privacy issues amongst other things, stressing how regulation is evolving to keep up. He talks about how the NAR is actively exploring AI use cases, focusing initially on enhancing internal tools, and how AI models streamline data accessibility on the NAR website, aiding in underwriting guidelines and member resources. Housing data and predictive analytics emerge as key areas where AI can provide significant value, and Chris explains how NAR members will be able to access this wealth of data. Here are some highlights we discuss:
AI in Real Estate: Chris explains how AI can enhance customer experience, streamline operations, and provide competitive advantages to agents who use it.
Regulatory Landscape: AI's impact on bias, copyright, and privacy, and the need for robust regulation.
Practical Applications: AI's role in transaction management, zoning, tax policy, and data accessibility on the NAR website.
Housing Crisis: AI's potential to improve housing affordability and liquidity (this is a biggy).
And to wrap up, I ask all guests the same three questions. Here are Chris's answers (the 3rd one is gnarly!)
1. Why should real estate investors or professionals be paying attention to AI today?
Embrace Change: AI is fundamentally changing core principles of real estate. Ignoring it will be perilous, while embracing it can lead to personal and business advantages.
Competitive Edge: Using AI can help you differentiate yourself among your peers, giving you a significant competitive edge.
Inexpensive Solutions: Many AI solutions are not expensive, offering easy and cost-effective ways to stand out and increase efficiency.
2. How do you use AI daily? What tools and apps do you use?
Note-Taking and CRM: AI tools are used for taking notes and managing customer relationships.
Presentations and Writing: AI assistants such as Gemini help in creating presentations and writing white papers.
Image Generation: Various image generation tools are used to visualize concepts.
3. One easy win using AI for listeners/viewers to try immediately:
OpenAI Covert Influence Operations Report: Chris recommends reading the recently released report on how AI is used for covert influence operations on social media. It makes for intense reading: https://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/
*****
The only podcast you need on real estate and AI.
Learn how other real estate pros are using AI to get ahead of their competition.
Get early notice of hot new game-changing AI real estate apps.
Walk away with something you can actually use in every episode.
PLUS, subscribe to my free newsletter and get:
• practical guides,
• how-to's, and
• news updates
All exclusively for real estate investors that make learning AI fun and easy and insanely productive, for free.
EasyWin.AI

The Road to Accountable AI
Diya Wynn: People-Centric Technology

The Road to Accountable AI

Play Episode Listen Later Jun 27, 2024 32:40 Transcription Available


Kevin Werbach speaks with Diya Wynn, the responsible AI lead at Amazon Web Services (AWS). Diya shares how she pioneered a formal practice for ethical AI at AWS, and explains AWS's "Well-Architected" framework to assist customers in responsibly deploying AI. Kevin and Diya also discuss the significance of diversity and human bias in AI systems, revealing the necessity of incorporating diverse perspectives to create more equitable AI outcomes. Diya Wynn leads a team at AWS that helps customers implement responsible AI practices. She has over 25 years of experience as a technologist scaling products for acquisition; driving inclusion, diversity & equity initiatives; and leading operational transformation. She serves on the AWS Health Equity Initiative Review Committee; mentors at Tulane University, Spelman College, and GMI; was a mayoral appointee in Environment Affairs for six years; and guest lectures regularly on responsible and inclusive technology.
Responsible AI for the greater good: insights from AWS's Diya Wynn
Ethics In AI: A Conversation With Diya Wynn, AWS Responsible AI Lead
Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.

Open||Source||Data
Eliminating AI Bias Through Inclusive Data Annotation with Andrea Brown

Open||Source||Data

Play Episode Listen Later Jun 18, 2024 45:56


Learn how Andrea Brown, CEO of Reliabl, is revolutionizing AI by ensuring diverse communities are represented in data annotation. Discover how this approach not only reduces bias but also improves algorithmic performance. Andrea shares insights from her journey as an entrepreneur and AI researcher.
Episode timestamps:
(02:22) Andrea's Career Journey and Experience with Open Source (Adobe, Macromedia, and Alteryx)
(11:59) Origins of Alteryx's AI and ML Capabilities / Challenges of Data Annotation and Bias in AI
(19:00) Data Transparency & Agency
(26:05) Ethical Data Practices
(31:00) Open Source Inclusion Algorithms
(38:20) Translating AI Governance Policies into Technical Controls
(39:00) Future Outlook for AI and ML
(42:34) Impact of Diversity Data and Inclusion in Open Source
Quotes
Andrea Brown: "If we get more of this with data transparency, if we're able to include more inputs from marginalized communities into open source data sets, into open source algorithms, then these smaller platforms that maybe can't pay for a custom algorithm can use an algorithm without having to sacrifice inclusion."
Charna Parkey: "I think if we lift every single platform up, then we'll advance all of the state of the art and I'm excited for that to happen."
Connect with Andrea
Connect with Charna

The Product Experience
Navigating AI bias: Insights and strategies - John Haggerty (Founder, The PM Insider)

The Product Experience

Play Episode Play 60 sec Highlight Listen Later May 22, 2024 39:20 Transcription Available


Discover the intricacies of bias in generative AI in product management with John Haggerty, Founder of The PM Insider. In this week's episode, John unveils the layered complexities of AI innovation, navigating the ethical realm of the AI revolution reshaping businesses.
Featured Links: Follow John on LinkedIn | The PM Insider | IBM AI Fairness 360 | Google's 'What If' AI Tool | Aequitas Bias and Fairness Toolkit | Deon Ethics Checklist | Kaggle | Perplexity AI | Midjourney
Our Hosts
Lily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath.
Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.

Critical Thinking - Bug Bounty Podcast
Episode 71: More VDP Chats & AI Bias Bounty Strats with Keith Hoodlet

Critical Thinking - Bug Bounty Podcast

Play Episode Listen Later May 16, 2024 105:21


Episode 71: In this episode of Critical Thinking - Bug Bounty Podcast, Keith Hoodlet joins us to weigh in on the VDP debate. He shares some of his insights on when VDPs are appropriate in a company's security posture, and the challenges of securing large organizations. Then we switch gears and talk about AI bias bounties, where Keith explains the approach he takes to identify bias in chatbots and highlights the importance of understanding human biases and heuristics to better hack AI.
Follow us on twitter at: @ctbbpodcast
We're new to this podcasting thing, so feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!
------ Links ------
Follow your hosts Rhynorater & Teknogeek on twitter:
https://twitter.com/0xteknogeek
https://twitter.com/rhynorater
------ Ways to Support CTBBPodcast ------
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
Sign up for Caido using the referral code CTBBPODCAST for a 10% discount.
Today's guest: Keith Hoodlet
https://securing.dev/
Resources:
Daniel Miessler's article about the security poverty line
https://danielmiessler.com/p/the-cybersecurity-skills-gap-is-another-instance-of-late-stage-capitalism/
Hacking AI Bias
https://securing.dev/posts/hacking-ai-bias/
Hacking AI Bias Video
https://youtu.be/AeFZA7xGIbE?si=TLQ7B3YtzPWXS4hq
Sarah Hoodlet's new book
https://sarahjhoodlet.com
Link to Amazon Page
https://a.co/d/c0LTM8U
Timestamps:
(00:00:00) Introduction
(00:04:09) Keith's Appsec Journey
(00:16:24) The Great VDP Debate Redux
(00:47:18) Platform/Hunter Incentives and Government Regulation
(01:06:24) AI Bias Bounties
(01:26:27) AI Techniques and Bugcrowd Contest

Paul's Security Weekly
AI & Hype & Security (Oh My!) & Hacking AI Bias - Caleb Sima, Keith Hoodlet - ASW #284

Paul's Security Weekly

Play Episode Listen Later May 7, 2024 64:57


A lot of AI security has nothing to do with AI -- things like data privacy, access controls, and identity are concerns for any new software and in many cases AI concerns look more like old-school API concerns. But...there are still important aspects to AI safety and security, from prompt injection to jailbreaking to authenticity. Caleb Sima explains why it's important to understand the different types of AI and the practical tasks necessary to secure how it's used. Segment resources: https://calebsima.com/2023/08/16/demystifing-llms-and-threats/ https://www.youtube.com/watch?v=qgDtOu17E&t=1s We already have bug bounties for web apps so it was only a matter of time before we would have bounties for AI-related bugs. Keith Hoodlet shares his experience winning first place in the DOD's inaugural AI bias bounty program. He explains how his education in psychology helped fill in the lack of resources in testing an AI's bias. Then we discuss how organizations should approach the very different concepts of AI security and AI safety. Segment Resources: https://securing.dev/posts/hacking-ai-bias/ https://www.defense.gov/News/Releases/Release/Article/3659519/cdao-launches-first-dod-ai-bias-bounty-focused-on-unknown-risks-in-llms/ Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-284

Paul's Security Weekly TV
Hacking AI Bias with Human Techniques - Keith Hoodlet - ASW #284

Paul's Security Weekly TV

Play Episode Listen Later May 7, 2024 31:47


We already have bug bounties for web apps so it was only a matter of time before we would have bounties for AI-related bugs. Keith Hoodlet shares his experience winning first place in the DOD's inaugural AI bias bounty program. He explains how his education in psychology helped fill in the lack of resources in testing an AI's bias. Then we discuss how organizations should approach the very different concepts of AI security and AI safety. Segment Resources: https://securing.dev/posts/hacking-ai-bias/ https://www.defense.gov/News/Releases/Release/Article/3659519/cdao-launches-first-dod-ai-bias-bounty-focused-on-unknown-risks-in-llms/ Show Notes: https://securityweekly.com/asw-284

Application Security Weekly (Audio)
AI & Hype & Security (Oh My!) & Hacking AI Bias - Caleb Sima, Keith Hoodlet - ASW #284

Application Security Weekly (Audio)

Play Episode Listen Later May 7, 2024 64:57


A lot of AI security has nothing to do with AI -- things like data privacy, access controls, and identity are concerns for any new software and in many cases AI concerns look more like old-school API concerns. But...there are still important aspects to AI safety and security, from prompt injection to jailbreaking to authenticity. Caleb Sima explains why it's important to understand the different types of AI and the practical tasks necessary to secure how it's used. Segment resources: https://calebsima.com/2023/08/16/demystifing-llms-and-threats/ https://www.youtube.com/watch?v=qgDtOu17E&t=1s We already have bug bounties for web apps so it was only a matter of time before we would have bounties for AI-related bugs. Keith Hoodlet shares his experience winning first place in the DOD's inaugural AI bias bounty program. He explains how his education in psychology helped fill in the lack of resources in testing an AI's bias. Then we discuss how organizations should approach the very different concepts of AI security and AI safety. Segment Resources: https://securing.dev/posts/hacking-ai-bias/ https://www.defense.gov/News/Releases/Release/Article/3659519/cdao-launches-first-dod-ai-bias-bounty-focused-on-unknown-risks-in-llms/ Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-284

The Full Desk Experience
Do my teams hate me? | Navigating the AI Revolution in Recruiting with Aaron Elder, CEO at Crelate

The Full Desk Experience

Play Episode Listen Later Apr 25, 2024 17:35


In today's episode, we delve into the intricate world of artificial intelligence and its role in the recruitment industry. Our insightful host, Kortney Harmon, Director of Industry Relations at Crelate, engages in a riveting discussion with our esteemed guest, Aaron Elder. They unravel the tension between adopting AI tools to enhance efficiency and the desire to maintain the personalized, human element that is fundamental to making exceptional hires.As the president of a recruiting firm finds themselves at a crossroads, torn between the allure of AI and the fear of falling behind competitors while also recognizing the potential risks of an over-reliance on technology, Aaron provides nuanced perspectives on how AI is transforming the recruitment landscape—and not just in terms of automating emails.With Aaron's expertise in technology, they discuss how AI should serve as a means to elevate human capabilities by identifying patterns and surfacing critical information for decisive action, cautioning against its misuse such as wrongful screening of candidates. Together, they reflect on the strategic use of AI against the backdrop of efficiency, differentiation, and ethical considerations.Kortney and Aaron also contemplate the future of AI, the implications for middle management, and the potential for AI to help filter through the noise and drive meaningful outcomes. Join us on this journey as we explore whether AI is indeed the future and how recruitment firms can smartly navigate its implementation without compromising their core values. Engage with our thought-provoking exchange on The Full Desk Experience. Don't miss the insights—subscribe wherever you listen to podcasts, and sign up for our live events. _______  Follow Crelate on LinkedIn: 

Superheroes of Science
AI Bias and Ethical Concerns

Superheroes of Science

Play Episode Listen Later Apr 4, 2024 35:18


Dr. Lindsay Weinberg is a clinical assistant professor in the John Martinson Honors College at Purdue University, and the Director of the Tech Justice Lab. Her research and teaching are at the intersection of science and technology studies, media studies, and feminist studies, with an emphasis on the social and ethical impacts of digital technology.    

Discover Daily by Perplexity
Yahoo Acquires Artifact, Amazon's Dashcart Dilemma, and the Impending AI Data Drought

Discover Daily by Perplexity

Play Episode Listen Later Apr 3, 2024 5:14 Transcription Available


In today's episode of Discover Daily, we explore three significant developments in the world of AI and tech. First, we discuss Yahoo's acquisition of Artifact, an AI-driven news startup, and how this move could revolutionize personalized content delivery across Yahoo's platforms. We also examine Amazon's surprising decision to end its "Just Walk Out" cashierless technology in grocery stores, replacing it with "Dash Carts" that require item scanning, reflecting a broader industry trend of finding the right balance between automation and human interaction in retail. Finally, we delve into the potential "data drought" that experts predict could hit the AI industry as soon as 2026. As AI models consume high-quality training data at an unprecedented rate, the shortage of diverse and ethically sourced datasets could slow the pace of AI progress. We explore the implications of this challenge and the innovative solutions being developed to address it, from algorithmic improvements to synthetic data generation and data sharing initiatives. Join us as we unpack these fascinating stories and their impact on the future of AI and tech.
For more on these stories:
Yahoo acquires Artifact
https://www.perplexity.ai/search/Yahoo-acquires-Artifact-A7BQHWVNQk.ahmBdI0vX1w
Amazon ends AI-checkout stores
https://www.perplexity.ai/search/Amazon-ends-AIcheckout-IpciSQbPQu6AbAYBFzhVpQ
2026 AI data drought
https://www.perplexity.ai/search/2026-AI-data-p5WHafatSneygKa9b6NncA
Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn

The Financial Guys
Business Migration from New York

The Financial Guys

Play Episode Listen Later Feb 23, 2024 27:30


Racism in government opportunities is a contentious issue that has sparked debates and discussions across the nation. Mike Sperrazza, drawing from his observations of Google's AI struggles and the policies that prioritize certain racial groups, argues that the exclusion of white individuals from certain opportunities is a form of discrimination. He advocates for a more inclusive and merit-based approach to government opportunities. Similarly, Mike Hoeflich, who also perceives a bias against white individuals in government opportunities, criticizes the hypocrisy of promoting diversity and inclusion for certain racial groups but not for white individuals. He calls for a significant change in government policies to address these discriminatory practices. Both Sperrazza and Hoeflich's perspectives are shaped by their belief in the importance of fairness and equality in government opportunities, regardless of race.
(00:00:51) AI Bias in Representing Diverse Populations
(00:10:18) Nvidia Stock Purchase Raises Conflict of Interest
(00:12:53) Government Officials Enriching Themselves Through Insider Trading
(00:20:12) Leadership Impact: Business Migration from New York
(00:22:48) Trailblazing Black Women Journalists Honored at Lectern

Your Undivided Attention
Why AI Bias is Existential with Dr. Joy Buolamwini

Your Undivided Attention

Play Episode Listen Later Oct 26, 2023 47:46


In this interview, Dr. Joy Buolamwini argues that algorithmic bias in AI systems poses an existential risk to marginalized people. She challenges the assumptions of tech leaders who advocate for AI "alignment" and explains why tech companies are hypocritical when it comes to addressing bias. Dr. Joy Buolamwini is the founder of the Algorithmic Justice League and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines."
Correction: Aza says that Sam Altman, the CEO of OpenAI, predicts superintelligence in four years. Altman predicts superintelligence in ten years.
RECOMMENDED MEDIA
Unmasking AI by Joy Buolamwini
"The conscience of the AI revolution" explains how we've arrived at an era of AI harms and oppression, and what we can do to avoid its pitfalls
Coded Bias
Shalini Kantayya's film explores the fallout of Dr. Joy's discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all
How I'm fighting bias in algorithms
Dr. Joy's 2016 TED Talk about her mission to fight bias in machine learning, a phenomenon she calls the "coded gaze."
RECOMMENDED YUA EPISODES
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
Protecting Our Freedom of Thought with Nita Farahany
The AI Dilemma
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_