Podcasts about explainability

  • 110 PODCASTS
  • 155 EPISODES
  • 44m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Jun 13, 2025 LATEST

Popularity by year: 2017–2024


Best podcasts about explainability

Latest podcast episodes about explainability

Software Lifecycle Stories
Interpretability and Explainability with Aruna Chakkirala

Jun 13, 2025 · 61:02


• Her early inspiration while growing up in Goa with limited exposure to career options
• Her father's intellectual influence despite personal hardships, and her shift in focus to technology
• A personal tragedy that sparked her resolve to become financially independent and to learn deeply
• The inspirational quote that shaped her mindset: “Even if your dreams haven't come true, be grateful that so haven't your nightmares.”
• Her first role at a startup, with hands-on work on networking protocols (LDAP, VPN, DNS), learning from RFCs and O'Reilly books alone (no StackOverflow!), and the importance of building deep expertise for long-term success
• Troubleshooting and systems thinking: moving from reactive fixes to logical, structured problem-solving, where depth of understanding helped with debugging and system optimization
• Her move to Yahoo, where she led Service Engineering for mobile and ads across global data centers, gained early exposure to big data and machine learning through ad recommendation systems, and built "performance and scale muscle" by working at massive scale
• Challenges of scale and performance, then vs. now: the problems remain the same, but data volumes and complexity have exploded; how modern tools (like AI/ML) can help identify relevance and anomalies in large data sets
• Designing with scale in mind: flipping the approach to think scale-first rather than POC-first, starting with a big-picture view even when building a small prototype, and considering multiple scaling dimensions: data, compute, network, and security
• Getting into ML and data science: an early spark from MOOCs, TensorFlow experiments, and statistics, followed by a transition into a data science role at Infoblox, a cybersecurity firm, focused on DNS security, anomaly detection, and threat intelligence
• Building real-world ML applications: supervised models for threat detection and storage forecasting, graph models to analyze DNS traffic patterns for anomalies, and the key challenges of managing and processing massive volumes of security data
• The data stack and what it takes to build data lakes that support ML, with emphasis on understanding the end-to-end AI pipeline
• The shift from "under the hood" ML to front-and-center GenAI, and the barriers: data readiness, ROI, explainability, and regulatory compliance
• Explainability in AI and the importance of interpreting model decisions, especially in regulated industries
• How explainability works: trade-offs between interpretable models (e.g., decision trees) and complex ones (e.g., deep learning), and techniques for local and global model understanding
• Aruna's book, Interpretability and Explainability in AI Using Python
• The world of GenAI and transformers: explainability in LLMs and GenAI, from attention weights to neuron activations; the challenge that billions of parameters make models harder to interpret; and exciting research areas such as concept tracing, gradient analysis, and neuron behavior
• GenAI agents in action: the transition from task-specific GenAI to multi-step agents, with agents acting as orchestrators of business workflows using tools plus reasoning
• The real-world impact of agents and AI in everyday life

Aruna Chakkirala is a seasoned leader with expertise in AI, data, and cloud. She is an AI Solutions Architect at Microsoft, where she was instrumental in the early adoption of Generative AI. In prior roles as a data scientist, she built models in cybersecurity and holds a patent in community detection for DNS querying. Through her two-decade career, she has developed expertise in scale, security, and strategy at organizations such as Infoblox, Yahoo, Nokia, EFI, and Verisign. Aruna has led highly successful teams and thrives on working with cutting-edge technologies. She is a frequent technical and keynote speaker, panelist, author, and active blogger. She contributes to community open groups and serves as a guest faculty member at premier academic institutes. Her book, "Interpretability and Explainability in AI Using Python," covers the taxonomy and techniques for model explanations in AI, including the latest research in LLMs. She believes that the success of real-world AI applications increasingly depends on well-defined architectures across all the domains involved. Her current interests include Generative AI, applications of LLMs and SLMs, causality, mechanistic interpretability, and explainability tools.
Her recently published book: Interpretability and Explainability in AI Using Python: Decrypt AI Decision-Making Using Interpretability and Explainability with Python to Build Reliable Machine Learning Systems, https://amzn.in/d/00dSOwA
Outside of work, she is an avid reader and enjoys creative writing. A passionate advocate for diversity and inclusion, she is actively involved in the GHCI and LeanIn communities.
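The local-versus-global distinction mentioned in these show notes can be illustrated in a few lines of Python. The sketch below is an assumed example (scikit-learn on a public dataset), not material from the episode or from Aruna's book: a shallow decision tree is globally readable as if/then rules, permutation importance gives a global view of which features the model relies on, and the decision path for one row serves as a simple local explanation.

```python
# Illustrative sketch only: global vs. local model understanding with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global understanding: a shallow tree can be printed as explicit rules.
print(export_text(tree, feature_names=list(X.columns)))

# Global importance: which features the model leans on overall.
imp = permutation_importance(tree, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")

# Local understanding: the decision path shows which splits drove one prediction.
node_indicator = tree.decision_path(X.iloc[[0]])
print("Nodes visited for sample 0:", node_indicator.indices.tolist())
```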

TechSperience
Episode 138: Elevating Patient-centered Care with Intelligent Healthcare Technology

Jun 3, 2025 · 42:36


Join Jamal Khan and Jennifer Johnson as they explore the evolving landscape of AI in healthcare, focusing on its applications, ethical considerations, data privacy, and the role of Chief AI Officers. This discussion highlights the importance of governance, patient consent, and the potential of AI to improve healthcare workflows while addressing data security challenges. Learn how to implement AI responsibly for better healthcare outcomes and operational excellence.
Speakers: Jamal Khan, Chief Growth and Innovation Officer at Connection; Jennifer Johnson, Director of Healthcare Strategy and Business Development at Connection
Show Notes: 00:00 The Evolution of AI in Healthcare 03:04 Ethics and Governance in AI Applications 06:05 Data Privacy and Security Concerns 08:49 The Role of Chief AI Officers 12:07 Patient Consent and Data Usage 14:54 AI's Impact on Healthcare Workflows 18:00 Computational Power in Health Data Analysis 20:47 Virtual Assistants in Healthcare 24:00 Clinical Trials vs. Drug Discovery 26:55 The Future of Patient Data Management 28:11 AI Adoption in Insurance Companies 33:05 Transparency and Explainability in AI 37:28 AI Use Cases in Healthcare 44:10 Cloud vs On-Prem AI Solutions 49:23 Data Orchestration in Healthcare
For more information on AI services for healthcare, visit https://www.cnxnhelix.com/healthcare.

The_Whiskey Shaman
125: Whiskey According To ChatGPT

May 17, 2025 · 63:53


I think AI has some good qualities, but does it belong in whiskey? Today we dive into the when, where, and why AI is rad or sad. Hope y'all enjoy.
Patreon.com/the_whiskeyshaman
Badmotivatorbarrels.com/shop/?aff=3
https://www.instagram.com/zsmithwhiskeyandmixology?utm_source=ig_web_button_share_sheet&igsh=ZDNlZDc0MzIxNw==
ChatGPT is a large language model developed by OpenAI. It's an AI chatbot that can understand and respond to natural language, making it useful for tasks like writing, translating, and generating text in various formats. It's built on a machine learning model called a transformer neural network and is trained on vast amounts of text data from the internet. Here's a more detailed breakdown:
Natural Language Processing (NLP): ChatGPT excels at processing and understanding human language, allowing it to engage in conversations and generate text that appears natural and coherent.
Generative AI: It's a type of generative AI, meaning it can create new content based on user prompts. This includes writing articles, poems, code, emails, and more.
Transformer Neural Network: It uses a specific type of neural network called a transformer, which is particularly well-suited for tasks involving natural language.
Vast Training Data: ChatGPT is trained on a massive amount of text data from the internet, allowing it to learn patterns and relationships in language.
Applications: Its uses are diverse, ranging from customer service and writing assistance to educational tools and content creation.
AI safety is a complex issue with both benefits and risks. While AI offers significant potential for advancements in various fields, it also presents dangers like bias, misuse, and potential existential threats if not carefully managed. Safeguards like responsible design, development, and deployment practices, along with ethical considerations, are crucial to mitigate these risks. Here's a more detailed look at the safety aspects of AI:
1. Potential Risks:
Bias: AI systems can inherit and amplify biases present in their training data, leading to unfair or discriminatory outcomes.
Misuse: AI could be used for malicious purposes, such as creating fake content, manipulating public opinion, or automating cyberattacks.
Existential Risks: Some experts fear that advanced AI could pose existential threats, potentially leading to uncontrollable systems that could harm humanity.
Lack of Transparency: Many AI systems, particularly deep learning models, can be difficult to understand, making it hard to identify and address potential problems.
Cybersecurity: AI-powered systems can be vulnerable to cyberattacks, and AI can also be used to launch more sophisticated attacks.
Environmental Impact: The development and use of AI infrastructure can have significant environmental consequences, particularly regarding energy consumption and data center emissions.
2. Mitigation Strategies and Ethical Considerations:
Responsible Design and Development: Implementing ethical guidelines and standards during the design and development of AI systems is crucial to minimize bias and ensure fairness.
Transparency and Explainability: Developing AI systems that are more transparent and explainable can help users understand how they make decisions and identify potential errors.
Human Oversight and Control: Maintaining human oversight and control over AI systems is essential to prevent unintended consequences and ensure accountability.
Data Ethics: Addressing the ethical implications of data used to train AI systems, including issues of privacy, fairness, and security, is crucial.
AI Safety Research: Investing in research focused on AI safety and security can help identify and address potential risks before they become widespread.
3. Examples of AI Safety Initiatives:
NIST AI Resource Center:
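The episode notes above mention that ChatGPT is built on a transformer neural network. As a rough, assumed-for-illustration sketch (not something from the episode), the NumPy snippet below implements scaled dot-product attention, the core operation of a transformer: each token's output is a weighted mix of every token's value vector, and those attention weights are one of the handles researchers use when trying to explain LLM behavior.

```python
# Illustrative sketch: scaled dot-product attention, the heart of a transformer.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key; the weights say how strongly each
    input token influences the output at a given position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V, weights                      # weighted mix of values

rng = np.random.default_rng(0)
shape = (4, 8)                                       # 4 tokens, 8-dim embeddings
Q, K, V = (rng.normal(size=shape) for _ in range(3))
output, attn = scaled_dot_product_attention(Q, K, V)
print(attn.round(2))   # each row sums to 1: one attention distribution per token
```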

Founder Thesis
Deep Dive: How We Got to GPT (and What's Next) with Vinay Sankarapu (Arya.ai- An Aurionpro Company)

May 2, 2025 · 153:43


"When people think of AI, they think it's probably happened in the last 10 years or 20 years, but it's a journey of 70 plus years." This quote from Vinay Sankarapu challenges the common perception of AI as a recent phenomenon. Vinay Sankarapu is the Founder & CEO of Arya.ai (an Aurionpro company), one of India's pioneering AI companies, established in 2013. An IIT Bombay alumnus, Vinay led Arya.ai to become a profitable (EBITDA positive for 3+ years) enterprise AI player focused on the BFSI sector, achieving significant scale (~₹50-100 Cr revenue range) before its acquisition by Aurionpro Solutions. He was named to the Forbes 30 Under 30 (Asia) list and now also leads AryaXAI, which focuses on making AI interpretable and safe for mission-critical applications. Key insights from the conversation:

Eye On A.I.
#247 Barr Moses: Why Reliable Data is Key to Building Good AI Systems

Apr 13, 2025 · 55:36


This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.
In this episode of Eye on AI, Craig Smith sits down with Barr Moses, Co-Founder & CEO of Monte Carlo, the pioneer of data and AI observability. Together, they explore the hidden force behind every great AI system: reliable, trustworthy data. With AI adoption soaring across industries, companies now face a critical question: can we trust the data feeding our models? Barr unpacks why data quality is more important than ever, how observability helps detect and resolve data issues, and why clean data, not access to GPT or Claude, is the real competitive moat in AI today.
What You'll Learn in This Episode:
Why access to AI models is no longer a competitive advantage
How Monte Carlo helps teams monitor complex data estates in real time
The dangers of "data hallucinations" and how to prevent them
Real-world examples of data failures and their impact on AI outputs
The difference between data observability and explainability
Why legacy methods of data review no longer work in an AI-first world
Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
(00:00) Intro (01:08) How Monte Carlo Fixed Broken Data (03:08) What Is Data & AI Observability? (05:00) Structured vs Unstructured Data Monitoring (08:48) How Monte Carlo Integrates Across Data Stacks (13:35) Why Clean Data Is the New Competitive Advantage (16:57) How Monte Carlo Uses AI Internally (19:20) 4 Failure Points: Data, Systems, Code, Models (23:08) Can Observability Detect Bias in Data? (26:15) Why Data Quality Needs a Modern Definition (29:22) Explosion of Data Tools & Monte Carlo's 50+ Integrations (33:18) Data Observability vs Explainability (36:18) Human Evaluation vs Automated Monitoring (39:23) What Monte Carlo Looks Like for Users (46:03) How Fast Can You Deploy Monte Carlo? (51:56) Why Manual Data Checks No Longer Work (53:26) The Future of AI Depends on Trustworthy Data
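For readers wondering what "data observability" looks like in practice, here is a deliberately simplified sketch. It is an assumption for illustration only, not Monte Carlo's product or API: a few automated freshness, completeness, and volume checks run over a pandas DataFrame before the data is allowed to feed a model.

```python
# Illustrative sketch only: basic data observability checks of the kind
# discussed in the episode: freshness, completeness, and volume.
from dataclasses import dataclass
from datetime import datetime, timedelta
import pandas as pd

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def observe(df: pd.DataFrame, timestamp_col: str,
            max_null_rate: float = 0.01,
            max_staleness: timedelta = timedelta(hours=24)) -> list:
    """Run simple checks; assumes naive UTC timestamps in `timestamp_col`."""
    results = []

    # Freshness: has new data landed recently?
    latest = pd.to_datetime(df[timestamp_col]).max()
    is_fresh = (datetime.utcnow() - latest.to_pydatetime()) <= max_staleness
    results.append(CheckResult("freshness", is_fresh, f"latest record at {latest}"))

    # Completeness: null rates within tolerance for every column.
    for col in df.columns:
        rate = df[col].isna().mean()
        results.append(CheckResult(f"nulls:{col}", rate <= max_null_rate,
                                   f"{rate:.1%} null"))

    # Volume: a sudden drop to zero rows is a classic silent failure.
    results.append(CheckResult("volume", len(df) > 0, f"{len(df)} rows"))
    return results
```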

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 502: Sustainable Growth with AI: Balancing Innovation with Ethical Governance

Apr 11, 2025 · 30:07


AI growth with no rules? That's not bold. It's reckless. Everyone's racing to scale AI. More data, faster tools, flashier launches. But here's what no one's saying out loud: growth without governance doesn't make you innovative. It makes you vulnerable. Ignore ethics, and you're building an empire on quicksand. In this episode, we're breaking down how to scale AI the right way, without wrecking trust, compliance, or your future. Join us live as we break down Sustainable Growth with AI: Balancing Innovation with Ethical Governance, an Everyday AI chat with Rajeev Kapur and Jordan Wilson.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Questions for Rajeev or Jordan? Go ask.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode: Balancing AI Innovation with Ethical Governance; Introduction of Rajeev Kapur and Eleven o Five Media; Rajeev Kapur's Background in AI; Companies Balancing AI Innovation and Ethics; Formation of AI Ethics Board; Data Management as Competitive Advantage; Privacy and Ethics as Product Features; Governance and Ethical Standards in AI Use; Impact of Regulatory Changes on AI Use; Deepfakes and Their Implications; Encouragement for Companies to Lead Ethically in AI
Timestamps: 00:00 Navigating AI: Innovation vs. Risks 04:00 "AI Startup's Spatial Audio Journey" 06:49 AI Ethics Oversight & Governance 10:04 Strategic AI Advisory Team Formation 15:34 AI Strategy and Governance Essentials 16:55 Global Standardization Needed for AI Policies 22:47 AI Ethics: Innovation vs. Deepfakes 25:48 "Regulate Deepfakes Like Nukes" 27:17 Leadership Vision for Future Success
Keywords: AI innovation, Ethical governance, Large language models, Data privacy, AI ethics board, AI governance, TDWI, Microsoft stack, Generative AI, AI algorithms, Spatial audio, Deep fakes, Data differentiation, Machine learning, Cyber security, Enterprise technology, Rajeev Kapur, 11:05 Media, AI safety, OpenAI, Data utilization, Ethical AI alignment, Regulatory aspect, AI models, Innovation vs. ethics, AI data privacy, Explainability, Data scientists, Third-party audits, Transparent AI usage, AI-driven growth, Monitoring feedback loops, Worst case testing, Smart regulations, Digital twins, Disinformation, AI bias mitigation, Data as new oil, Refining data, Diverse community partn
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner

The Road to Accountable AI
Medha Bankhwal and Michael Chui: Implementing AI Trust

Apr 10, 2025 · 38:45 · Transcription Available


Kevin Werbach speaks with Medha Bankhwal and Michael Chui from QuantumBlack, the AI division of the global consulting firm McKinsey. They discuss how McKinsey's AI work has evolved from strategy consulting to hands-on implementation, with AI trust now embedded throughout their client engagements. Chui highlights what makes the current AI moment transformative, while Bankhwal shares insights from McKinsey's recent AI survey of over 760 organizations across 38 countries. As they explain, trust remains a major barrier to AI adoption, although there are geographic differences in AI governance maturity. Medha Bankhwal, a graduate of Wharton's MBA program, is an Associate Partner, as well as Co-founder of McKinsey's AI Trust / Responsible AI practice. Prior to McKinsey, Medha was at Google and subsequently co-founded a digital learning not-for-profit startup. She co-leads forums for AI safety discussions for policy and tech practitioners, titled “Trustworthy AI Futures,” as well as a community of ex-Googlers dedicated to the topic of AI safety. Michael Chui is a senior fellow at QuantumBlack, AI by McKinsey. He leads research on the impact of disruptive technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as artificial intelligence, robotics and automation, the future of work, data & analytics, collaboration technologies, the Internet of Things, and biological technologies.
Episode Transcript
The State of AI: How Organizations are Rewiring to Capture Value (March 12, 2025)
Superagency in the workplace: Empowering people to unlock AI's full potential (January 28, 2025)
Building AI Trust: The Key Role of Explainability (November 26, 2024)
McKinsey Responsible AI Principles

The Tech Blog Writer Podcast
3228: Thoughtworks on AI Agents, Explainability, and What's Next

Apr 2, 2025 · 38:40


What happens when the hype around generative AI starts to mature, and businesses begin asking harder questions about performance, risk, and long-term value? In today's episode, I'm joined by Mike Mason, Chief AI Officer at Thoughtworks, to explore how 2025 is shaping up across the enterprise AI landscape—from the rise of intelligent agents to the growing traction of small, nimble models that prioritize security and specificity. Mike brings a deep, practical perspective on the evolution of AI inside complex organizations. He unpacks how AI agents are moving well beyond basic chatbots and starting to integrate into actual business workflows—performing as teammates that can reason, adapt, and even collaborate with other agents. We dig into examples like Klarna's workforce transformation and examine how this shift could play out across customer service, internal ops, and software development. We also look at what's fueling the boom in open source AI and how companies are navigating the balance between transparency, IP protection, and regulatory readiness. Mike shares why some financial services firms are turning to in-house fine-tuned models for greater control, and how open-weight and fully open-source models are starting to gain real ground. Another key theme is the momentum behind small language models. Mike explains why bigger isn't always better—especially when it comes to data privacy, edge deployment, and resource efficiency. He outlines where SLMs can outperform their larger counterparts and what that means for companies optimizing for security and speed rather than brute force compute. We also discuss Thoughtworks' forthcoming global survey, which reveals a growing divide in generative AI adoption. While mature players are building in bias detection and robust compliance frameworks, newer entrants are leaning toward fast operational gains and interpretability. This gap is shaping how GenAI projects are prioritized across industries and geographies, and Mike offers his take on how leaders can navigate both speed and safety. So, what role will explainability, regulation, and open ecosystems play in shaping the AI tools of tomorrow—and what should business and tech leaders be planning for now? Let's find out in this wide-ranging conversation with Thoughtworks.

Life Sciences 360
Digital Health Expert: THIS is the No.1 Roadblock to Healthcare AI Adoption

Apr 2, 2025 · 35:12 · Transcription Available


Healthcare AI adoption is transforming the way we address risk, confidentiality, and patient care. In this episode, RJ Kedziora, co-founder of Estenda Solutions, talks about the practical steps to safely integrate AI into clinical workflows. Learn how to manage data privacy, mitigate algorithmic bias, and keep a human in the loop to prevent misdiagnoses. Discover real-world strategies for using AI ethically, from ambient listening to second-opinion checks, and why it's irresponsible not to harness AI's potential. The discussion also highlights how AI can enhance the roles of healthcare professionals, ultimately improving patient outcomes.

The Ravit Show
AI Transformation, Data Quality and Explainability

Jan 27, 2025 · 5:50


Excited to have hosted Brendan Grady, General Manager of the Analytics Business Unit at Qlik, on The Ravit Show at AWS re:Invent! We had a fascinating discussion about how generative AI is transforming the analytics space and shaping user expectations for data-driven decision-making. Brendan shared his expertise on navigating AI adoption and ensuring systems deliver both quality and explainability. Key topics we explored: -- How generative AI is redefining user experiences in data analysis -- Strategies for organizations to select the right AI solutions -- The importance of building systems with robust data quality and explainability to avoid unexplainable recommendations that harm decision-making -- Insights into generational differences in technology fluency and how it impacts analytics adoption -- Real-world examples of AI missteps and how they could have been prevented This was an insightful conversation on the opportunities and challenges of AI in analytics. #data #ai #awsreinvent #awsreinvent2024 #reinvent2024 #qlik #theravitshow

Identity At The Center
#327 - Sponsor Spotlight - Andromeda Security

Jan 22, 2025 · 58:57


This episode is sponsored by Andromeda Security. Learn more at https://www.andromedasecurity.com/idac⁠ Join Jeff and Jim on the Identity at the Center podcast as they chat with Ashish Shah, co-founder and Chief Product Officer of Andromeda Security. In this sponsored episode, Ashish dives deep into the importance of solving identity security problems, especially in cloud and SaaS environments. He explains how Andromeda's AI-powered platform focuses on both human and non-human identities, offering use case-driven solutions for security maturity. The discussion covers challenges, AI and machine learning applications, and practical insights into permissions management, risk scoring, just-in-time access, and more. Stay tuned for interesting takes on identity security and some fun recommendations for your reading/listening list. Chapters 00:00 Introduction to Identity as a Data Problem 00:41 Overview of Andromeda's Capabilities 01:27 Welcome to the Identity at the Center Podcast 02:03 Meet Ashish Shah, Co-Founder of Andromeda 02:37 The Genesis of Andromeda 03:33 Addressing Identity Security Challenges 05:29 Andromeda's Approach to Identity Security 09:44 Measuring Success with Andromeda 12:21 Andromeda's Market Position and Ideal Customers 18:35 The Rise of Non-Human Identities 28:42 Understanding Identity and Accounts in AWS 28:54 The Concept of Incarnations in Identity Management 29:42 Human and Non-Human Identities 32:13 Challenges in Authorization and Access Control 32:44 Implementing Zero Trust and Least Privilege 35:10 Role of AI and Machine Learning in Identity Management 36:21 Risk Scoring and Behavioral Analysis 39:04 Customer Data and Model Training 41:08 Explainability and Security of AI Models 46:14 Customer Influence on Model Tuning 49:03 Andromeda's Offer and Final Thoughts 51:34 Book Recommendations and Closing Remarks Connect with Ashish: https://www.linkedin.com/in/ashishbshah/ Learn more about Andromeda: https://www.andromedasecurity.com/idac⁠ Connect with us on LinkedIn: Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/ Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/ Visit the show on the web at idacpodcast.com and watch at https://www.youtube.com/@idacpodcast Keywords: Identity security, IAM, cybersecurity, artificial intelligence, AI, machine learning, ML, non-human identities, NHI, just-in-time access, JIT, IGA, privileged access management, PAM, identity threat detection and response, ITDR, cloud security, SaaS security, Andromeda Security, Ashish Shah, IDAC, Identity at the Center, Jim McDonald, Jeff Steadman

The Net Promoter System Podcast – Customer Experience Insights from Loyalty Leaders
Ep. 242: The Black Box Fallacy: Why Wells Fargo Doesn't Trust an AI It Can't Explain

Jan 16, 2025 · 37:00


Episode 242: Wells Fargo has established a clear position on artificial intelligence: If you can't explain how an AI model works, you shouldn't deploy it. This stance challenges the common assumption that black box algorithms are acceptable costs of advanced AI capabilities. In this episode, Kunal Madhok, Head of Data, Analytics, and AI for Wells Fargo's consumer business, reveals how the bank has operationalized this philosophy to enhance customer experiences while maintaining rigorous standards for model explainability and ethical deployment. The stakes for financial institutions are substantial. As banking becomes increasingly digitized, organizations must balance sophisticated personalization with transparency and trust. Wells Fargo's approach demonstrates that explainability isn't merely about regulatory compliance—it's a fundamental driver of business value and customer trust. Through rigorous review processes and a commitment to "plain English" explanations of algorithmic decisions, Wells Fargo ensures its models remain logical, aligned with business objectives, and comprehensible to stakeholders at all levels. This transparency serves multiple purposes: avoiding unintended consequences, maintaining human oversight of automated systems, and ensuring data-driven decisions actually drive business value. Discover how Wells Fargo's insistence on explainable AI is reshaping everything from product recommendations to customer service, while setting new standards for responsible innovation in financial services. Guest: Kunal Madhok, EVP, Head of Data, Analytics and AI, Wells Fargo Host: Rob Markey, Partner, Bain & Company Give Us Feedback: We'd love to hear from you. Help us enhance your podcast experience by providing feedback here in our listener survey: http://bit.ly/CCPodcastFeedback Want to get in touch? Send a note to host Rob Markey: https://www.robmarkey.com/contact-rob Time-stamped List of Topics Covered: [00:04:13] Integrating data science into business decisions and ensuring data-driven insights [00:07:29] Kunal's vision for personalization and delivering relevant, value-based products [00:09:22] Wells Fargo's ability to leverage life events and transactional data to better serve customers [00:11:05] Democratizing financial advice and offering tailored advice based on customer needs [00:16:53] Using live experimentation and AI models to tailor product offers and marketing [00:19:17] Strategic investment decisions for new product launches and capacity reservations using simulations [00:22:45] Explainability, and what this looks like in action [00:37:22] Strategies around servicing interactions and the key challenges around this work that demand solving Time-stamped Notable Quotes: [00:00:27] “When a customer walks into a bank, they're expecting you to know them.” [00:04:19] “Part of my role is to make sure we use data science in every business decision we make as an organization. And what that means is not just the quality and the fidelity of data, but also that decisions are made not based on intuition, but on real data outcomes.” 00:07:29] "Good personalization is: We'll give you the right product based on your interests and your needs, and we'll deliver it in a way that you want. Which is the right channel, the right offers.” [00:12:17] “If we can add value to our customers, they expect it. I'm sure when you turn on [a streaming service] today, it gives you a whole bunch of movies, shows to watch, curated just for you, based on your past history. 
And if they do it well, you actually like that, because you know the next five things to watch. And while that's in entertainment—and financial products are a very different space—that's the bar our customers are expecting us to meet.” [00:22:45] “As we train our talent, we've put a high bar on explainability of the work they do.”

SEOPRESSO PODCAST
K.I. entlang der Marketing-Wertschöpfungskette mit Jan Schoenmakers | Ep.184.

Dec 31, 2024 · 38:55


In this discussion, Jan Schoenmakers (MD, Hase & Igel) and Bjoern reflect on the role of AI in marketing. They emphasize the importance of in-depth learning and exchange at events that allow participants to develop professionally. Jan explains how AI can optimize the marketing value chain by making processes more efficient and improving the quality of the results. At the same time, they address the challenges of implementing AI solutions, particularly with regard to data quality and trust. The discussion closes with an outlook on the future of AI in marketing and the need to be not just consumers of the technology but active shapers of it. The conversation also highlights the importance of data quality and ROI in marketing, and the need for companies to dive deeper into their use of AI. AI will revolutionize the future of marketing, which raises the demands placed on marketers. The need to rethink agency pricing models to keep pace with the changes brought by AI is also emphasized.
Takeaways:
The OMX conference offers an in-depth learning experience.
AI can significantly increase efficiency along the marketing value chain.
It is important to choose the right type of AI for specific problems.
Analytical AI and generative AI should be combined.
AI can dramatically increase the speed of processes.
Data quality challenges are decisive for the success of AI.
A purely consumer mindset toward AI can be dangerous.
It is important to understand how AI works.
The future of AI in marketing requires active participation in shaping it.
AI can help develop better forecasts and strategies.
Data quality is decisive for ROI.
Companies must learn more deeply in order to use AI effectively.
Explainability is essential for the acceptance of AI.
AI will revolutionize marketing and make it more efficient.
Marketing jobs will become more demanding and better paid.
Authenticity will be the premium segment of the future.
AI is a supercharger for marketing processes.
Marketers need more technical understanding and strategic thinking.
Agencies must switch to fixed pricing.
The future belongs to real people in marketing.
Chapters:
00:00 Introduction and impressions of the event
05:57 Optimization through AI: processes and efficiency
11:54 The future of AI in marketing and its impact
18:49 Data quality and ROI in marketing
26:11 The future of marketing through AI
36:29 Pricing model changes in the agency business

Future Finance
The Impact of AI on Finance and Business Decision Making with Jon Brewton

Dec 25, 2024 · 49:48


In this episode of Future Finance, hosts Paul Barnhurst and Glenn Hopper discuss the intersection of artificial intelligence (AI) and finance. They explore how AI tools like large language models (LLMs) are transforming data analytics and decision-making processes. They also examine the broader implications of AI advancements in other high-stakes industries such as energy, defense, and healthcare.
Jon Brewton is the founder and CEO of Data Squared. He brings extensive expertise in machine learning, AI solutions, and digital transformation. With a career spanning roles at BP, Chevron, and military service, Jon has spearheaded projects achieving significant operational efficiencies. At Data Squared, he focuses on creating reliable, traceable, and explainable AI solutions for critical sectors.
In this episode, you will learn:
Microsoft's advancements in LLMs for better integration with structured data
The five levels of AI capabilities outlined by OpenAI and what they mean
Why traceability and explainability are essential for deploying AI in finance
Innovative applications of Knowledge Graphs and RAG (Retrieval-Augmented Generation) technology
Strategies to mitigate AI hallucinations and enhance reliability in decision-making processes
Jon Brewton discusses the transformative role of AI in the financial sector, including advancements in spreadsheet-specific LLMs, the power of knowledge graphs, and the critical importance of traceability and explainability in AI deployment.
Follow Jon:
LinkedIn: https://www.linkedin.com/in/jon-brewton-datasquared/
Website: https://www.data2.ai/
Join hosts Glenn and Paul as they unravel the complexities of AI in finance:
Follow Glenn: LinkedIn: https://www.linkedin.com/in/gbhopperiii
Follow Paul: LinkedIn: https://www.linkedin.com/in/thefpandaguy
Follow QFlow.AI: Website - https://bit.ly/4i1Ekjg
Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.
Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.
In Today's Episode:
[01:50] - Advancements in Spreadsheet LLMs
[04:54] - OpenAI's Roadmap to AGI
[13:49] - Jon Brewton Introduction
[18:45] - Importance of Traceability and Explainability
[25:40] - Knowledge Graphs and Financial Data
[34:56] - Addressing AI Hallucinations
[42:01] - Advice for Finance Leaders
[45:48] - Jon's Unique Experiences
[48:53] - Closing Remarks
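The pairing of knowledge graphs with traceability that Jon describes can be illustrated with a toy example. The sketch below is an assumption (hypothetical facts and names such as AcmeCo, not Data Squared's implementation): when facts are stored as subject-predicate-object triples with provenance, every answer can cite its source, and the system can decline to answer rather than hallucinate.

```python
# Illustrative sketch: a tiny knowledge graph whose answers stay traceable.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str
    source: str  # where the fact came from, e.g. a filing or report

GRAPH = [
    Triple("AcmeCo", "reported_revenue", "12.4M USD", "Q3-2024 10-Q, p. 4"),
    Triple("AcmeCo", "operates_in", "energy sector", "2024 annual report"),
]

def answer(subject: str, predicate: str) -> str:
    """Answer only from the graph; every claim carries its citation."""
    matches = [t for t in GRAPH if t.subject == subject and t.predicate == predicate]
    if not matches:
        return "No supported answer."  # refuse rather than hallucinate
    return "; ".join(f"{t.obj} (source: {t.source})" for t in matches)

print(answer("AcmeCo", "reported_revenue"))
# -> 12.4M USD (source: Q3-2024 10-Q, p. 4)
```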

Oracle University Podcast
Oracle AI in Fusion Cloud Human Capital Management

Nov 19, 2024 · 31:04


In this special episode of the Oracle University Podcast, Lois Houston and Nikita Abraham, along with Principal HCM Instructor Jeff Schuster, delve into the intersection of HCM and AI, exploring the practical applications and implications of this technology in human resources. Jeff shares his insights on bias and fairness, the importance of human involvement, and the need for explainability and transparency in AI systems. The discussion also covers the various AI features embedded in HCM and their impact on talent acquisition, performance management, and succession planning.  Oracle AI in Fusion Cloud Human Capital Management: https://mylearn.oracle.com/ou/learning-path/oracle-ai-in-fusion-cloud-human-capital-management-hcm/136722 Oracle Fusion Cloud HCM: Dynamic Skills: https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-hcm-dynamic-skills/116654/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!  00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs here at Oracle University, and with me, is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week's conversation was all about Oracle Database 23ai backup and recovery, where we dove into instance recovery and effective recovery strategies. Today's episode is a really special one, isn't it, Lois? 00:53 Lois: It is, indeed, Niki. Of course, all of our AI episodes are special. But today, we have our friend and colleague Jeff Schuster with us. I think our listeners are really going to enjoy what Jeff has to share with us. Nikita: Yeah definitely! Jeff is a Principal HCM Instructor at Oracle University. He recently put together this really fantastic course on MyLearn, all about the intersection of HCM and AI, and that's what we want to pick his brain about today. Hi Jeff! We're so excited to have you here.  01:22 Jeff: Hey Niki! Hi Lois! I feel special already. Thanks you guys so much for having me. Nikita: You've had a couple of busy months, haven't you?  01:29 Jeff: I have! It's been a busy couple of months with live classes. I try and do one on AI in HCM at least once a month or so so that we can keep up with the latest/greatest stuff in that area. And I also got to spend a few days at Cloud World teaching a few live classes (about artificial intelligence in HCM, as a matter of fact) and meeting our customers and partners. So yeah, absolutely great week. A good time was had by me.  01:55 Lois: I'm sure. Cloud World is such a great experience. And just to clarify, do you think our customers and partners also had a good time, Jeff? It wasn't just you, right? Jeff: Haha! I don't think it was just me, Lois. But, you know, HCM is always a big deal, and now with all the embedded AI functionality, it really wasn't hard to find people who wanted to spend a little extra time talking about AI in the context of our HCM apps. So, there are more than 30 separate AI-powered features in HCM. 
AI features for candidates to find the right jobs; for hiring managers to find the right candidates; skills, talent, performance management, succession planning— all of it is there and it really covers everything across the Attract/Grow/Keep buckets of the things that HR professionals do for a living. So, anyway, yeah, lots to talk about with a lot of people! There's the functional part that people want to know about—what are these features and how do they work? But obviously, AI carries with it all this cultural significance these days. There's so much uncertainty that comes from this pace of development in that area. So in fact, my Cloud World talk always starts with this really silly intro that we put in place just to knock down that anxiety and get to the more practical, functional stuff. 03:11 Nikita: Ok, we're going to need to discuss the functional stuff, but I feel like we're getting a raw deal if we don't also get that silly intro. Lois: She makes a really good point.  Jeff: Hahaha! Alright, fair enough. Ok, but you guys are gonna have to imagine I've got a microphone and a big room and a lot of echo. AI is everywhere. In your home. In your office. In your homie's home office. 03:39 Lois: I feel like I just watched the intro of a sci-fi movie. Jeff: Yeah. I'm not sure it's one I'd watch, but I think more importantly it's a good way to get into discussing some of the overarching things we need to know about AI and Oracle's approach before we dive into the specific features, so you know, those features will make more sense when we get there?  03:59  Nikita: What are these “overarching” things?  Jeff: Well, the things we work on anytime we're touching AI at Oracle. So, you know, it starts with things like Bias and Fairness. We usually end up in a pretty great conversation about things like how we avoid bias on the front end by making sure we don't ingest things like bias-generating content, which is to say data that doesn't necessarily represent bias by itself, but could be misused. And that pretty naturally leads us into a talk about guardrails. Nikita: Guardrails? Jeff:  Yeah, you can think of those as checkpoints. So, we've got rules about ingestion and bias. And if we check the output coming out of the LLM to ensure it complied with the bias and fairness rules, that's a guardrail. So, we do that. And we do it again on the apps side. And so that's to say, even though it's already been checked on the AI side, before we bring the output into the HCM app, it's checked again. So another guardrail.  04:58 Lois: How effective is that? The guardrails, and not taking in data that's flagged as bias-generating? Jeff: Well, I'll say this: It's both surprisingly good, and also nowhere near good enough.  Lois: Ok, that's as clear as mud. You want to elaborate on that?  Jeff: Haha! I think all it means is that approach does a great job, but our second point in the whole “standards” discussion is about the significance of having a human in the loop. Sometimes more than one, but the point here is that, particularly in HCM, where we're handling some really important and sensitive data, and we're introducing really powerful technology, the H in HCM gets even more important. So, throughout the HCM AI course, we talk about opportunities to have a human in the loop. And it's not just for reviewing things. It's about having the AI make suggestions, and not decisions, for example. And that's something we always have a human in the loop for all the time. 
In fact, when I started teaching AI for HCM, I always said that I like to think of it is as a great big brain, without any hands.  06:00 Nikita: So, we're not talking about replacing humans in HCM with AI.                                                                         Jeff: No, but we're definitely talking about changing what the humans do and why it's more important than ever what the humans do. So, think of it this way, we can have our embedded AI generate this amazing content, or create really useful predictions, whatever it is that we need. We can use whatever tools we want to get there, but we can still expect people to ask us, “Where did that come from?” or “Does this account for [whatever]?”. So we still have to be able to answer that. So that's another thing we talk about as kind of an overarching important concept: Explainability and Transparency. 06:41 Nikita: I'm assuming that's the part about showing our work, right? Explaining what's being considered, how it's being processed, and what it is that you're getting back. Jeff: That's exactly it. So we like to have that discussion up front, even before we get to things like Gen and Non-Gen AI, because it's great context to have in mind when you start thinking about the technology. Whenever we're looking at the tech or the features, we're always thinking about whether people are appropriately involved, and whether people can understand the AI product as well as they need to.  07:11 Lois: You mentioned Gen and Non-Gen AI. I've also heard people use the term “Classic AI.” And lately, a lot more about RAG and Agents. When you're teaching the course, does everybody manage to keep all the terminology straight? Jeff: Yeah, people usually do a great job with this. I think the trick is, you have to know that you need to know it, if that makes sense.  Lois: I think so, but why don't you spell it out for us. Jeff: Well, the temptation is sometimes to leave that stuff to the implementers or product developers, who we know need to have a deep understanding of all of that. But I think what we've learned is, especially because of all the functional implications, practitioners, product owners, everybody needs to know it too. If for no other reason so they can have more productive conversations with their implementers. You need to know that Classic or Non-Generative AI leverages machine learning, and that that's all you need in order to do some incredibly powerful things like predictions and matching. So in HCM, we're talking about things like predicting time to hire, identifying suggested candidates for job openings, finding candidates similar to ones you already like, suggesting career paths for employees, and finding recommended successors. All really powerful matching stuff. And all of that stuff uses machine learning and it's certainly AI, but none of that uses Generative AI to do that because it doesn't need to. 08:38 Nikita: So how does that fit in with all the hype we've been hearing for a long time now about Gen AI and how it's such a transformative technology that's going to be more impactful than anything else? Jeff: Yeah, and that can be true too. And this is what we really lean into when we do the AI in HCM course live. It's much more of a “right AI for the right job” kind of proposition. Lois: So, just like you wouldn't use a shovel to mix a cake. Use the right tool for the job. I think I've got it. So, the Classic AI is what's driving those kinds of features in HCM? The matching and recommendations?  Jeff: Exactly right. 
And where we need generative content, that's where we add on the large language model capability. With LLMs, we get the ability to do natural language processing. So it makes sense that that's the technology we'd use for tasks like “write me a job description” or “write me performance development tips for my employee”. 09:33 Nikita: Ok, so how does that fit in with what Lois was asking about RAG and Agents? Is that something people care about, or need to? Jeff: I think it's easiest to think about those as the “what's next” pieces, at least as it relates to the embedded AI. They kind of deal with the inherent limitations of Gen and Non-Gen components. So, RAG, for example - I know you guys know, but your listeners might not...so what's RAG stand for? Lois & Nikita: Retrieval. Augmented. Generation. Jeff: Hahaha! Exactly. Obviously. But I think everything an HCM person needs to know about that is in the name. So for me, it's easiest to read that one backwards. Retrieval Augmented Generation. Well, the Generation just means it's more generative AI. Augmented means it's supplementing the existing AI. And Retrieval just tells you that that's how it's doing it. It's going out and fetching something it didn't already have in order to complete the operation. 10:31 Lois: And this helps with those limitations you mentioned? Nikita: Yeah, and what are they anyway?  Jeff: I think an example most people are familiar with is that large language models are trained on this huge set of information. To a certain point. So that model is trained right up to the point where it stopped getting trained. So if you're talking about interacting with ChatGPT, as an example, it'll blow your doors off right up until you get to about October of 2023 and then, it just hasn't been trained on things after that. So, if you wanted to have a conversation about something that happened after that, it would need to go out and retrieve the information that it needed. For us in HCM, what that means is taking the large language model that you get with Oracle, and using retrieval to augment the AI generation for the things that the large language model wouldn't have had.  11:22 Nikita: So, things that happened after the model was trained? Company-specific data? What kind of augmenting are you talking about? Jeff: It's all of that. All those things happen and it's anything that might be useful, but it's outside the LLM's existing scope. So, let's do an example. Let's say you and Lois are in the market to hire someone. You're looking for a Junior Podcast Assistant. We'd like the AI in HCM to help, and in order to do that, it would be great if it could not just generate a generic job description for the posting, but it could really make it specific to Oracle. Even better, to Oracle University.  So, you'd need the AI to know a few more things in order to make that happen. If it knows the job level, and the department, and the organization—already the job posting description gets a lot better. So what other things do you think it might need to know? 12:13 Lois: Umm I'm thinking…does it need to account for our previous hiring decisions? Can it inform that at all? Jeff: Yes! That's actually a key one. If the AI is aware not only of all the vacancies and all of the transactional stuff that goes along with it (like you know who posted it, what's its metadata, what business group it was in, and all that stuff)...but it also knows who we hired, that's huge. 
So if we put all that together, we can start doing the really cool stuff—like suggesting candidates based not only on their apparent match on skills and qualifications, but also based on folks that we've hired for similar positions. We know how long it took to make those hires from requisition open to the employee's first start date. So we can also do things like predicting time to hire for each vacancy we have with a lot more accuracy. So now all of a sudden, we're not just doing recruiting, but we have a system that accounts for “how we do it around here,” if that makes any sense.  But the point is, it's the augmented data, it's that kind of training that we do throughout ingestion, going out to other sources for newer or better information, whatever it is we need. The ability to include it alongside everything that's already in the LLM, that's a huge deal.  13:31  Nikita: Ok, so I think the only one we didn't get to was Agents. Jeff: Yeah, so this one is maybe a little less relevant in HCM—for now anyway. But it's something to keep an eye on. Because remember earlier when I described our AI as having a great big brain but no hands?  Lois: Yeah... Jeff: Well, agents are a way of giving it hands. At least for a very well-defined, limited set of purposes. So routine and repetitive tasks. And for obvious reasons, in the HCM space, that causes some concerns. You don't want, for example, your AI moving people forward in the recruiting process or changing their status to “not considered” all by itself. So going forward, this is going to be a balancing act. When we ask the same thing of the AI over and over again, there comes a point where it makes sense to kind of “save” that ask. When, for example, we get the “compare a candidate profile to a job vacancy” results and we got it working just right, we can create an agent. And just that one AI call that specializes in getting that analysis right. It does the analysis, it hands it back to the LLM, and when the human has had what they need to make sure they get what they need to make a decision out of it, you've got automation on one hand and human hands on the other...hand. 14:56 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like large language models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com to find out more. 15:26 Nikita: Welcome back! Jeff, you've mentioned the “Time to Hire” feature a few times? Is that a favorite with people who take your classes? Jeff: The recruiting folks definitely seem to enjoy it, but I think it's just a great example for a couple of reasons. First, it's really powerful non-generative AI. So it helps emphasize the point around the right AI for the right job. And if we're talking about things in chronological order, it's something that shows up really early in the hire-to-retire cycle. And, you know, just between us learning nerds, I like to use Time to Hire as an early example because it gets folks in the habit of working through some use cases. You don't really know if a feature is going to get you what you need until you've done some of that. So, for example, if I tell you that Time to Hire produces an estimated number of days to your first hire. And you're still Lois, and you're still Niki, and you're hiring for a Junior Podcast Assistant. So why do you care about time to hire? 
And I'm asking you for real—What would you do with that prediction if you had it?  16:29 Nikita: I guess I'd know how long it is before I can expect help to arrive, and I could plan my work accordingly. Jeff: Absolutely. What else. What could you do with a prediction for Time to Hire? Lois: Think about coverage? Jeff: Yeah! Exactly the word I was looking for. Say more about that.  Lois: Well, if I know it's gonna be three months before our new assistant starts, I might be able to plan for some temporary coverage for that work. But if I had a prediction that said it's only going to be two weeks before a new hire could start, it probably wouldn't be worth arranging temporary coverage. Niki can hold things down for a couple of weeks. Jeff: See, I'm positive she could! That's absolutely perfect! And I think that's all you really need to have in terms of prerequisites to understand any of the AI features in HCM. When you know what you might want to do with it, like predicting the need for temp cover, and you've got everything we talked about in the foundation part of the course—the Gen and the Classic, all that stuff, you can look at a feature like Time to Hire and then you can probably pick that up in 30 seconds. 17:29 Nikita: Can we try it? Jeff: Sure! I mean, you know, we're not looking at screens for this conversation, but we can absolutely try it. You're a recruiter. If I tell you that Time to Hire is a feature that you run into on the job requisition and it shows you just a few editable fields, and then of course, the prediction of the number of days to hire—tell me how you think that feature is going to work when you get there. Lois: So, what are the fields? And does it matter? Jeff: Probably not really, but of course you can ask. So, let me tell you. Ready? The fields—they are these. Requisition Title, Location, and Education Level.  Nikita: Ok, well, I have to assume that as I change those things… like from a Junior Podcast Assistant to a Senior Podcast Assistant, or change the location from Redwood Shores to Detroit, or change the required education, the time to hire is going to change, right?  Jeff: 100%, exactly. And it does it in real time as you make those changes to those values. So when you pick a new location, you immediately get a new number of days, so it really is a useful tool. But how does it work? Well, we know it's using a few fields from the job requisition, but that's not enough. Besides those fields, what else would you need in order to make this prediction work? 18:43 Lois: The part where it translates to a number of days. So, this is based on our historic hiring data? How long it took us to hire a podcast assistant the last time? Jeff: Yep! And now you have everything you need. We call that “historic data from our company” bit “ingestion,” by the way. And there's always a really interesting discussion around that when it comes up in the course. But it's the process we use to bring in the HCM data to the AI so it can be considered or predictions exactly like this. Lois: So it's the HCM data making the AI smarter and more powerful. Nikita: And tailored. Jeff: Exactly, it's all of that. And obviously, the HCM is better because we've given it the AI. But the AI is also better because it has the HCM in it. But look, I was able to give you a quick description of Time to Hire, and you were able to tell me what it does, which data it uses, and how it works in just a few seconds. So, that's kind of the goal when we teach this stuff. 
It's getting everybody ready to be productive from moment #1 because what is it and how does it work stuff is already out of the way, you know?  19:52 Lois: I do know! Nikita: Can we try it with another one? Jeff: Sure! How about we do...Suggested Candidates. Lois: And you're going to tell us what we get on the screen, and we have to tell you how it works, right? Jeff: Yeah, yeah, exactly. Ok—Suggested Candidates. You're a recruiter or a hiring manager. You guys are still looking for your Junior Podcast Assistant. On the requisition, you've got a section called Suggested Candidates. And you see the candidate's name and some scores. Those scores are for profile match, skills match, experience match. And there's also an overall match score, and the highest rated people you notice are sorted to the top of the list. So, you with me so far?  Lois: Yes! Jeff: So you already know that it's suggesting candidates. But if you care about explainability and transparency like we talked about at the start, then you also care about where these suggested candidates came from. So let's see if we can make progress against that. Let's think about those match scores. What would you need in order to come up with match scores like that? 20:54 Nikita: Tell me if I'm oversimplifying this, but everything about the job on the requisition, and everything about the candidate? Their skills and experience? Jeff: Yeah, that's actually simplified pretty perfectly. So in HCM, the candidate profile has their skills and experience, and the req profile has the req requirements.  Lois: So we're comparing the elements of the job profile and the person/candidate profile. And they're weighted, I assume? Jeff: That's exactly how it works. See, 30 seconds and you guys are nailing these! In fairness, when we discuss these things in the course, we go into more detail. And I think it's helpful for HCM practitioners to know which data from the person and the job profiles is being considered (and sometimes just as important, which is not being considered). And don't forget we're also considering our ingested data. Our previously selected candidates. 21:45 Lois: Jeff, can I change the weighting? If I care more about skills than experience or education, can I adjust the weighting and have it re-sort the candidates? Jeff: Super important question. So let me give you the answer first, which is “no.” But because it's important, I want to tell you more. This is a discussion we have in the class around Oracle's Embedded vs. Custom AI. And they're both really important offerings. With Embedded, what we're talking about are the features that come in HCM like any other feature. They might have some enablement steps like profile options, and there's an activation panel. But essentially, that's it. There's no inspection panel for you to open up and start sticking your screwdriver in there and making changes. Believe it or not, that's a big advantage with Embedded AI, if you ask me anyway.  Nikita: It's an advantage to not be able to configure it? Jeff: In this context, I think you can say that it is. You know, we talk about the advantages about the baked-in, Embedded AI in this course, but one of the key things is that it's pre-built and pre-tested. And the big one: that it's ready to use on day one. But one little change in a prompt can have a pretty big butterfly effect across all of your results. So, Oracle provides the Embedded AI because we know it works because we've already tested it, and it's, therefore, ready on day one. 
And I think that story maybe changes a little bit when you open up the inspection panel and bust out that screwdriver. Now you're signing up to be a test pilot. And that's just fundamentally different than “pre-built and ready on day one.” Not that it's bad to want configuration. 23:24 Lois: That's what the Custom AI path and OCI are about though, right? For when customers have hyper-specific needs outside of Oracle's business processes within the apps, or for when that kind of tuning is really required. And your AI for HCM course—that focuses on the Embedded AI instead of Custom, yes? Jeff: That is exactly it, yes. Nikita: You said there are about 30 of these AI features across HCM. So, when you teach the course, do you go through all of them or are there favorites? Ones that people want to spend more time on so you focus on those? Jeff: The professional part of me wants to tell you that we do try to cover all of them, because that explainability and transparency business we talked about at the beginning. That's for real, so I want our customers to have that for the whole scope.  24:12  Nikita: The professional part? What's the other part?  Jeff: I guess that's the part that says sure, we need to hit all of them. But some of them are just inherently more fun to work on. So, it's usually the learners who drive that in the live classes when they get into something, that's where we spend the most time. So, I have my favorites too. The learners have their favorites. And we spend time where it's everybody's favorite. Lois: Like where? Jeff: Ok, so one is far from the most complex one, but I think it's really elegant in its simplicity. And it's the Celebrate feature, where we do employee recognition. There's an AI Assist available there. So when it's time to recognize a colleague, you just need to enter the headline or the title, and the AI takes it from there and just writes up the recognition. 24:56 Lois: What about that makes it a good example, Jeff? You said it's elegant. What do you mean?  Jeff: I think it's a few things. So, start with the prompt. It's just the one line—just the headline. And that's your one input. So, type in the headline, get the recognition below. It's a great demonstration of not just the simplicity, but the power we get out of that simplicity. I always ask it to recognize my employees for implementing AI features in Oracle HCM, just to see what it comes up with. When it tells the employee that they're helping the company by automating routine tasks, bringing efficiency to the HR department, and then launches into specific examples of how AI features help in HCM, it really is pretty incredible. So, it's a simple demo, but it explains a lot about how the Gen AI works. Lois: That's really cool. 25:45 Nikita: So this one is generative AI. It's using the large language model to create the recognition based on the prompt, which is basically just whatever you entered in the headline. But how does that help explain how Gen AI works in HCM? Jeff: Well, let's take our simple prompt for example. There's a lot happening behind the scenes. It's taking our prompt, it's doing its LLM thing, but before it's done, it's creating the results in a very specific way. An employee recognition reads really differently than a job description. So, I usually describe this as the hidden part of our prompt. The visible part is what we typed. But it needs to know things like our desired output format. 
Make sure to use the person's name, summarize the benefits, and be sure to thank them for their contribution, that kind of stuff. So, those things are essentially hard-coded into the page. And that's to say, this is another area where we don't get an inspection panel that lets us go in and tweak the prompt.  26:42 Nikita: And that's generally how generative AI works? Jeff: Pretty much. Wherever you see an AI Assist button in HCM, that's more or less what's going on. And so when you get to some of the other more complex features, it's helpful to know that that is what's going on.  Lois: Like where? Jeff: Well, it works that way for the About Me part of your employee profile, for goal creation in performance, and I think a really great example is in performance, where managers are providing the competency development tips. So the prompt there is a little more complex there because it involves the employee's proficiency rating instead of free text. But still, pretty straightforward. You're gonna click AI Assist and it's gonna generate all the development tips for any specific competency listed for that employee. Good development tips. Five of them. Nicely formatted with bullet points. And these aren't random words assembled by an AI. So they conform to best practices in the development of competencies. So, something is telling the LLM to give us results that are that good, in that particular way.  So, it's just another good example of the work AI is doing while protected behind the inspection panel that doesn't exist. So, the coding of that page, in combination with what the LLM generates and the agent that it uses, is what produces the result. That's generally the approach. In the class, we always have a good time digging into what must be going on behind that inspection panel. Generally speaking, the better feel we have for what's going on on these pages, the better we're able to get the results we want, even without having that screwdriver out. 28:21  Nikita: So it's time well-spent, looking at all the individual features? Jeff: I think so, especially if you're anticipating really using any of them. So, the good news is, once you learn a few of them and how they work, and what they're best at, you stop being surprised after a while. But there are always tips and tricks. And like we talked about at the top, explainability and transparency are absolutely key. So, as much as I'm not a fan of the phrase, I do think this is kind of a “knowledge is power” kind of situation. 28:51 Nikita: Sadly, we're just about out of time for this episode.  Lois: That's too bad, I was really enjoying this. Jeff, you were just talking about knowledge—where can we get more?  Jeff: Well, like you mentioned at the start, check out the AI in HCM course on MyLearn. It's about an hour and a half, but it really is time well spent. And we get into detail on everything the three of us discussed here today, and then we have demoscussions of every feature where we show them and how they work and which data they're using and a whole bunch more. So, there's that. Plus, I hear the instructor is excellent. Lois: I can vouch for that! Jeff: Well, then you should definitely look into Dynamic Skills. Different instructor. But we have another course, and again I think about an hour and a half, but when you're done with the AI course, I always feel like Dynamic Skills is where you really wanna go next to really flesh out all the Talent Management ideas that got stirred up while you were having a great time in the AI course.  
And then finally, the live classes. It's always really fun to take live questions while we talk about AI in HCM.   29:54 Nikita: Thanks, Jeff! This has been really interesting.  Lois: Yeah, thanks for being here, Jeff. We've loved having you on. Jeff: Thank you guys so much for having me. It's been a pleasure.  Lois: If you want to learn more about what we discussed, go to the show notes for today's episode. You'll find links to the AI for Human Capital Management and Dynamic Skills courses that Jeff mentioned so you can check them out. You can also head over to mylearn.oracle.com to find the live sessions for MyLearn subscribers that Jeff conducts. Nikita: Join us next week as we kick off our “Best of 2024” season, where we'll be revisiting some of our most popular episodes of the year. Until then, this is Nikita Abraham…  Lois: And Lois Houston, signing off!   30:35 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
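For readers who want a concrete picture of the Suggested Candidates match scores discussed in the episode above, here is a minimal, illustrative Python sketch of a weighted match calculation. The field names, weights, and scoring rules are assumptions made up for demonstration; they are not Oracle's embedded model, which the episode notes is pre-built and not configurable.

```python
# Toy weighted candidate-vs-requisition scoring, in the spirit of the
# "Suggested Candidates" feature discussed above. All names and weights
# here are illustrative assumptions, not Oracle's actual implementation.

def match_score(candidate, requisition, weights=None):
    """Return skills/experience/profile match scores and a weighted overall score."""
    weights = weights or {"skills": 0.4, "experience": 0.35, "profile": 0.25}

    # Skills match: fraction of required skills the candidate lists.
    required = set(requisition["skills"])
    skills = len(required & set(candidate["skills"])) / max(len(required), 1)

    # Experience match: years relative to the requirement, capped at 1.0.
    experience = min(candidate["years_experience"] / max(requisition["min_years"], 1), 1.0)

    # Profile match: simple keyword overlap standing in for a richer comparison.
    req_terms = set(requisition["profile_keywords"])
    profile = len(req_terms & set(candidate["profile_keywords"])) / max(len(req_terms), 1)

    overall = (weights["skills"] * skills
               + weights["experience"] * experience
               + weights["profile"] * profile)
    return {"skills": round(skills, 3), "experience": round(experience, 3),
            "profile": round(profile, 3), "overall": round(overall, 3)}


# Hypothetical Junior Podcast Assistant requisition and one candidate.
requisition = {"skills": ["audio editing", "scheduling", "show notes"],
               "min_years": 2,
               "profile_keywords": ["podcast", "production", "research"]}
candidate = {"skills": ["audio editing", "scheduling"],
             "years_experience": 1,
             "profile_keywords": ["podcast", "production"]}

print(match_score(candidate, requisition))
# Sorting a list of candidates by the "overall" key mirrors the ranked list on the requisition.
```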
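The episode's description of AI Assist, where a single typed headline is wrapped in hidden, page-level instructions before it reaches the LLM, can be sketched the same way. This is only a guess at the general pattern: the instruction text, function names, and the commented-out generate() call are placeholders, not the actual prompt or API behind the HCM pages.

```python
# Illustrative "hidden prompt" assembly for an employee-recognition AI Assist.
# Everything below is an assumption for demonstration purposes.

HIDDEN_INSTRUCTIONS = (
    "You are writing an employee recognition message.\n"
    "Use the recipient's name, keep a warm and professional tone,\n"
    "summarize the benefits of their contribution with specific examples,\n"
    "and close by thanking them for their contribution."
)

def build_recognition_prompt(headline, recipient):
    # The user only types the headline; the rest is fixed by the page, not editable.
    return (
        f"{HIDDEN_INSTRUCTIONS}\n\n"
        f"Recipient: {recipient}\n"
        f"Headline: {headline}\n"
        f"Recognition message:"
    )

prompt = build_recognition_prompt(
    headline="Implementing AI features in Oracle HCM",
    recipient="Niki",
)
# response = llm_client.generate(prompt)  # placeholder call; any LLM client would slot in here
print(prompt)
```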

The Road to Accountable AI
Krishna Gade: Observing AI Explainability...and Explaining AI Observability

The Road to Accountable AI

Play Episode Listen Later Nov 7, 2024 38:05 Transcription Available


Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on. In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers current and potential regulations that mandate or incentivize explainability, and the prospects for AI explainability standards as AI models grow in complexity. Krishna distinguishes explainability from the broader process of observability, including the necessity of maintaining model accuracy through different times and contexts. Finally, Kevin and Krishna discuss the need for proactive AI model monitoring to mitigate business risks and engage stakeholders. Krishna Gade is the founder and CEO of Fiddler AI, an AI Observability startup, which focuses on monitoring, explainability, fairness, and governance for predictive and generative models. An entrepreneur and engineering leader with strong technical experience in creating scalable platforms and delightful products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. At Facebook, Krishna led the News Feed Ranking Platform that created the infrastructure for ranking content in News Feed and powered use cases like Facebook Stories and user recommendations.
Fiddler.ai
How Explainable AI Keeps Decision-Making Algorithms Understandable, Efficient, and Trustworthy - Krishna Gade x Intelligent Automation Radio

Born In Silicon Valley
Behind the AI Curtain: How Explainability Builds Trust and Transparency - Vinay Kumar - Arya AI

Born In Silicon Valley

Play Episode Listen Later Nov 5, 2024 42:15


In this episode, we sit down with Vinay Kumar, the founder and CEO of Arya.ai, a leading AI platform designed to make artificial intelligence accessible, explainable, and safe for enterprises—particularly in the banking and financial services industries. Join us as Vinay shares his journey from a small town in Andhra Pradesh, India, to the cutting-edge world of AI, starting with his formative years at IIT Bombay and progressing to his current work with Arya.ai. Vinay dives into Arya.ai's mission: to democratize complex AI while ensuring it's auditable, transparent, and aligned with user goals. His journey began with the development of a STEM research assistant, InvenZone.com, and evolved into Arya.ai, an AI platform for enterprises that deploys deep learning solutions quickly and responsibly. Vinay discusses Arya.ai's role in creating AI systems that adapt to the unique needs of the financial sector, prioritizing safety and explainability to help organizations build trust with AI technologies.

Revenue Builders
Cutting Through the Noise: Understanding AI Through History and Practical Application

Revenue Builders

Play Episode Listen Later Oct 17, 2024 65:19


In this episode, John Kaplan and John McMahon are joined by Devavrat Shah, CEO and co-founder of Ikigai Labs and MIT professor, to demystify the rapidly evolving landscape of artificial intelligence. The conversation spans a wide array of crucial AI topics, including the history and applications of AI, causal inference, explainability, and the integration of AI into sales and forecasting processes. Key highlights include the role of AI in consumption pricing, business model transformations, and job market impacts. Shah underscores the importance of governance, ethical use, and education in AI, offering valuable insights into AI tools from Ikigai Labs and their practical implementations in sectors like healthcare, supply chain, and BFSI. The discussion concludes with a focus on the explosive growth of AI, urging businesses to invest in internal education and to approach AI adoption with a 'proof of value' mindset for sustained success and global upskilling.
ADDITIONAL RESOURCES
Connect and learn more about Devavrat Shah: https://www.linkedin.com/in/devavrat-shah-63b59a2/
Learn more about AI through Ikigai Academy: https://www.ikigailabs.io/ai-academy
Check out Force Management's guide on implementing AI for B2B Sales teams: https://hubs.li/Q02TG4tZ0
Enjoying the podcast? Sign up to receive new episodes straight to your inbox: https://hubs.li/Q02R10xN0
HERE ARE SOME KEY SECTIONS TO CHECK OUT
[00:03:02] History and Evolution of AI
[00:06:21] Understanding AI Terminology
[00:18:37] The Role of Explainability in AI
[00:26:45] AI in Consumption Pricing and Forecasting
[00:33:33] Future Possibilities and Implications of AI
[00:35:58] AI's Role in Healthcare and Decision Making
[00:37:08] Human-Machine Interaction and AI
[00:38:29] Embracing AI Tools in Daily Life
[00:40:33] Challenges and Governance in AI
[00:42:44] The Importance of AI Governance
[00:49:10] Introduction to Ikigai Labs
[00:54:13] AI's Impact on Industries and Consumers
[01:01:18] The AI Revolution: Why Now?
HIGHLIGHT QUOTES
[00:03:15] "AI, statistics, machine learning, data science, for me, all of those terms have intimate relationships." – Devavrat Shah
[00:04:32] "Humans primarily do two things really well: mind and muscle." – Devavrat Shah
[01:00:27] "Don't just rush into AI because it's cool. Carefully choose where you go." – Devavrat Shah
[01:00:51] "Have internal champions who should be educated in terms of how to use AI." – Devavrat Shah
[01:04:13] "It's time to just upskill a little around AI so that we are not left behind." – Devavrat Shah

Data Transforming Business
GenAI Investments: Are the Benefits Meeting Expectations?

Data Transforming Business

Play Episode Listen Later Oct 16, 2024 25:30


Generative AI and unstructured data are transforming how businesses improve customer experiences and streamline internal processes. As technology evolves, companies find new ways to gain insights, automate tasks, and personalize interactions, unlocking new growth opportunities. The integration of these technologies is reshaping operations, driving efficiency, and enhancing decision-making, helping businesses stay competitive and agile in a rapidly changing landscape. Organizations that embrace these innovations can better adapt to customer needs and market demands, positioning themselves for long-term success. In this episode, Doug Laney speaks to Katrina M. Conn, Senior Practice Director of Data Science at Teradata, and Sri Raghavan, Principal of Data Science and Analytics at AWS, about sustainability efforts and the ethical considerations surrounding AI.
Key Takeaways:
- Generative AI is being integrated into various business solutions.
- Unstructured data is crucial for enhancing customer experiences.
- Real-time analytics can improve customer complaint resolution.
- Sustainability is a key focus in AI resource management.
- Explainability in AI models is essential for ethical decision-making.
- The combination of structured and unstructured data enhances insights.
- AI innovations are making analytics more accessible to users.
- Trusted AI frameworks are vital for security and governance.
Chapters:
00:00 - Introduction to the Partnership and Generative AI
02:50 - Technological Integration and Market Expansion
06:08 - Leveraging Unstructured Data for Insights
08:55 - Innovations in Customer Experience and Internal Processes
11:48 - Sustainability and Resource Optimization in AI
15:08 - Ensuring Ethical AI and Explainability
23:57 - Conclusion and Future Directions

Causal Bandits Podcast
Causal AI at Causal Learning & Representation CLeaR 2024 | Part 1 | CausalBanditsPodcast.com

Causal Bandits Podcast

Play Episode Listen Later Oct 7, 2024 22:11 Transcription Available


Root cause analysis, model explanations, causal discovery. Are we facing a missing benchmark problem? Or not anymore? In this special episode, we travel to Los Angeles to talk with researchers at the forefront of causal research, exploring their projects, key insights, and the challenges they face in their work.
Time codes:
0:15 - 02:40 Kevin Debeire
2:41 - 06:37 Yuchen Zhu
06:37 - 10:09 Konstantin Göbler
10:09 - 17:05 Urja Pawar
17:05 - 23:16 William Orchard
Enjoy!
Support the show
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4

Data Bytes
The AI Insider: Data Curation and Privacy Mastery with Jigyasa Grover

Data Bytes

Play Episode Listen Later Sep 12, 2024 33:24


(00:00) Intro: Negative connotations in AI
(00:21) Synthetic data fills gaps
(00:35) Guest introduction
(01:23) Importance of data quality
(02:14) Data-centric machine learning focus
(03:02) Bias mitigation strategies
(03:41) Role of human in AI loop
(04:34) Synthetic data in AI
(05:29) Pre-trained models and data quality
(06:02) Experiments with data quality
(06:39) Leading AI and research projects
(07:24) Explainability in AI models
(08:57) Privacy concerns in AI analysis
(10:34) Open source model benchmarking
(11:33) Motivation for open source contributions
(12:28) Long-term open source involvement
(13:50) Mentoring in open source projects
(15:19) Starting with open source
(16:35) Contributing beyond code
(17:50) Building community through collaboration
(18:48) Power of open source accessibility
(19:52) Open source challenges
(20:38) Success factors for open source projects
(22:58) Career-defining moments
(24:49) First encounter with open source
(26:28) Introduction to AI through NLP
(28:02) Pivoting from PhD to industry
(29:02) Career lessons and continuous learning
(30:13) Advice for women in tech
---
Support this podcast: https://podcasters.spotify.com/pod/show/women-in-data/support

Reversim Podcast
476 ML Explainability and friends with Dagan from Citrusx

Reversim Podcast

Play Episode Listen Later Aug 19, 2024


[Link to the mp3 file] Episode 476 of Reversim (רברס עם פלטפורמה), recorded on July 25, 2024 (two days after the previous recording). During the (unofficial) ML week, Ori and Ran host Dagan from Citrusx for a conversation about organizations for which ML really matters.
00:45 Dagan, Citrusx, and reversing with a platform (for real)
(Ran) So before we dive into business, a bit about you and the company?
(Dagan) Nice to meet you, I'm Dagan. Originally a kibbutznik from the Gaza envelope, with communal sleeping arrangements and all of that . . .
(Ori) . . . you're in good company . . . not from the Gaza envelope, but both of us as well.
(Dagan) . . . so my first occupation was dairy farmer, and after that I worked with . . .
(Ori) No, but let's ask the question: when was the first time you did a reverse with a platform?
(Dagan) Well, I was more in the cowshed, so less of that. It was more with the feed-distribution tractor, and less reversing with the trailer.
(Ori) . . . muck up to your knees, go on . . .
(Dagan) That, yes . . .
(Ran) Well, you're in good company . . . OK, so you grew up there, and then?
(Dagan) So I grew up there, and then in the army I got to 8200, a very technological unit. And that's where I entered this world of software and algorithmics and all the "scientific" things. And from there to university, where I studied computer science and neuroscience…

The MLOps Podcast

In this episode, Dean speaks with Federico Bacci, a data scientist and ML engineer at Bol, the largest e-commerce company in the Netherlands and Belgium. Federico shares valuable insights into the intricacies of deploying machine learning models in production, particularly for forecasting problems. He discusses the challenges of model explainability, the importance of feature engineering over model complexity, and the critical role of stakeholder feedback in improving ML systems. Federico also offers a compelling perspective on why LLMs aren't always the answer in AI applications, emphasizing the need for tailored solutions. This conversation provides a wealth of practical knowledge for data scientists and ML engineers looking to enhance their understanding of real-world ML operations and challenges in e-commerce. Join our Discord community: https://discord.gg/tEYvqxwhah --- Timestamps: 00:00 Introduction and Background 01:59 Owning the ML Pipeline 02:56 Deployment Process 05:58 Testing and Feedback 07:40 Different Deployment Strategies 11:19 Explainability and Feature Importance 13:46 Challenges in Forecasting 22:33 ML Stack and Tools 26:47 Orchestrating Data Pipelines with Airflow 31:27 Exciting Developments in ML 35:58 Recommendations and Closing Links Dwarkesh podcast with Anthropic and Gemini team members – https://www.dwarkeshpatel.com/p/sholto-douglas-trenton-bricken ➡️ Federico Bacci on LinkedIn – https://www.linkedin.com/in/federico-bacci/ ➡️ Federico Bacci on Twitter – https://x.com/fedebyes

Infinite Machine Learning
Large Action Models

Infinite Machine Learning

Play Episode Listen Later Jul 31, 2024 30:52


Will Lu is the cofounder and CTO of Orby AI, an AI platform to automate people's repetitive tasks. He was previously the Head of Engineering at Google and a Systems Software Engineer at Nvidia. Will's favorite book: Beyond Entrepreneurship (Authors: Jim Collins and William Lazier)
(00:01) Introduction
(00:07) History of RPA
(01:04) Building Blocks of RPA
(02:34) Drawbacks of Traditional RPA
(05:06) Introduction to AI-Native RPA
(06:38) Advantages of AI-Native RPA
(08:14) Defining Generative Process Automation (GPA)
(10:15) Explanation of Large Action Models
(11:47) Role of AI Agents in Process Automation
(13:11) Data for Building Large Action Models
(14:44) Benchmarking Large Action Models
(15:53) Risk Mitigation in AI-Native RPA
(17:44) Changing Roles in the RPA Industry
(19:14) Adoption of Agent Technologies
(21:03) ROI Measurement in AI-Native RPA
(23:05) Explainability in AI Systems
(24:25) Fast Adoption Teams in Enterprises
(25:15) Handling Unstructured Data
(26:12) Digital Organizations and Future Automation
(27:09) Exciting AI Breakthroughs
(28:03) Rapid Fire Round
--------
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi

The Recruiting Brainfood Podcast
Brainfood Live On Air - Ep264 - How to Ethically Implement AI into Talent Acquisition

The Recruiting Brainfood Podcast

Play Episode Listen Later Jul 20, 2024 62:34


HOW TO ETHICALLY IMPLEMENT AI INTO TALENT ACQUISITION
One of the factors slowing down AI adoption is justifiable concern about the ethics of AI. How do we ensure that we are not exacerbating bias, privileging one demographic over another, and treating human beings as lesser than the machines?
It's been great to see software vendors take the lead in trying to find a path forward. A superb how-to guide from our friends Willo was featured in Recruiting Brainfood Issue 399 (free to download, no gate here), and it covered the practical steps required to move from idea to practice.
We all have to get there, so we are using today's Brainfood Live as an opportunity to walk through the guide:
- How AI impacts candidate assessment
- How to Build Trust with Employees
- New TA Infrastructure for the AI-enabled Candidate
- Key areas of recruitment optimisation with AI
- How AI can benefit candidate experience
- How to ensure AI reduces bias rather than exacerbates it
- AI as an EB Co-pilot
- Key AI policies: what is the regulatory environment?
- Privacy, Referencability, Explainability, Humanity
- How to get started: Audit / Analytical framework
- Research and Communication Strategy
- Launch and Implementation
It's a fantastic guide, and we're roping in Euan Cameron and Andrew Wood (co-founders of Willo) to walk us through it. Friday 19th July, 2pm BST.
Ep264 is sponsored by our friends Willo
Willo is the virtual interviewing platform trusted by thousands of recruiters worldwide. Receive video responses to your questions remotely, from anyone, anywhere in the world. Thousands of organisations already use Willo to hear from more people, in less time, and never have to worry about scheduling calls or meetings again.
Join them, it is free to get started and we have no setup fees or contracts. Plus our incredible UK-based support team is available 24/7 to help you transform your interviewing process. Schedule a demo with one of our friendly team members today.

Open||Source||Data
Redefining AI Ethics: The Key Role of Explainability with Beth Rudden

Open||Source||Data

Play Episode Listen Later Jul 2, 2024 53:18


Timestamps
00:00:00 - Intro
00:02:00 - Beth's Journey
00:19:33 - Ontologies in AI
00:21:44 - Data Lineage and Provenance
00:32:52 - Open Source Tools
00:38:38 - Explainable AI
00:44:58 - Inspiration from Nature
Quotes
Beth Rudden: "The best thing that I could tell you that I see is that it's going to shift from more pure mathematical and statistical to much more semantic, more qualitative. Instead of quantity, we're going to have quality."
Charna Parkey: "I love that because I've been so mathematical for most of my life. I didn't have a lot of words for the feelings or expressions, right? And so I had sort of this lack of data and the Brené Brown reference you make, like I have many of her books on my shelf and I often pull, I don't even know where it is right now, but the Atlas of the Heart because I am having this feeling and I don't know what it is."
Links
Connect with Beth
Connect with Charna

Experiencing Data with Brian O'Neill
146 - (Rebroadcast) Beyond Data Science - Why Human-Centered AI Needs Design with Ben Shneiderman

Experiencing Data with Brian O'Neill

Play Episode Listen Later Jun 25, 2024 42:07


Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.     I'm so excited to welcome this expert from the field of UX and design to today's episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.     In our chat, we covered: Ben's career studying human-computer interaction and computer science. (0:30) 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy' AI systems. (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56) 'There's no such thing as an autonomous device': Designing human control into AI systems. (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08) Designing ‘comprehensible, predictable, and controllable' user interfaces for explainable AI systems and why [explainable] XAI matters. (30:34) Ben's upcoming book on human-centered AI. (35:55)     Resources and Links: People-Centered Internet: https://peoplecentered.net/ Designing the User Interface (one of Ben's earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764 Partnership on AI: https://www.partnershiponai.org/ AI incident database: https://www.partnershiponai.org/aiincidentdatabase/ University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/ ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/ Ben on Twitter: https://twitter.com/benbendc     Quotes from Today's Episode The world of AI has certainly grown and blossomed — it's the hot topic everywhere you go. It's the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they're not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that's where the action is. Of course, what we really want from AI is to make our world a better place, and that's a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person's sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. 
So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that's where we want to go. - Ben (2:05)   The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it's not just programming, but it also involves the use of data that's used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let's say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There's been bias in facial recognition algorithms, which were less accurate with people of color. That's led to some real problems in the real world. And that's where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)     Every company will tell you, “We do a really good job in checking out our AI systems.” That's great. We want every company to do a really good job. But we also want independent oversight of somebody who's outside the company — someone who knows the field, who's looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that's where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)     There's no such thing as an autonomous device. Someone owns it; somebody's responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it's performing poorly. … Responsibility is a pretty key factor here. So, if there's something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what's happening? What's it doing? What's going wrong and what's going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that's hidden away and you never see it because that's just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what's going on and make sure it gets better. Every quarter. 
- Ben (19:41)     Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they're at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they're doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)   Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what's usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach.That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I'm afraid I haven't seen too many success stories of that working. … I've been diving through this for years now, and I've been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even the DARPA's XAI—Explainable AI—project, which has 11 projects within it—has not really grappled with this in a good way about designing what it's going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let's prevent the user from getting confused and so they don't have to request an explanation. We walk them along, let the user walk through the step—this is like Amazon checkout process, seven-step process—and you know what's happened in each step, you can go back, you can explore, you can change things in each part of it. It's also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)

Compliance into the Weeds
AI Accountability and Explainability

Compliance into the Weeds

Play Episode Listen Later Jun 12, 2024 25:34


The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to more fully explore a subject. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode, Tom Fox and Matt Kelly delve into the recent speech by Michael Hsu, the head of the Office of the Comptroller of the Currency, on the accountability challenges posed by artificial intelligence in the banking sector. The discussion highlights Hsu's emphasis on the lack of a robust accountability framework for AI, illustrating the issue with the Air Canada chatbot incident. The conversation also touches on potential systemic risks AI could pose to the financial sector, the need for explainable AI, and the shared responsibility model used in cloud computing as a potential template for addressing these challenges. The episode underscores the necessity for compliance officers to ensure contracts and IT controls are in place and stresses the importance of developing trust and accountability mechanisms before widespread AI adoption.
Key Highlights
- AI Accountability: A Regulator's Perspective
- Case Study: Air Canada's AI Mishap
- Legal and Technological Challenges
- Exploring Solutions and Shared Responsibility
Resources
Matt on Radical Compliance
Tom: Instagram | Facebook | YouTube | Twitter | LinkedIn
Learn more about your ad choices. Visit megaphone.fm/adchoices

TechSperience
Episode 127: AI in Healthcare – A Revolution in Progress

TechSperience

Play Episode Listen Later Jun 6, 2024 53:09


AI is revolutionizing healthcare by analyzing massive datasets to uncover hidden patterns, leading to breakthroughs in disease diagnosis, treatment, and patient care. Join Jennifer Johnson and Jamal Khan as they explore AI's impact on healthcare. They delve into critical ethical considerations, governance structures, data security measures, and AI's role in clinical decision support.
Speakers:
Jennifer Johnson, Director of Healthcare Strategy and Business Development at Connection
Jamal Khan, Chief Growth and Innovation Officer at Connection
Show Notes
00:00 Introduction and AI Ecosystem Shifts
02:07 Ethical Considerations and Governance in AI Healthcare
05:49 Challenges of Data Poisoning and Model Drift in AI Healthcare
08:02 Role of CAIOs in Healthcare Governance and Data Strategy
10:48 Importance of Patient Consent and Cross-Jurisdictional Challenges
13:01 AI's Impact on Healthcare Provider Work Environment
17:45 Vetting AI Partners and Virtual Assistants in Healthcare
19:39 Patient Accessibility and Engagement in Healthcare
22:50 Clinical Trials and Technology in Healthcare
24:13 Challenges of Merging Patient Data in Healthcare
27:01 AI Adoption in Healthcare: Impact on Insurance Providers
32:08 Challenges of Transparency and Explainability in AI
35:58 AI in Clinical Settings: Promising Use Cases
37:18 Choosing Hyperscalers for Healthcare AI Implementation
48:01 Data Orchestration for Patient Care with AI
50:17 Following Patients Through Care Settings with AI
52:08 Excitement and Challenges of AI Integration in Healthcare

Infinite Machine Learning
How Symbolic AI is Transforming Critical Infrastructure

Infinite Machine Learning

Play Episode Listen Later May 28, 2024 38:08


Eric Daimler is the cofounder and CEO of Conexus AI, a data management platform that provides composable and machine-verifiable data integration. He was previously an assistant dean and assistant professor at Carnegie Mellon University. He was the founding partner of Hg Analytics and managing director at Skilled Science. He was also the White House Presidential Innovation Fellow for Machine Intelligence and Robotics. Eric's favorite book: ReCulturing (Author: Melissa Daimler)
(00:00) Understanding Symbolic AI
(02:42) Symbolic AI mirrors biological intelligence
(06:01) Category Theory
(08:42) Comparing Symbolic AI and Probabilistic AI
(11:22) Symbolic Generative AI
(14:19) Implementing Symbolic AI
(18:25) Symbolic Reasoning
(21:24) Explainability
(24:39) Neuro Symbolic AI
(26:41) The Future of Symbolic AI
(30:43) Rapid Fire Round
--------
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi

The Nonlinear Library
LW - The Schumer Report on AI (RTFB) by Zvi

The Nonlinear Library

Play Episode Listen Later May 25, 2024 60:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Schumer Report on AI (RTFB), published by Zvi on May 25, 2024 on LessWrong. Or at least, Read the Report (RTFR). There is no substitute. This is not strictly a bill, but it is important. The introduction kicks off balancing upside and avoiding downside, utility and risk. This will be a common theme, with a very strong 'why not both?' vibe. Early in the 118th Congress, we were brought together by a shared recognition of the profound changes artificial intelligence (AI) could bring to our world: AI's capacity to revolutionize the realms of science, medicine, agriculture, and beyond; the exceptional benefits that a flourishing AI ecosystem could offer our economy and our productivity; and AI's ability to radically alter human capacity and knowledge. At the same time, we each recognized the potential risks AI could present, including altering our workforce in the short-term and long-term, raising questions about the application of existing laws in an AI-enabled world, changing the dynamics of our national security, and raising the threat of potential doomsday scenarios. This led to the formation of our Bipartisan Senate AI Working Group ("AI Working Group"). They did their work over nine forums. 1. Inaugural Forum 2. Supporting U.S. Innovation in AI 3. AI and the Workforce 4. High Impact Uses of AI 5. Elections and Democracy 6. Privacy and Liability 7. Transparency, Explainability, Intellectual Property, and Copyright 8. Safeguarding Against AI Risks 9. National Security Existential risks were always given relatively minor time, with it being a topic for at most a subset of the final two forums. By contrast, mundane downsides and upsides were each given three full forums. This report was about response to AI across a broad spectrum. The Big Spend They lead with a proposal to spend 'at least' $32 billion a year on 'AI innovation.' No, there is no plan on how to pay for that. In this case I do not think one is needed. I would expect any reasonable implementation of that to pay for itself via economic growth. The downsides are tail risks and mundane harms, but I wouldn't worry about the budget. If anything, AI's arrival is a reason to be very not freaked out about the budget. Official projections are baking in almost no economic growth or productivity impacts. They ask that this money be allocated via a method called emergency appropriations. This is part of our government's longstanding way of using the word 'emergency.' We are going to have to get used to this when it comes to AI. Events in AI are going to be happening well beyond the 'non-emergency' speed of our government and especially of Congress, both opportunities and risks. We will have opportunities that appear and compound quickly, projects that need our support. We will have stupid laws and rules, both that were already stupid or are rendered stupid, that need to be fixed. Risks and threats, not only catastrophic or existential risks but also mundane risks and enemy actions, will arise far faster than our process can pass laws, draft regulatory rules with extended comment periods and follow all of our procedures. In this case? It is May. The fiscal year starts in October. I want to say, hold your damn horses. But also, you think Congress is passing a budget this year? We will be lucky to get a continuing resolution. Permanent emergency. Sigh. 
What matters more is, what do they propose to do with all this money? A lot of things. And it does not say how much money is going where. If I was going to ask for a long list of things that adds up to $32 billion, I would say which things were costing how much money. But hey. Instead, it looks like he took the number from NSCAI, and then created a laundry list of things he wanted, without bothering to create a budget of any kind? It also seems like they took the origin...

The AI Fundamentalists
Responsible AI: Does it help or hurt innovation? With Anthony Habayeb

The AI Fundamentalists

Play Episode Listen Later May 7, 2024 45:59 Transcription Available


Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.
Show notes
Prologue: Why responsible AI? Why now? (00:00:00)
- Deviating from our normal topics about modeling best practices
- Context about where regulation plays a role in industries besides big tech
- Can we learn from other industries about the role of "responsibility" in products?
Special guest, Anthony Habayeb (00:02:59)
- Introductions and start of the discussion
- Of all the companies you could build around AI, why governance?
Is responsible AI the right phrase? (00:11:20)
- Should we even call good modeling and business practices "responsible AI"?
- Is having responsible AI a "want to have" or a "need to have"?
Importance of AI regulation and responsibility (00:14:49)
- People in the AI and regulation worlds have started pushing back on Responsible AI.
- Do regulations impede freedom?
- Discussing the big picture of responsibility and governance: explainability, repeatability, records, and audit
What about bias and fairness? (00:22:40)
- You can have fair models that operate with bias.
- Bias in practice identifies inequities that models have learned.
- Fairness is correcting for societal biases to level the playing field for safer business and modeling practices to prevail.
Responsible deployment and business management (00:35:10)
- Discussion about what organizations get right about responsible AI
- And what organizations can get completely wrong if they aren't careful
Embracing responsible AI practices (00:41:15)
- Getting your teams, companies, and individuals involved in the movement towards building AI responsibly
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

The MLOps Podcast

In this episode, I had the pleasure of speaking with Mila Orlovsky, a pioneer in medical AI. We delve into practical applications, overcoming data challenges, and the intricacies of developing AI tools that meet regulatory standards. Mila discusses her experiences with predictive analytics in patient care, offering tips on navigating the complexities of AI implementation in medical environments. This episode is packed with actionable advice and forward-thinking strategies, making it essential listening for professionals looking to impact healthcare through AI. Join our Discord community: https://discord.gg/tEYvqxwhah --- Timestamps: 00:00 Introduction and Background 4:03 Early Days of Machine Learning in Medicine 5:19 Challenges in Building Medical AI Systems 6:54 Differences Between Medical ML and Other ML Domains 15:36 Unique Challenges of Medical Data in ML 24:01 Counterintuitive Learnings on the Business Side 28:07 Impact and Value of ML Models in Medicine 29:41 The Role of Doctors in the Age of AI 38:44 Explainability in Medical ML 44:31 The FDA and Compliance in Medical ML 48:56 Feedback and Iteration in Medical ML 52:25 Predictions for the Future of ML and AI 53:59 Controversial Predictions in the Field of ML 56:02 Recommendations 57:58 Conclusion ➡️ Mila Orlovsky on LinkedIn – https://www.linkedin.com/in/milaorlovsky/

Fortune's Path Podcast
Sharon Chou — How to AI

Fortune's Path Podcast

Play Episode Listen Later Jan 30, 2024 62:40


Takeaways
- Understanding the basics of circuits and quantum computing is essential in comprehending the potential of AI.
- Transparency and explainability are crucial in AI decision-making to ensure accountability and mitigate bias.
- Data curation is a critical step in developing AI models to avoid unintended biases and improve accuracy.
- The application of AI in mortgage and loan decisions requires careful consideration of fairness and ethical implications.
- Higher education is correlated with earnings, but its correlation with credit worthiness is uncertain.
- Being completely blind to factors like race and gender in the hiring process may be challenging, but efforts can be made to represent everyone equally.
- Considering each subpopulation separately and simplifying the hiring process can help ensure fair representation.
- Ethical dilemmas arise when ignoring correlations that have a strong statistical relationship with outcomes.
- The application of AI in the hiring process can be effective when combined with human decision-making and a structured, data-informed approach.
Chapters
00:00 Introduction and Recording Confirmation
00:38 Background in Physics and Engineering
03:13 Research in Material Design and Quantum Physics
04:26 Understanding Circuits and Quantum Computing
06:37 Transition from Research to Business
11:14 Impact of Ideas and Einstein's Equation
13:14 Ethics and Risks of Artificial Intelligence
17:15 Applications and Limitations of AI
20:39 Ethics and Bias in AI Decision-Making
25:24 Transparency and Explainability in AI
29:29 Data Curation and Bias in AI Models
34:07 AI in Mortgage and Loan Decisions
38:15 Fairness and Ethics in Lending
38:41 Correlation between Higher Education and Earnings
39:21 Challenges of Being Blind to Race and Gender
39:49 Considerations for Representing Everyone Equally
40:24 Ethical Dilemmas of Ignoring Correlations
41:08 Product Development and Answering Ethical Questions
41:29 Simplifying the Hiring Process
42:02 Data-Informed Recruiting and Hiring
43:14 Using Data to Find the Right Match
44:24 Simplifying the Workflow for Recruiters
45:16 Focusing on Skill-Based Factors in Hiring
46:31 The Validity of Resumes in Predicting Performance
47:25 Factors in Deciding a Good Hire
48:15 The Tricky Nature of Job Descriptions
49:05 The Importance of Skills and Job Descriptions
50:03 The Value of Experience and Starting a Business
51:09 The Role of Emotion in Decision-Making
54:02 Introducing Scientific Process into Hiring
55:53 The Application of AI in the Hiring Process
56:58 The Human Element in Decision-Making
58:16 Applying the Scientific Method to Business Problems
59:18 Learning from Past Research and Being Skeptical
01:00:45 Checking Assumptions and Being Discerning

The Next Byte
156. 2023 Recap & First Annual Saucies!

The Next Byte

Play Episode Listen Later Jan 9, 2024 28:07


(4:00) - Most Interesting | 126. Amputees Feel Warmth In Their Missing Hand
(8:40) - Listener Favorite | 118. Robotics & AI in Sheet Metal Forming
(12:23) - Most Impactful | 112. Bringing Humans Back Into The Loop For AI
(16:30) - Hidden Gem(s) | 135. Reinventing Retail in The Connectivity Age & 144. An implantable device could enable injection-free control of diabetes

Sunny Side Up
Ep. 444 | AI Unleashed: Exploring Trends, Strategies, and Best Practices in Marketing

Sunny Side Up

Play Episode Listen Later Dec 20, 2023 36:39


Episode Summary
In this episode of Sunny Side Up, Chris Moody interviews Jana Eggers on explainability, trends, and tools in AI marketing. Jana emphasizes the need for transparency to identify and correct biases, urging the prioritization of user needs over technological capabilities. She argues that explainability should not be an afterthought but a fundamental aspect of AI. Chris and Jana discuss trends in AI-driven personalization in marketing, pointing out the need for more nuanced feedback mechanisms. The conversation then shifts to best practices for utilizing AI in marketing strategies. Jana advises a balanced approach, combining trust in AI with human expertise and scepticism.
About the Guest
Jana Eggers is CEO of the neuroscience-inspired artificial intelligence platform company Nara Logics. She's an experienced tech exec focused on inspiring teams to build great products. She's started and grown companies and has also led large organizations at public companies. She is active in customer-inspired innovation, the artificial intelligence industry, as well as running and triathlons. She's held technology and executive positions at Intuit, Los Alamos National Laboratory, Basis Technology, Lycos, American Airlines, Spreadshirt, and more.
Key Takeaways
- Understanding the reasoning behind AI's decisions is crucial for practical sales, marketing, and healthcare applications.
- Transparency in AI helps identify and correct biases, ensuring fair and ethical use of technology.
- Prioritise understanding user needs over purely focusing on technological capabilities.
- Explainability in AI should be viewed not as an add-on but as a crucial element for effective and contextualised user interaction.
- Challenge the 'better results' notion to include practical usability and relevance to users' contexts.
- Engaging with LLMs helps develop a broader understanding and literacy of AI among users and organisations.
- Popular items can sometimes skew AI recommendations, leading to less relevant suggestions.
- The goal is to evolve AI systems to a point where users feel that recommendations are genuinely tailored for them.
- Don't over-trust AI; use it as a tool while maintaining critical thinking and scepticism.
- Employing AI when scaling beyond human capabilities, such as handling multiple data segments, is needed.
Quote
"Tuning these AI systems without having that explainability is really kind of like surgery with your eyes closed." – Jana Eggers
Recommended Resource
Elements of AI: https://www.elementsofai.com
Connect with Jana Eggers | Follow us on LinkedIn | Website

Voice of Veritas Podcast
Decoding AI: Navigating Governance Through Explainability

Voice of Veritas Podcast

Play Episode Listen Later Dec 19, 2023 19:22


As companies continue to adopt artificial intelligence (AI), there's a growing need for transparency in AI practices. In this podcast, we discuss the challenge of governing AI and emphasize the importance of explainability.
Resources referenced:
Voice of Veritas: Avoid the Fine: Capturing Off-Channel Communication
Voice of Veritas: Unlocking the Future of AI, Security, and Veritas Solutions
Veritas Cybersecurity Newsletter on LinkedIn | Issue 6: Cybersecurity, AI and Future Tech

The Gradient Podcast
Vera Liao: AI Explainability and Transparency

The Gradient Podcast

Play Episode Listen Later Dec 7, 2023 97:03


In episode 101 of The Gradient Podcast, Daniel Bashir speaks to Vera Liao.
Vera is a Principal Researcher at Microsoft Research (MSR) Montréal where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics) group. She is trained in human-computer interaction research and works on human-AI interaction, currently focusing on explainable AI and responsible AI. She aims to bridge emerging AI technologies and human-centered design practices, and use both qualitative and quantitative methods to generate recommendations for technology design. Before joining MSR, Vera worked at IBM TJ Watson Research Center, and her work contributed to IBM products such as AI Explainability 360, Uncertainty Quantification 360, and Watson Assistant.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:41) Vera's background
* (07:15) The sociotechnical gap
* (09:00) UX design and toolkits for AI explainability
* (10:50) HCI, explainability, etc. as "separate concerns" from core AI research
* (15:07) Interfaces for explanation and model capabilities
* (16:55) Vera's earlier studies of online social communities
* (22:10) Technologies and user behavior
* (23:45) Explainability vs. interpretability, transparency
* (26:25) Questioning the AI: Informing Design Practices for Explainable AI User Experiences
* (42:00) Expanding Explainability: Towards Social Transparency in AI Systems
* (50:00) Connecting Algorithmic Research and Usage Contexts
* (59:40) Pitfalls in existing explainability methods
* (1:05:35) Ideal and real users, seamful systems and slow algorithms
* (1:11:08) AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap
* (1:11:35) Vera's earlier experiences with chatbots
* (1:13:00) Need to understand pitfalls and use-cases for LLMs
* (1:13:45) Perspectives informing this paper
* (1:20:30) Transparency informing goals for LLM use
* (1:22:45) Empiricism and explainability
* (1:27:20) LLM faithfulness
* (1:32:15) Future challenges for HCI and AI
* (1:36:28) Outro
Links:
* Vera's homepage and Twitter
* Research
* Earlier work
* Understanding Experts' and Novices' Expertise Judgment of Twitter Users
* Beyond the Filter Bubble
* Expert Voices in Echo Chambers
* HCI / collaboration
* Exploring AI Values and Ethics through Participatory Design Fictions
* Ways of Knowing for AI: (Chat)bots as Interfaces for ML
* Human-AI Collaboration: Towards Socially-Guided Machine Learning
* Questioning the AI: Informing Design Practices for Explainable AI User Experiences
* Rethinking Model Evaluation as Narrowing the Socio-Technical Gap
* Human-Centered XAI: From Algorithms to User Experiences
* AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap
* Fairness and explainability
* Questioning the AI: Informing Design Practices for Explainable AI User Experiences
* Expanding Explainability: Towards Social Transparency in AI Systems
* Connecting Algorithmic Research and Usage Contexts
Get full access to The Gradient at thegradientpub.substack.com/subscribe

IRL - Online Life Is Real Life

From Hollywood to Hip Hop, artists are negotiating new boundaries of consent for use of AI in the creative industries. Bridget Todd speaks to artists who are pushing the boundaries.
It's not the first time artists have been squeezed, but generative AI presents new dilemmas. In this episode: a member of the AI working group of the Hollywood writers union; a singer who licenses the use of her voice to others; an emcee and professor of Black music; and an AI music company charting a different path.
Van Robichaux is a comedy writer in Los Angeles who helped craft the Writers Guild of America's proposals on managing AI in the entertainment industry.
Holly Herndon is a Berlin-based artist and a computer scientist who has developed “Holly +”, a series of deep fake music tools for making music with Holly's voice.
Enongo Lumumba-Kasongo creates video games and studies the intersection between AI and Hip Hop at Brown University. Her alias as a rapper is Sammus.
Rory Kenny is co-founder and CEO of Loudly, an AI music generator platform that employs musicians to train their AI instead of scraping music from the internet.
*Thank you to Sammus for sharing her track ‘1080p.' Visit Sammus' Bandcamp page to hear the full track and check out more of her songs.*

The Gradient Podcast
Martin Wattenberg: ML Visualization and Interpretability

The Gradient Podcast

Play Episode Listen Later Nov 16, 2023 102:05


In episode 99 of The Gradient Podcast, Daniel Bashir speaks to Professor Martin Wattenberg.
Professor Wattenberg is a professor at Harvard and part-time member of Google Research's People + AI Research (PAIR) initiative, which he co-founded. His work, with long-time collaborator Fernanda Viégas, focuses on making AI technology broadly accessible and reflective of human values. At Google, Professor Wattenberg, his team, and Professor Viégas have created end-user visualizations for products such as Search, YouTube, and Google Analytics. Note: Professor Wattenberg is recruiting PhD students through Harvard SEAS—info here.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (03:30) Prof. Wattenberg's background
* (04:40) Financial journalism at SmartMoney
* (05:35) Contact with the academic visualization world, IBM
* (07:30) Transition into visualizing ML
* (08:25) Skepticism of neural networks in the 1980s
* (09:45) Work at IBM
* (10:00) Multiple scales in information graphics, organization of information
* (13:55) How much information should a graphic display to whom?
* (17:00) Progressive disclosure of complexity in interface design
* (18:45) Visualization as a rhetorical process
* (20:45) Conversation Thumbnails for Large-Scale Discussions
* (21:35) Evolution of conversation interfaces—Slack, etc.
* (24:20) Path dependence — mutual influences between user behaviors and technology, takeaways for ML interface design
* (26:30) Baby Names and Social Data Analysis — patterns of interest in baby names
* (29:50) History Flow
* (30:05) Why investigate editing dynamics on Wikipedia?
* (32:06) Implications of editing patterns for design and governance
* (33:25) The value of visualizations in this work, issues with Wikipedia editing
* (34:45) Community moderation, bureaucracy
* (36:20) Consensus and guidelines
* (37:10) “Neutral” point of view as an organizing principle
* (38:30) Takeaways
* PAIR
* (39:15) Tools for model understanding and “understanding” ML systems
* (41:10) Intro to PAIR (at Google)
* (42:00) Unpacking the word “understanding” and use cases
* (43:00) Historical comparisons for AI development
* (44:55) The birth of TensorFlow.js
* (47:52) Democratization of ML
* (48:45) Visualizing translation — uncovering and telling a story behind the findings
* (52:10) Shared representations in LLMs and their facility at translation-like tasks
* (53:50) TCAV
* (55:30) Explainability and trust
* (59:10) Writing code with LMs and metaphors for using
* More recent research
* (1:01:05) The System Model and the User Model: Exploring AI Dashboard Design
* (1:10:05) OthelloGPT and world models, causality
* (1:14:10) Dashboards and interaction design—interfaces and core capabilities
* (1:18:07) Reactions to existing LLM interfaces
* (1:21:30) Visualizing and Measuring the Geometry of BERT
* (1:26:55) Note/Correction: The “Atlas of Meaning” Prof. Wattenberg mentions is called Context Atlas
* (1:28:20) Language model tasks and internal representations/geometry
* (1:29:30) LLMs as “next word predictors” — explaining systems to people
* (1:31:15) The Shape of Song
* (1:31:55) What does music look like?
* (1:35:00) Levels of abstraction, emergent complexity in music and language models
* (1:37:00) What Prof. Wattenberg hopes to see in ML and interaction design
* (1:41:18) Outro
Links:
* Professor Wattenberg's homepage and Twitter
* Harvard SEAS application info — Professor Wattenberg is recruiting students!
* Research
* Earlier work
* A Fuzzy Commitment Scheme
* Stacked Graphs—Geometry & Aesthetics
* A Multi-Scale Model of Perceptual Organization in Information Graphics
* Conversation Thumbnails for Large-Scale Discussions
* Baby Names and Social Data Analysis
* History Flow (paper)
* At Harvard and Google / PAIR
* Tools for Model Understanding: Facets, SmoothGrad, Attacking discrimination with smarter ML
* TensorFlow.js
* Visualizing translation
* TCAV
* Other ML papers:
* The System Model and the User Model: Exploring AI Dashboard Design (recent speculative essay)
* Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task
* Visualizing and Measuring the Geometry of BERT
* Artwork
* The Shape of Song
Get full access to The Gradient at thegradientpub.substack.com/subscribe

RadioGraphics Podcasts | RSNA
Translating AI to Clinical Practice

RadioGraphics Podcasts | RSNA

Play Episode Listen Later Nov 8, 2023 11:43


Guest host Dr. Shahriar Faghani discusses an article published in the journal, "Translating AI to Clinical Practice: Overcoming Data Shift with Explainability."
Translating AI to Clinical Practice: Overcoming Data Shift with Explainability. Choi et al. RadioGraphics 2023; 43(5):e220105.

IRL - Online Life Is Real Life
Crash Test Dummies

IRL - Online Life Is Real Life

Play Episode Listen Later Nov 7, 2023 22:27


Why does it so often feel like we're part of a mass AI experiment? What is the responsible way to test new technologies? Bridget Todd explores what it means to live with unproven AI systems that impact millions of people as they roll out across public life. In this episode: a visit to San Francisco, a major hub for automated vehicle testing; an exposé of a flawed welfare fraud prediction algorithm in a Dutch city; a look at how companies comply with regulations in practice; and how to inspire alternative values for tomorrow's AI.
Julia Friedlander is senior manager for automated driving policy at San Francisco Municipal Transportation Agency who wants to see AVs regulated based on safety performance data.
Justin-Casimir Braun is a data journalist at Lighthouse Reports who is investigating suspect algorithms for predicting welfare fraud across Europe.
Navrina Singh is the founder and CEO of Credo AI, a platform that guides enterprises on how to ‘govern' their AI responsibly in practice.
Suresh Venkatasubramanian is the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University and he brings joy to computer science.
IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd shares stories about prioritizing people over profit in the context of AI.

This Anthro Life
From Automation to Revolution: Exploring the Future of Conversational AI

This Anthro Life

Play Episode Listen Later Oct 25, 2023 68:26


Did you know that conversation, one of the oldest human technologies, is reshaping the future of how we interact with machines? We're not talking Siri or Alexa here, but conversational AI, interfaces anyone can create and hyper automation that links together our intentions, tools and technologies. Join us in this episode of This Anthro Life as we delve into the fascinating world of conversational AI, hyper automation and composable software systems with the brilliant mind of User Experience (UX) pioneer Robb Wilson. Discover why embracing conversation and removing the barriers of complex software interfaces might just be the key to unlocking new opportunities and improved outcomes in the digital age.
In this thought-provoking conversation, we explore the potential shift away from antiquated software interfaces and the exciting possibilities that conversational AI brings. From the potential of no-code approaches and decentralized business models to the interconnectedness of user experience, customer experience, and employee experience, we provide fresh insights on how automation can evolve how we work, interact with technology, elevate customer service, and foster creativity.
Key takeaways:
- Learn how to create your own skills ecosystem and embrace a no-code approach to building AI interfaces.
- Understand the potential of conversational UI to revolutionize how we interact with technology.
- Explore the impact of hyper automation on customer service experiences and employee creativity.
- Conversational AI and hyper-automation could transform jobs by automating routine tasks and freeing up humans for more creative work.
- Decentralized organizations and composable UIs that allow access to all software through natural language could reshape business models.
- Total experience design must consider both customer and employee experiences to be effective.
- Responsible development of conversational AI could help create more empowering technologies.
- Automation provides the opportunity for humans to engage in more meaningful social interactions.
- The future of work and business will likely involve a blend of human creativity and machine automation.
Our insightful guest, Robb Wilson, brings a wealth of expertise on hyper automation and composable systems. As the founder of OneReach.ai, a company specializing in creating tools for companies to build their own Alexa-like ecosystems, Robb understands the power and potential of AI in transforming industries. Get ready to be inspired and informed by his insights!
Key Topics of this Podcast:
00:00:23 The future of technology is conversational.
00:06:06 Relaxation and space foster creativity.
00:09:42 Humans can adapt and find value in automation.
00:14:58 Decentralization is the future.
00:22:26 Automation creates space for connection.
00:27:30 Total experience is transformative.
00:30:26 Employee-first approach in business.
00:35:15 Revolutionizing technology through conversational interfaces.
00:41:00 The future of software interfaces.
00:44:36 Explainability is crucial for trust.
00:55:06 Machines should make human-like mistakes.
01:01:11 Unlocking software through conversational UI.
01:06:37 Automation enhances customer experiences.
About This Anthro Life
This Anthro Life is a thought-provoking podcast that explores the human side of technology, culture, and business. Hosted by Adam Gamwell, we unravel fascinating narratives and connect them to the wider context of our lives.
Tune in to https://thisanthrolife.org and subscribe to our Substack at https://thisanthrolife.substack.com for more captivating episodes and engaging content.
Connect with Robb Wilson
Linkedin: https://www.linkedin.com/in/invisiblemachines/
Website: https://onereach.ai/
Connect with This Anthro Life:
Instagram: https://www.instagram.com/thisanthrolife/
Facebook: https://www.facebook.com/thisanthrolife
LinkedIn: https://www.linkedin.com/company/this-anthro-life-podcast/
This Anthro Life website: https://www.thisanthrolife.org/
Substack blog: https://thisanthrolife.substack.com

IRL - Online Life Is Real Life
The Humans in the Machine

IRL - Online Life Is Real Life

Play Episode Listen Later Oct 24, 2023 21:41


They're the essential workers of AI — yet mostly invisible and exploited. Does it have to be this way? Bridget Todd talks to data workers and entrepreneurs pushing for change.
Millions of people work on data used to train AI behind the scenes. Often, they are underpaid and even traumatized by what they see. In this episode: a company charting a different path; a litigator holding big tech accountable; and data workers organizing for better conditions.
Thank you to Foxglove and Superrr for sharing recordings from the Content Moderators Summit in Nairobi, Kenya in May, 2023.
Richard Mathenge helped establish a union for content moderators after surviving a traumatic experience as a contractor in Kenya training OpenAI's ChatGPT.
Mercy Mutemi is a litigator for digital rights in Kenya who has issued challenges to some of the biggest global tech companies on behalf of hundreds of data workers.
Krista Pawloski is a full time data worker on Amazon's Mechanical Turk platform and is an organizer with the worker-led advocacy group, Turkopticon.
Safiya Husain is the co-founder of Karya, a company in India with an alternative business model to compensate data workers at rates that reflect the high value of the data.
IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.

Unsupervised Learning
UL NO. 402: Israeli Footage & Analysis, WSFTP + MOVEIT, AI Explainability, Andreessen vs. Perell on Writing, and more…

Unsupervised Learning

Play Episode Listen Later Oct 11, 2023 26:18


Israel analysis, a genetic data breach, active exploits against critical vulnerabilities, and a brilliant conversation between two writers about creativity

IRL - Online Life Is Real Life
With AIs Wide Open

IRL - Online Life Is Real Life

Play Episode Listen Later Oct 10, 2023 22:01


Are today's large language models too hot to handle? Bridget Todd digs into the risks and rewards of open sourcing the tech that makes ChatGPT talk.
In their competitive rush to release powerful LLMs to the world, tech companies are fueling a controversy about what should and shouldn't be open in generative AI.
In this episode, we meet open source research communities who have stepped up to develop more responsible machine learning alternatives.
David Evan Harris worked at Meta to make AI more responsible and now shares his concerns about the risks of open large language models for disinformation and more.
Abeba Birhane is a Mozilla advisor and cognitive scientist who calls for openness to facilitate independent audits of large datasets sourced from the internet.
Sasha Luccioni is a researcher and climate lead at Hugging Face who says open source communities are key to developing ethical and sustainable machine learning.
Andriy Mulyar is co-founder and CTO of Nomic, the startup behind the open source chatbot GPT4All, an offline and private alternative to ChatGPT.
IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.

IRL - Online Life Is Real Life
We're Back! IRL Season 7: People Over Profit

IRL - Online Life Is Real Life

Play Episode Listen Later Sep 26, 2023 1:33


This season, IRL host Bridget Todd meets people who are balancing the upsides of artificial intelligence with the downsides that are coming into view worldwide. Stay tuned for the first of five biweekly episodes on October 10! IRL is an original podcast from the non-profit Mozilla.

The Digital Analytics Power Hour
#223: Explainability in AI with Dr. Janet Bastiman

The Digital Analytics Power Hour

Play Episode Listen Later Jul 11, 2023 57:33


To trust something, you need to understand it. And, to understand something, someone often has to explain it. When it comes to AI, explainability can be a real challenge (definitionally, a "black box" is unexplainable)! With AI getting new levels of press and prominence thanks to the explosion of generative AI platforms, the need for explainability continues to grow. But, it's just as important in more conventional situations. Dr. Janet Bastiman, the Chief Data Scientist at Napier, joined Moe and Tim to, well, explain the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Augmented - the industry 4.0 podcast
Episode 107: Explainability in AI with Julian Senoner

Augmented - the industry 4.0 podcast

Play Episode Listen Later Feb 1, 2023 29:53