In episode 31 of Recsperts, I sit down with Elisabeth Lex, Full Professor of Human-Computer Interfaces and Inclusive Technologies at Graz University of Technology and a leading researcher at the intersection of recommender systems, psychology, and human-computer interaction. Together, we explore how recommender systems can become truly human-centric by integrating cognitive, emotional, and personality-aware models into their design.

Elisabeth begins by addressing a common reductionism in the field: treating users primarily as data points rather than as humans with goals, emotions, memories, and cognitive boundaries. We revisit the origins of psychology-informed recommendation, including the Grundy system, the first recommender system, built nearly 50 years ago, which framed book recommendation through stereotype modeling. From there, we discuss how the community's focus shifted toward solving recommendation mainly as an algorithmic optimization problem, often sidelining richer models of human decision-making.

We then map out the three major branches of psychology-informed RecSys (cognition-inspired, affect-aware, and personality-aware) and dive into practical examples. Elisabeth walks us through her work on modeling music re-listening behavior using cognitive architectures such as ACT-R (Adaptive Control of Thought-Rational) and shows how cognitive constructs like memory decay, attention, and familiarity can meaningfully augment standard approaches like collaborative filtering. We also explore how hybrid systems that combine cognitive models with collaborative filtering can yield not just higher accuracy but also more novelty, diversity, and clearer explanations.

Our conversation also turns to user-centric evaluation. Elisabeth argues that accuracy metrics alone cannot tell us whether a system is genuinely helpful.
Instead, we must measure attitudes, perceptions, motivations, and emotional responses, while carefully accounting for cognitive biases, UI effects, and users' lived experiences.

Towards the end, Elisabeth discusses emerging research directions such as hybrid AI (symbolic + sub-symbolic methods), the role of LLMs and agents, the risks of replacing human studies with automated evaluations, and the responsibility our community has to understand users beyond their clicks.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:15) - About Elisabeth Lex
(07:55) - Grundy, the first Recommender System
(09:03) - Bridging the Gap between Psychology and Modern RecSys
(17:21) - On how and when Elisabeth became a Researcher
(21:39) - Survey on Psychology-Informed RecSys
(39:29) - Personality-Aware Recommendation
(49:43) - Affect- and Emotion-Aware Recommendation
(01:01:37) - Cognition-Inspired Recommendation and the ACT-R Framework
(01:14:39) - Combining Collaborative Filtering and ACT-R for Explainability
(01:21:26) - Human-Centered Design
(01:26:15) - Further Challenges and Closing Remarks

Links from the Episode:
Elisabeth Lex on LinkedIn
Website of Elisabeth
AI for Society Lab
First International Workshop on Recommender Systems for Sustainability and Social Good | co-located with RecSys 2024
Second International Workshop on Recommender Systems for Sustainability and Social Good | co-located with RecSys 2025
HyPer Workshop: Hybrid AI for Human-Centric Personalization
Tutorial on Psychology-Informed RecSys
ACT-R: Adaptive Control of Thought-Rational
POPROX: Platform for OPen Recommendation and Online eXperimentation

Papers:
Elaine Rich (1979): User Modeling via Stereotypes
Lex et al. (2021): Psychology-informed Recommender Systems
Reiter-Haas et al. (2021): Predicting Music Relistening Behavior Using the ACT-R Framework
Moscati et al. (2023): Integrating the ACT-R Framework with Collaborative Filtering for Explainable Sequential Music Recommendation
Tran et al. (2024): Transformers Meet ACT-R: Repeat-Aware and Sequential Listening Session Recommendation

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
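The ACT-R mechanism discussed in the episode, base-level activation, can be sketched in a few lines. This is a generic illustration of the standard ACT-R base-level learning equation with the conventional decay default d = 0.5, not code from any of the papers above; the listening history is invented.

```python
import math

def base_level_activation(ages_in_hours, d=0.5):
    """ACT-R base-level learning: B = ln(sum of t_j^(-d)) over past
    exposures, so recent and frequent listens score higher,
    modeling memory decay and familiarity."""
    return math.log(sum(t ** -d for t in ages_in_hours))

# Invented listening history: hours elapsed since each play of a track.
history = {
    "track_a": [1, 5, 26],    # played often and recently
    "track_b": [100, 400],    # played rarely, long ago
}

ranked = sorted(history, key=lambda t: base_level_activation(history[t]),
                reverse=True)
print(ranked)  # ['track_a', 'track_b']
```

In a hybrid recommender along the lines the episode describes, a score like this would be blended with collaborative-filtering similarity rather than used alone.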
Marc Elovitz is Global Head of Investment Management Regulatory at McDermott Will & Schulte, a leading global law firm. Marc advises private fund managers on running their businesses consistent with all relevant laws, regulations and legal requirements. Marc's cutting-edge work also covers the latest trends of interest to private funds, including blockchain technology and digital assets. He advises on the legal and regulatory considerations involving virtual and digital currency business initiatives and the blockchain technology behind them.

In this podcast, we discuss:
From Litigation to Regulation
The Private Market Boom
"Project Crypto" and Regulatory Harmonisation
Beyond Digital Gold
The Yield Obstacle in Stablecoins
Future-Proofing Digital Assets
The Trust Factor in Private Equity
Solving the AI Explainability Crisis
The Delaware Governance Battle
Perspective through Fiction
In the current Predictive AI Quarterly we discuss key developments in predictive AI and share experiences from a concrete LLM project. Topics include TabPFN 2.5, new approaches to explainability, and the growing influence of AI agents on software development. In the practice segment we report on a multilingual text-analysis project for the non-profit association Monda Futura, covering the structured evaluation of roughly 850 visions of the future with the help of LLMs. We close with lessons learned on model choice, cost, and the sensible interplay of humans and AI.

**Zusammenfassung (Summary)**
TabPFN 2.5: scaling, distillation for production use, and higher inference speed
ExplainerPFN as an alternative to SHAP for feature importance without access to the original model
Trend toward AI agents that take over large parts of software development
Use case Monda Futura: analysis of 850 multilingual visions of the future (DE/FR/IT)
Pipeline: fragmentation, topic extraction, classification, and scenario creation
Effective use of GPT-5-Mini vs. GPT-5.2-Pro depending on task type
Central lesson: best results from human-in-the-loop rather than full automation

**Links**
Prior Labs TabPFN-2.5 Model Report https://priorlabs.ai/technical-reports/tabpfn-2-5-model-report
ExplainerPFN research paper (zero-shot feature importance) https://arxiv.org/abs/2601.23068
OpenCode - Open Source AI Coding Agent https://opencode.ai/
Monda Futura https://mondafutura.org/
OpenAI API & GPT model overview https://platform.openai.com/docs/models
OpenAI Structured Output Guide https://platform.openai.com/docs/guides/structured-outputs
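The four pipeline stages named above (fragmentation, topic extraction, classification, scenario creation) can be sketched schematically. This is an invented illustration, not the project's actual code: the LLM call is stubbed with a toy keyword heuristic so the example is self-contained, and all function names and topics are assumptions.

```python
import re
from collections import Counter

def fragment(text):
    """Stage 1: split a free-text vision into sentence-level fragments."""
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

def classify(frag):
    """Stages 2-3: topic extraction and classification.
    In the real pipeline this would be an LLM call returning structured
    output; a keyword heuristic stands in here."""
    topics = {"mobility": ["transport", "cars"], "energy": ["solar", "energy"]}
    for topic, keywords in topics.items():
        if any(k in frag.lower() for k in keywords):
            return topic
    return "other"

def summarize(visions):
    """Stage 4 input: aggregate classified fragments into per-topic
    counts, the raw material for scenario creation."""
    counts = Counter()
    for vision in visions:
        for frag in fragment(vision):
            counts[classify(frag)] += 1
    return counts

print(summarize(["Solar energy powers every home. Cars are shared, not owned."]))
```

The human-in-the-loop lesson from the episode would slot in between classification and scenario creation: a reviewer corrects the topic labels before any aggregate conclusions are drawn.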
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Larry Swanson, a knowledge architect, community builder, and host of the Knowledge Graph Insights podcast. They explore the relationship between knowledge graphs and ontologies, why these technologies matter in the age of AI, and how symbolic AI complements the current wave of large language models. The conversation traces the history of neuro-symbolic AI from its origins at Dartmouth in 1956 through the semantic web vision of Tim Berners-Lee, examining why knowledge architecture remains underappreciated despite being deployed at major enterprises like Netflix, Amazon, and LinkedIn. Swanson explains how RDF (Resource Description Framework) enables both machines and humans to work with structured knowledge in ways that relational databases can't, while Alsop shares his journey from knowledge management director to understanding the practical necessity of ontologies for business operations. They discuss the philosophical roots of the field, the separation between knowledge management practitioners and knowledge engineers, and why startups often overlook these approaches until scale demands them. You can find Larry's podcast at KGI.fm or search for Knowledge Graph Insights on Spotify and YouTube.

Timestamps
00:00 Introduction to Knowledge Graphs and Ontologies
01:09 The Importance of Ontologies in AI
04:14 Philosophy's Role in Knowledge Management
10:20 Debating the Relevance of RDF
15:41 The Distinction Between Knowledge Management and Knowledge Engineering
21:07 The Human Element in AI and Knowledge Architecture
25:07 Startups vs. Enterprises: The Knowledge Gap
29:57 Deterministic vs. Probabilistic AI
32:18 The Marketing of AI: A Historical Perspective
33:57 The Role of Knowledge Architecture in AI
39:00 Understanding RDF and Its Importance
44:47 The Intersection of AI and Human Intelligence
50:50 Future Visions: AI, Ontologies, and Human Behavior

Key Insights

1. Knowledge Graphs Combine Structure and Instances Through Ontological Design. A knowledge graph is built using an ontology that describes a specific domain you want to understand or work with. It includes both an ontological description of the terrain, defining what things exist and how they relate to one another, and instances of those things mapped to real-world data. This combination of abstract structure and concrete examples is what makes knowledge graphs powerful for discovery, question-answering, and enabling agentic AI systems. Not everyone agrees on the precise definition, but this understanding represents the practical approach most knowledge architects use when building these systems.

2. Ontology Engineering Has Deep Philosophical Roots That Inform Modern Practice. The field draws heavily from classical philosophy, particularly ontology (the nature of what you know), epistemology (how you know what you know), and logic. These thousands-year-old philosophical frameworks provide the rigorous foundation for modern knowledge representation. Living in Heidelberg surrounded by philosophers, Swanson has discovered how much of knowledge graph work connects upstream to these philosophical roots. This philosophical grounding becomes especially important during times when institutional structures are collapsing, as we need to create new epistemological frameworks for civilization; knowledge management and ontology become critical tools for restructuring how we understand and organize information.

3. The Semantic Web Vision Aimed to Transform the Internet Into a Distributed Database. Twenty-five years ago, Tim Berners-Lee, Jim Hendler, and Ora Lassila published a landmark article in Scientific American proposing the semantic web. While Berners-Lee had already connected documents across the web through HTML and HTTP, the semantic web aimed to connect all the data, essentially turning the internet into a giant database.
This vision led to the development of RDF (Resource Description Framework), which emerged from DARPA research and provides the technical foundation for building knowledge graphs and ontologies. The origin story involved solving simple but important problems, like disambiguating whether "Cook" referred to a verb, a noun, or a person's name at an academic conference.

4. Symbolic AI and Neural Networks Represent Complementary Approaches Like Fast and Slow Thinking. Drawing on Kahneman's "thinking fast and slow" framework, LLMs represent the "fast brain": learning monsters that can process enormous amounts of information and recognize patterns through natural language interfaces. Symbolic AI and knowledge graphs represent the "slow brain": capturing actual knowledge and facts that can counter hallucinations and provide deterministic, explainable reasoning. This complementarity is driving the re-emergence of neuro-symbolic AI, which combines both approaches. The fundamental distinction is that symbolic AI systems are deterministic and can be fully explained, while LLMs are probabilistic and stochastic, making them unsuitable for applications requiring absolute reliability, such as industrial robotics or pharmaceutical research.

5. Knowledge Architecture Remains Underappreciated Despite Powering Major Enterprises. While machine learning engineers currently receive most of the attention and budget, knowledge graphs actually power systems at Netflix, Amazon (the product graph), LinkedIn (the economic graph), Meta, and most major enterprises. The technology has been described as "the most astoundingly successful failure in the history of technology": the semantic web vision seemed to fail, yet more than half of web pages now contain RDF-formatted semantic markup through schema.org, and every major enterprise uses knowledge graph technology in the background.
Knowledge architects remain underappreciated partly because the work is cognitively difficult, requires talking to people (which engineers often avoid), and most advanced practitioners have PhDs in computer science, logic, or philosophy.

6. RDF's Simple Subject-Predicate-Object Structure Enables Meaning and Data Linking. Unlike relational databases that store data in tables with rows and columns, RDF uses the simplest linguistic structure: subject-predicate-object (like "Larry knows Stuart"). Each element has a unique URI identifier, which permits precise meaning and enables linked data across systems. This graph structure makes it much easier to connect data after the fact compared to navigating tabular structures in relational databases. On top of RDF sits an entire stack of technologies, including schema languages, query languages, ontological languages, and constraint languages: everything needed to turn data into actionable knowledge. The goal is inferring or articulating knowledge from RDF-structured data.

7. The Future Requires Decoupled, Modular Architectures Combining Multiple AI Approaches. The vision for the future involves separation of concerns through microservices-like architectures where different systems handle what they do best. LLMs excel at discovering possibilities and generating lists, while knowledge graphs excel at articulating human-vetted, deterministic versions of that information that systems can reliably use. Every one of Swanson's 300 podcast interviews over ten years ultimately concludes that regardless of technology, success comes down to human beings, their behavior, and the cultural changes needed to implement systems. The assumption that we can simply eliminate people from processes misses that humans remain essential to making these systems work.
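The subject-predicate-object structure described in insight 6 can be illustrated in plain Python. This is a toy sketch, not a real RDF library: short names stand in for full URIs, and the wildcard-matching `query` function mimics what a SPARQL pattern would do; for real work one would use an RDF toolkit such as rdflib.

```python
# Minimal subject-predicate-object triples in plain Python.
triples = {
    ("Larry", "knows", "Stuart"),
    ("Larry", "hosts", "Knowledge Graph Insights"),
    ("Stuart", "hosts", "Crazy Wisdom"),
}

def query(s=None, p=None, o=None):
    """Pattern-match triples; None acts as a wildcard,
    like a variable in a SPARQL query."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who hosts something? Both Larry and Stuart match.
print(sorted(t[0] for t in query(p="hosts")))  # ['Larry', 'Stuart']
```

The point of the graph shape is visible even at this scale: adding a new fact is one more triple, and new questions need no schema migration, unlike adding a column to a relational table.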
Claire chatted to Elmira Yadollahi from Lancaster University about how children interact with and relate to robots. Elmira Yadollahi is an Assistant Professor of Computer Science at Lancaster University. She has a joint PhD in robotics and computer science from EPFL in Switzerland and Instituto Superior Técnico in Portugal. Her research tackles explainability in robotics, as well as multimodal perception and explanation methods. Her core expertise is in child–robot interaction, with a focus on expectation management, trust, and AI literacy. She has organised workshops on Explainability in Human-Robot Interaction and the Design and Development of Robots and AI with Children. Support Robot Talk on Patreon: https://www.patreon.com/ClaireAsher
In this episode of Ops Cast, we are talking about metrics, but not dashboards, tools, or attribution models for their own sake.

Michael Hartmann is joined by our guest Josh McClanahan, Co-Founder and CEO of AccountAim. Josh brings a business operations perspective to reporting and analytics, working closely with leadership teams to identify which numbers actually matter and how to use them to make better decisions.

This conversation focuses on the shift from reporting activity to driving action. Josh shares why many teams produce technically impressive metrics that fail to influence leadership, and how Ops professionals can reframe data in a way that connects directly to revenue, profitability, and how the business truly makes money.

You will hear Josh break down which metrics executives care about most, including financial measures like LTV and CAC, how those metrics change as companies mature, and why explainability often matters more than precision.

The group also discusses how Ops teams can decide when data is "good enough" to act on, how to prepare for executive conversations beyond pulling numbers, and the common mistakes teams make when data is presented without context.

This episode is especially relevant for Marketing Ops, RevOps, and BizOps professionals who want to move from being seen as report builders to trusted business advisors.

Topics covered include:
• The gap between reporting and decision-making
• Metrics that matter most to executives
• Financial literacy for Ops leaders
• Explainability versus complexity in analytics
• Communicating data in a way that drives action

Make sure to watch this episode if you want to better align your reporting with business outcomes and elevate the impact of your Ops work.

Episode Brought to You By MO Pros
The #1 Community for Marketing Operations Professionals

MarketingOps.com is curating the GTM Ops Track at Demand & Expand (May 19-20, San Francisco) - the premier B2B
marketing event featuring 600+ practitioners sharing real solutions to real problems. Use code MOPS20 for 20% off tickets, or get 35-50% off as a MarketingOps.com member. Learn more at demandandexpand.com.Support the show
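The LTV and CAC measures Josh highlights reduce to a few inputs. A minimal sketch assuming the common SaaS definitions (LTV as margin-adjusted revenue per customer divided by churn, CAC as sales-and-marketing spend divided by new customers won); all figures are invented.

```python
def ltv(avg_monthly_revenue, gross_margin, monthly_churn):
    """Lifetime value: margin-adjusted monthly revenue divided by the
    monthly churn rate (expected lifetime = 1 / churn)."""
    return avg_monthly_revenue * gross_margin / monthly_churn

def cac(sales_marketing_spend, new_customers):
    """Customer acquisition cost: spend divided by customers acquired."""
    return sales_marketing_spend / new_customers

# Invented example figures
customer_ltv = ltv(avg_monthly_revenue=500, gross_margin=0.8, monthly_churn=0.02)
acquisition_cost = cac(sales_marketing_spend=300_000, new_customers=60)

print(round(customer_ltv))                     # margin-adjusted LTV
print(round(acquisition_cost))                 # cost per new customer
print(round(customer_ltv / acquisition_cost))  # LTV:CAC ratio
```

The ratio is the number an executive actually reacts to, which is the episode's point about explainability: three transparent inputs beat an opaque attribution model.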
What happens when artificial intelligence moves beyond images and begins interpreting clinical notes, kidney biopsies, multimodal cancer data, and even healthcare costs?

In this episode, I open the year by exploring four recent studies that show how AI is expanding across the full spectrum of medical data. From Large Language Models (LLMs) reading unstructured clinical text to computational pathology supporting rare kidney disease diagnosis, multimodal cancer prediction, and cost-effectiveness modeling in oncology, this session connects innovation with real-world clinical impact.

Across all discussions, one theme is clear: progress depends not just on performance, but on integration, validation, interpretability, and trust.

HIGHLIGHTS:
00:00–05:30 | Welcome & 2026 Outlook
New year reflections, global community check-in, and upcoming Digital Pathology Place initiatives.
05:30–16:00 | LLMs for Clinical Phenotyping
How GPT-4 and NLP automate phenotyping from free-text EHR notes in Crohn's disease, reducing manual chart review while matching expert performance.
16:00–23:30 | AI Screening for Fabry Nephropathy
A computational pathology pipeline identifies foamy podocytes on renal biopsies and introduces a quantitative Zebra score to support nephropathologists.
23:30–29:30 | Is AI Cost-Effective in Oncology?
A Markov model evaluates AI-based response prediction in locally advanced rectal cancer, highlighting when AI delivers value, and when it does not.
29:30–38:30 | LLM-Guided Arbitration in Multimodal AI
A multi-expert deep learning framework uses large language models to resolve disagreement between AI models, improving transparency and robustness.
38:30–44:30 | Real-World AI & Cautionary Notes
Ambient clinical scribing in practice, AI-hallucinated citations, and why guardrails remain essential.

KEY TAKEAWAYS
• LLMs can extract meaningful clinical phenotypes from narrative notes at scale
• AI can support rare disease diagnosis without replacing expert judgment
• Economic value matters as much as technical performance
• Explainability and arbitration are becoming critical in multimodal AI systems
• Human oversight remains central to responsible adoption

Resources & References
Digital Pathology Place: https://www.digitalpathologyplace.com
Digital Pathology 101 (free PDF, updates included)
Automating clinical phenotyping using natural language processing
Zebra bodies recognition by artificial intelligence (ZEBRA): a computational tool for Fabry nephropathy
Cost-effectiveness analysis of artificial intelligence (AI) for response prediction of neoadjuvant radio(chemo)therapy in locally advanced rectal cancer (LARC) in the Netherlands
A multi-expert deep learning framework with LLM-guided arbitration for multimodal histopathology prediction

Support the show

Get the "Digital Pathology 101" FREE E-book and join us!
Dr. Kelly Cohen is a Professor of Aerospace Engineering at the University of Cincinnati and a leading authority in explainable, certifiable AI systems. With more than 31 years of experience in artificial intelligence, his research focuses on fuzzy logic, safety-critical systems, and responsible AI deployment in aerospace and autonomous environments. His lab's work has received international recognition, with students earning top global research awards and building real-world AI products used in industry. In episode 190 of the Disruption Now Podcast,
Most organizations are sitting on mountains of documents, PDFs, emails, and images they still cannot fully search, organize, or understand.

In this episode of IT Visionaries, host Chris Brandt sits down with Tim McIntire, CTO of Hyland, to unpack why unstructured data is still one of the biggest blockers between AI hype and real results. Tim breaks down why so many companies struggle to access the content they already have and what it really takes to make that information usable, trustworthy, and valuable.

From building content that is ready for AI to unlocking new context-aware agents and improving governance and transparency, Tim explains how leading organizations are finally turning everyday content into real business impact, and why the future of enterprise AI starts with cleaning up what is already in the basement.

Key Moments:
02:36 - What Hyland Actually Does
04:18 - Why 80% of Enterprise Data Is Unusable
06:46 - From 30,000 Manual Indexes to Automation
07:47 - Vectorization: Making Documents AI-Ready
09:45 - The ROI of Eliminating Mundane Work
10:55 - AI vs RPA: Why Intelligence Changes Everything
13:23 - Federate Don't Migrate: Meeting Customers Where They Are
16:29 - Governance Can't Be an Afterthought
18:11 - The Explainability Breakthrough
20:10 - Day 1 to Day 90: Faster Time to Value
22:56 - Enterprise Agent Mesh Explained
24:06 - Context Engineering: The New AI Superpower
26:36 - Confidence Scores: When Humans Step In
28:12 - Right Model for the Right Job
31:00 - The 18-Month Prediction: Agents Everywhere

--

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking.
They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

---

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
How do you turn mission into products that actually work?

In this episode of Between Product and Partnerships, Pandium CEO Cristina Flaschen sits down with product leader Jacqueline Karlin to unpack how mission-driven thinking translates into real-world execution across vastly different scales.

From small business lending at Amazon, to global expansion on Alexa, to early conversational commerce at WhatsApp, Jacqueline shares concrete examples of how anchoring on customer problems shapes better decisions, especially when navigating new technologies like AI and agentic commerce. The conversation goes deep on how product teams move from conviction to action, turning "why" into repeatable, defensible "how."

Who we sat down with
Jacqueline Karlin is a senior product leader with experience building and scaling products at Amazon (Lending & Alexa), WhatsApp, PayPal, Expedia, and more. Her work spans financial inclusion, commerce, AI-powered interfaces, and international platform expansion.

Across roles, Jacqueline has focused on:
Working backwards from real customer problems
Launching and localizing products globally
Building trust-first experiences in regulated, high-stakes domains like payments and commerce

Today, she's deeply engaged in the evolution of agentic commerce and how AI agents are changing how consumers discover, decide, and transact.

Key topics
Mission-driven product building and defining "why": How Jacqueline's personal mission shaped her career choices, and why understanding what motivates you as a product leader is critical to building products with long-term impact.
Specific use cases Jacqueline has worked on: Real examples from Amazon Lending, Alexa's international expansion, and WhatsApp's early commerce tooling, showing how different customer problems emerge at different layers of scale.
Getting from "why" to "how": How strong teams translate mission into execution through hypotheses, customer conversations, localization, experimentation, and fast feedback (without chasing trends or shipping for novelty's sake).

Episode highlights
02:20 — Choosing roles based on mission, not momentum
06:36 — Learning from small business sellers and reshaping lending products
11:34 — What it really takes to launch Alexa in new countries
21:36 — Early lessons from conversational commerce on WhatsApp
22:50 — Defining agentic commerce and where it's already showing up
25:12 — Why explainability matters when AI touches money
29:23 — Using hypotheses to move from intuition to execution

Key takeaways
1. Mission creates clarity when decisions get hard: Mission acts as a decision filter, helping product leaders prioritize the right problems and navigate tradeoffs with confidence.
2. Customer insight beats assumptions at every scale: Direct conversations with users consistently surfaced constraints and opportunities that dashboards alone couldn't reveal.
3. "Why" must survive contact with reality: Strong teams treat ideas as hypotheses, testing and refining them quickly based on real feedback.
4. Global products are built locally: Successful international launches depend on cultural relevance, local partners, and thoughtful defaults.
5. Trust is foundational in AI-driven commerce: Explainability and transparency become core requirements as agents take on transactional responsibility.

For more insights on partnerships, ecosystems, and integrations, visit www.pandium.com
Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt's HexAI podcast host, Jordan Gass-Pooré, about his work focusing on trustworthy machine learning, including interpretability of machine learning models, algorithmic fairness, robustness, causal inference and graphical models.Concentrating on explainable AI, they speak in depth about the explainability of Large Language Models (LLMs), the field of in-context explainability and IBM's new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students and touch on the personalization of explainability outputs for different users and on leveraging explainability to help guide and optimize LLM reasoning. They also discuss IBM's interest in collaborating with university labs around explainable AI in healthcare and on related work at IBM looking at the steerability of LLMs and combining explainability and steerability to evaluate model modifications.This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions ideal for stimulating new academic projects and university-industry collaborations.Guest profile: https://research.ibm.com/people/dennis-weiICX360 Toolkit: https://github.com/IBM/ICX360
How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on as a self-proclaimed "AI skeptic". Grant shared that after 15 years of being "scared to death" by high-false-positive AI, modern LLMs have changed the game.

The key to trust lies in two pillars: explainability (is the decision reasonable?) and traceability (can you audit the entire data trail, including all 40-50 queries?). Grant talks about the critical architectural components for regulated industries, including single-tenancy, bring-your-own-cloud (BYOC) for data sovereignty, and model portability.

In this episode we compare AI SOC to traditional MDRs and discuss real-world "bake-off" results where an AI SOC had 99.3% agreement with a human team on 12,000 alerts but was 11x faster, with an average investigation time of just four minutes.

Guest Socials - Grant's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

(00:00) Introduction
(02:00) Who is Grant Oviatt?
(02:30) How to Establish Trust in an AI SOC for Regulated Environments
(03:45) Explainability vs. Traceability: The Two Pillars of Trust
(06:00) The "Hard SOC Life": Pre-AI vs. AI SOC
(09:00) From AI Skeptic to AI SOC Founder: What Changed?
(10:50) The "Aha!" Moment: Breaking Problems into Bite-Sized Pieces
(12:30) What Regulated Bodies Expect from an AI SOC
(13:30) Data Management: The Key for Regulated Industries (PII/PHI)
(14:40) Why Point-in-Time Queries are Safer than a SIEM
(15:10) Bring-Your-Own-Cloud (BYOC) for Financial Services
(16:20) Single-Tenant Architecture & No Training on Customer Data
(17:40) Bring-Your-Own-Model: The Rise of Model Portability
(19:20) AI SOC vs. MDR: Can it Replace Your Provider?
(19:50) The 4-Minute Investigation: Speed & Custom Detections
(21:20) The Reality of Building Your Own AI SOC (Build vs. Buy)
(23:10) Managing Model Drift & Updates
(24:30) Why Prophet Avoids MCPs: The Lack of Auditability
(26:10) How Far Can AI SOC Go? (Analysis vs. Threat Hunting)
(27:40) The Future: From "Human in the Loop" to "Manager in the Loop"
(28:20) Do We Still Need a Human in the Loop? (95% Auto-Closed)
(29:20) The Red Lines: What AI Shouldn't Automate (Yet)
(30:20) The Problem with "Creative" AI Remediation
(33:10) What AI SOC is Not Ready For (Risk Appetite)
(35:00) Gaining Confidence: The 12,000 Alert Bake-Off (99.3% Agreement)
(37:40) Fun Questions: Iron Mans, Texas BBQ & Seafood

Thank you to Prophet Security for sponsoring this episode.
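Traceability as Grant describes it means every query the AI ran during an investigation can be audited afterwards. A minimal sketch of such an audit trail, with invented field names and stubbed query runners; a production AI SOC would record far more (data sources, model version, token-level rationale).

```python
import json
from datetime import datetime, timezone

class Investigation:
    """Records every query a hypothetical AI analyst runs, so the full
    decision trail can be audited after the alert is closed."""

    def __init__(self, alert_id):
        self.alert_id = alert_id
        self.trail = []

    def run_query(self, description, query_fn):
        # Execute the query and log what was asked and what came back.
        result = query_fn()
        self.trail.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "query": description,
            "result": result,
        })
        return result

    def audit_log(self):
        """Serialize the whole trail for a compliance review."""
        return json.dumps({"alert": self.alert_id, "steps": self.trail},
                          indent=2)

inv = Investigation("ALERT-1234")
inv.run_query("lookup source IP reputation", lambda: "clean")
inv.run_query("check user login history", lambda: "no anomalies")
print(f"{len(inv.trail)} queries recorded")
```

Explainability (was the verdict reasonable?) would then be a human reading this log, which is exactly the "manager in the loop" posture the episode describes.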
As companies rush to implement AI and automated decision-making tools, they may be walking into a legal minefield. On this episode of Today in Tech, host Keith Shaw speaks with attorney Rob Taylor from Carstens, Allen & Gourley about the growing legal risks tied to agentic AI, automated hiring, and the rise of ADM (automated decision-making) regulations.

Rob breaks down:
* Why AI tools used in hiring and insurance may trigger liability
* How companies are getting ADM compliance wrong
* What laws already apply even without new AI regulations
* Real-world examples like credit scoring, job screening, and sentiment analysis
* Why disclosure, explainability, and data retention are essential
* Who's liable: the company or the AI developer?

Chapters
00:00 Legal risks in AI and ADM
01:00 Common mistakes companies make
06:00 High-risk use cases: hiring, credit, insurance
10:00 Disclosure and consent pitfalls
15:00 Explainability and record-keeping laws
20:00 Unintentional bias in hiring algorithms
28:00 Who is liable: developer or deployer?
34:00 What future lawsuits might target
37:00 Fixing flawed AI governance
41:00 Litigation as the great teacher
By Adam Turteltaub

Why did the AI do that? It's a simple and common question, but the answer is often opaque, with people referring to black boxes, algorithms, and other words that only those in the know tend to understand.

Allesia Falsone, a non-executive director of Innovate UK, says that's a problem. In cases where AI has run amok, the fallout is often worse because the company is unable to explain why the AI made the decision it made and what data it was relying on. AI, she argues, needs to be explainable to regulators and the public. That way all sides can understand what the AI is doing (or has done) and why.

To create more explainable AI, she recommends creating a dashboard showing the factors that influence the decisions made. In addition, teams need to track changes made to the model over time. By doing so, when the regulator or public asks why something happened, the organization can respond quickly and clearly. And by embracing a more transparent process and involving compliance early, organizations can head off potential AI issues early in the process.

Listen in to hear her explain the virtues of explainability.
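The dashboard-plus-changelog approach Falsone recommends can be sketched as a simple record structure: each automated decision stores the factors that drove it and the model version that produced it, so a regulator's question can be answered later. Everything here (field names, the example loan decision, the changelog entry) is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """One automated decision, with the factors behind it and the
    model version, so it can be explained after the fact."""
    decision_id: str
    outcome: str
    factors: dict          # factor name -> contribution weight
    model_version: str

# Track changes to the model over time, as Falsone recommends.
model_changelog = {
    "v1.2": "retrained 2025-01 with updated income data",
}

record = DecisionRecord(
    decision_id="loan-0042",
    outcome="declined",
    factors={"debt_to_income": 0.6, "payment_history": 0.3, "tenure": 0.1},
    model_version="v1.2",
)

def explain(rec):
    """Answer 'why did the AI do that?' from the stored record."""
    top = max(rec.factors, key=rec.factors.get)
    return (f"Decision {rec.decision_id}: {rec.outcome}; "
            f"main factor: {top}; model {rec.model_version} "
            f"({model_changelog[rec.model_version]})")

print(explain(record))
```

A dashboard would aggregate these records; the point is that the explanation is written down at decision time, not reconstructed after something goes wrong.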
If most companies are using the same AI systems, how can they stand out and get ahead? And as agentic AI becomes table stakes, what do enterprises need to keep in mind to make AI work? And how can we even trust an AI-powered workplace when most people can't even explain the basics of AI? We're learning from the experts. Accenture's Mary Hamilton joins the Everyday AI show to talk about building trust in an autonomous workplace, how we can prepare for the future of work, and four emerging AI trends you can't miss.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
AI-Powered Autonomy Shaping Future Work
Generative AI's Impact on Business Transformation
Accenture Technology Vision 2025 Overview
Key Trends: Autonomy and Enterprise AI Adoption
Human Capability Expansion via AI Tools
Trust, Explainability, and Responsible AI Practices
Agentic AI Models and Productivity Shifts
Continuous Learning Loops in Workplace AI
AI-Powered Robotics and Multimodal Integration
Personalization and Brand Voice with AI Agents

Timestamps:
00:00 "AI's Impact on Business Autonomy"
03:33 Accenture's Global Consultancy Overview
09:48 Technology as a Game-Changing Partner
12:16 Reinventing Responsible Tech Use
14:31 Building Trust Through AI Interactions
18:17 Building Trust in Enterprise Data
23:20 Embracing AI: Active Learning Loop
26:24 "Embracing Efficiency with AI Agents"

Keywords:
AI powered autonomy, generative AI, large language models, future of work, automation, business transformation, Accenture, innovation centers, strategic visioning, co-creation, ecosystem partners, digital core, technology consultancy, technology reinvention, enterprise AI adoption, operational efficiency, Technology Vision 2025, AI trends, human-like capabilities, language barrier, technology acceleration, digital agents, digital transformation, customer interaction, trust in AI, responsible AI, data platform, knowledge graphs, AI-driven robotics, warehouse automation, personalization at scale, brand voice in AI, digital twin, agentic models, observability, traceability, explainability, continuous learning loop, employee upskilling, generative AI productivity, change management, value-driven outcomes, super agents, utility agents, orchestrator agents, AI partner, human agency, AI collaboration, AI model accuracy, enterprise adaptation, digital twin technology, business process automation, AI in branding, personalized AI assistants, AI-powered design tools, responsible data usage, AI-enabled content creation

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
The Presidential Task Force on AI and Digital Technologies welcomes you to Wexford, a fictional city that has purchased AI-enabled law enforcement tools with a black box provision from a tech vendor. Task Force member Elizabeth Daitz moderates a discussion on the complexities of AI usage in criminal investigations and prosecution. Panelists Christian Quinn, Andrew Warshawer, Jerome Greco, and the Honorable Paul Grimm provide insightful perspectives on the significance of the black box provision, ethical and legal implications, and the need for transparency and coordination among stakeholders to ensure these tools are used effectively and justly in the criminal justice system. 00:00 Introductions 02:18 Fictional Case Study: Sentinel AI in Law Enforcement 03:31 Regulatory Landscape and Challenges 05:50 Bias and Explainability in AI 12:19 AI in Law Enforcement 18:01 Legal Implications of AI in Prosecution 35:52 Defense Perspective on AI Evidence 43:54 Challenging Unverifiable Evidence 46:57 Litigation Strategy and Expert Witnesses 49:00 Economic Barriers in Defense Technology 53:33 Judicial Perspectives on AI Evidence 01:13:53 Key Takeaways and Leadership in AI 01:22:11 Conclusion and Final Thoughts
Every SaaS company is racing to "add AI," but most are doing it wrong. In this episode, Megh Gautam, former Chief Product Officer at Crunchbase, reveals the hard truths behind building AI into established SaaS products. From avoiding hype-driven features to building trust through data quality and transparency, Megh shares how Crunchbase rolled out AI-powered capabilities without breaking user trust. He also breaks down the internal alignment, cross-functional execution, and relentless feedback loops required to ship AI features that actually matter.

Key Takeaways:
Start with Real User Problems
- AI should not be an "add-on story" — it must solve a core customer pain.
- Crunchbase began with AI in search, a high-usage, high-friction feature.
- Prioritize critical workflows over "nice-to-have" gimmicks.
Data Quality Determines Trust
- Bad data in = garbage out, especially with AI models.
- Crunchbase spent a decade building clean, reliable data pipelines before layering AI.
- Trustworthy results require grounding AI outputs in verified "truth sets."
User Trust Demands Transparency
- Customers don't just want answers — they want to know how those answers were derived.
- Explainability and confidence thresholds are essential for adoption.
- If unsure, don't hallucinate — caveat results and suggest alternatives.
AI is a Company-Wide Effort, Not Just a Product Launch
- Designers, engineers, PMs, marketing, and GTM must move in lockstep.
- Pricing, packaging, and positioning are as critical as the technical build.
- Internal discomfort is normal — priorities will shift faster than in traditional SaaS launches.
Continuous Feedback Loops Drive Iteration
- Early adopter programs and dense customer feedback cycles are critical.
- Patterns of confusion often surface only after repeated customer interactions.
- AI workflows blur traditional SaaS team boundaries — ownership must evolve.

Chapters:
00:10 - Introduction
00:50 - Megh's SaaS journey (Twilio, Dropbox, Crunchbase)
02:45 - AI hype vs. solving real user problems
06:05 - Why Crunchbase started with AI in search
10:17 - Data quality as the foundation for trustworthy AI
15:07 - Overcoming AI skepticism with transparency
20:01 - Aligning product, engineering, marketing, and GTM on AI launches
25:46 - Feedback loops and customer education
30:32 - Lightning Round: Megh's favorite AI tools
36:27 - Closing thoughts and key reminders

Visit our website - https://saassessions.com/
Connect with me on LinkedIn - https://www.linkedin.com/in/sunilneurgaonkar/
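Megh's "if unsure, don't hallucinate" rule amounts to gating answers on confidence and grounding. A minimal sketch of that gate; the threshold, wording, and function are illustrative assumptions, not Crunchbase's actual implementation:

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff; tuned per feature in practice

def answer_with_caveat(answer: str, confidence: float, sources: list) -> str:
    """Return the answer only when it is grounded and confident;
    otherwise caveat it and point at sources instead of guessing."""
    if not sources:
        # No verified "truth set" behind the answer: refuse rather than hallucinate.
        return "No verified source found - try refining the search."
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{answer} (sources: {', '.join(sources)})"
    # Below threshold: surface the answer, but flag it and show what to check.
    return f"Low confidence: {answer}. Please verify against: {', '.join(sources)}"

print(answer_with_caveat("Acme raised a $12M Series A", 0.93, ["filing-2023"]))
print(answer_with_caveat("Acme raised a $12M Series A", 0.41, ["blog-post"]))
```

The design choice mirrors the episode's point: transparency about *how* an answer was derived (sources, confidence) matters as much as the answer itself.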
AI growth with no rules? That's not bold. It's reckless. Everyone's racing to scale AI. More data, faster tools, flashier launches. But here's what no one's saying out loud: growth without governance doesn't make you innovative. It makes you vulnerable. Ignore ethics, and you're building an empire on quicksand. In this episode, we're breaking down how to scale AI the right way, without wrecking trust, compliance, or your future: Sustainable Growth with AI: Balancing Innovation with Ethical Governance, an Everyday AI chat with Rajeev Kapur and Jordan Wilson.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Questions for Rajeev or Jordan? Go ask.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Balancing AI Innovation with Ethical Governance
Introduction of Rajeev Kapur and Eleven o Five Media
Rajeev Kapur's Background in AI
Companies Balancing AI Innovation and Ethics
Formation of AI Ethics Board
Data Management as Competitive Advantage
Privacy and Ethics as Product Features
Governance and Ethical Standards in AI Use
Impact of Regulatory Changes on AI Use
Deepfakes and Their Implications
Encouragement for Companies to Lead Ethically in AI

Timestamps:
00:00 Navigating AI: Innovation vs. Risks
04:00 "AI Startup's Spatial Audio Journey"
06:49 AI Ethics Oversight & Governance
10:04 Strategic AI Advisory Team Formation
15:34 AI Strategy and Governance Essentials
16:55 Global Standardization Needed for AI Policies
22:47 AI Ethics: Innovation vs. Deepfakes
25:48 "Regulate Deepfakes Like Nukes"
27:17 Leadership Vision for Future Success

Keywords:
AI innovation, Ethical governance, Large language models, Data privacy, AI ethics board, AI governance, TDWI, Microsoft stack, Generative AI, AI algorithms, Spatial audio, Deep fakes, Data differentiation, Machine learning, Cyber security, Enterprise technology, Rajeev Kapur, 11:05 Media, AI safety, OpenAI, Data utilization, Ethical AI alignment, Regulatory aspect, AI models, Innovation vs. ethics, AI data privacy, Explainability, Data scientists, Third-party audits, Transparent AI usage, AI-driven growth, Monitoring feedback loops, Worst case testing, Smart regulations, Digital twins, Disinformation, AI bias mitigation, Data as new oil, Refining data

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this episode of Crazy Wisdom, Stewart Alsop speaks with Juan Verhook, founder of Tender Market, about how AI reshapes creativity, work, and society. They explore the risks of AI-generated slop versus authentic expression, the tension between probability and uniqueness, and why the complexity dilemma makes human-in-the-loop design essential. Juan connects bureaucracy to proto-AI, questions the incentives driving black-box models, and considers how scaling laws shape emergent intelligence. The conversation balances skepticism with curiosity, reflecting on authenticity, creativity, and the economic realities of building in an AI-driven world. You can learn more about Juan Verhook's work or connect with him directly through his LinkedIn or via his website at tendermarket.eu.

Check out this GPT we trained on the conversation

Timestamps
00:00 – Stewart and Juan open by contrasting AI slop with authentic creative work.
05:00 – Discussion of probability versus uniqueness and what makes output meaningful.
10:00 – The complexity dilemma emerges, as systems grow opaque and fragile.
15:00 – Why human-in-the-loop remains central to trustworthy AI.
20:00 – Juan draws parallels between bureaucracy and proto-AI structures.
25:00 – Exploration of black-box models and the limits of explainability.
30:00 – The role of economic incentives in shaping AI development.
35:00 – Reflections on nature versus nurture in intelligence, human and machine.
40:00 – How scaling laws drive emergent behavior, but not always understanding.
45:00 – Weighing authenticity and creativity against automation's pull.
50:00 – Closing thoughts on optimism versus pessimism in the future of work.

Key Insights
AI slop versus authenticity – Juan emphasizes that much of today's AI output tends toward "slop," a kind of lowest-common-denominator content driven by probability. The challenge, he argues, is not just generating more information but protecting uniqueness and cultivating authenticity in an age where machines are optimized for averages.
The complexity dilemma – As AI systems grow in scale, they become harder to understand, explain, and control. Juan frames this as a "complexity dilemma": every increase in capability carries a parallel increase in opacity, leaving us to navigate trade-offs between power and transparency.
Human-in-the-loop as necessity – Instead of replacing people, AI works best when embedded in systems where humans provide judgment, context, and ethical grounding. Juan sees human-in-the-loop design not as a stopgap, but as the foundation for trustworthy AI use.
Bureaucracy as proto-AI – Juan provocatively links bureaucracy to early forms of artificial intelligence. Both are systems that process information, enforce rules, and reduce individuality into standardized outputs. This analogy helps highlight the social risks of AI if left unexamined: efficiency at the cost of humanity.
Economic incentives drive design – The trajectory of AI is not determined by technical possibility alone but by the economic structures funding it. Black-box models dominate because they are profitable, not because they are inherently better for society. Incentives, not ideals, shape which technologies win.
Nature, nurture, and machine intelligence – Juan extends the age-old debate about human intelligence into the AI domain, asking whether machine learning is more shaped by architecture (nature) or training data (nurture). This reflection surfaces the uncertainty of what "intelligence" even means when applied to artificial systems.
Optimism and pessimism in balance – While AI carries risks of homogenization and loss of meaning, Juan maintains a cautiously optimistic view. By prioritizing creativity, human agency, and economic models aligned with authenticity, he sees pathways where AI amplifies rather than diminishes human potential.
In this episode of Digitally Curious, host Andrew Grill, renowned futurist and author, sits down with Shannon Scott, Senior Vice President and Global Head of Product at Airwallex, one of the world's fastest-growing FinTech innovators.

Key Topics Covered:
Shannon's Journey: From rural Victoria to leading global product strategy at Airwallex, Shannon shares how his background in computer science and mechatronic engineering shapes his approach to building next-generation financial products.
Engineering Mindset in Product Leadership: Discover how thinking from first principles and understanding technology "under the hood" enables Airwallex to deliver seamless, global financial services and challenge industry assumptions.
AI's Transformative Role in Financial Services: Explore how AI is not just automating traditional tasks like fraud detection and compliance, but fundamentally transforming business workflows, onboarding, and financial operations — turning hours of manual work into minutes.
Agentic AI Explained: Shannon demystifies agentic AI, describing how autonomous AI agents can handle complex, multi-step financial processes, from vendor onboarding to payment reconciliation, and what this means for both large and small businesses.
Trust, Explainability & Regulation: The episode delves into the importance of maintaining trust and explainability in AI-driven finance, the role of human feedback, and why robust regulation gives financial services a head start in adopting AI responsibly.
Data as a Strategic Asset: Learn why proprietary, high-quality data is the new competitive edge in the AI era, and how modular, adaptable data infrastructure is critical for future-proofing financial services.
The Future of Decision-Making: Andrew and Shannon discuss the evolution of AI from an operational tool to a strategic decision partner, capable of suggesting best practices, optimising approval flows, and proactively managing risk.
Actionable Insights: Shannon shares three practical steps for listeners to better understand and leverage agentic AI in finance:
- Embrace podcasts and diverse learning sources
- Experiment with new AI tools and services
- Continuously question and seek better ways of working

Resources
Airwallex Website
Shannon on LinkedIn

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order
Your host is Actionable Futurist® Andrew Grill. For more on Andrew, what he speaks about, and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious
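The multi-step processes Shannon describes, such as vendor onboarding through to payment, follow a common orchestration pattern: run each step in order, record the result, stop on failure. A deliberately simplified sketch of that pattern, with hypothetical step names, no real Airwallex API, and the LLM reasoning loop left out:

```python
def run_workflow(steps, context):
    """Tiny orchestrator sketch: execute each step, keep an audit trail
    (the explainability hook regulators ask for), and halt on failure."""
    trail = []
    for name, step in steps:
        ok, context = step(context)
        trail.append((name, "ok" if ok else "failed"))
        if not ok:
            break
    return context, trail

# Hypothetical steps for an illustrative vendor-onboarding flow.
def verify_details(ctx):
    return bool(ctx.get("vendor_name")), ctx

def run_kyc_check(ctx):
    ctx = {**ctx, "kyc_passed": ctx.get("country") != "sanctioned"}
    return ctx["kyc_passed"], ctx

def schedule_first_payment(ctx):
    return True, {**ctx, "payment_scheduled": True}

steps = [("verify", verify_details), ("kyc", run_kyc_check), ("pay", schedule_first_payment)]
ctx, trail = run_workflow(steps, {"vendor_name": "Acme Pty", "country": "AU"})
print(trail)
```

The audit trail is the part that matters for the episode's trust theme: every automated step leaves a record a human can inspect.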
Highlights from the conversation:
- Her early inspiration growing up in Goa with limited exposure to career options; her father's intellectual influence despite personal hardships, and her shift in focus to technology.
- A personal tragedy sparked her resolve to become financially independent and to learn deeply.
- The inspirational quote that shaped her mindset: "Even if your dreams haven't come true, be grateful that so haven't your nightmares."
- Her first role at a startup, with hands-on work on networking protocols (LDAP, VPN, DNS), learning from RFCs and O'Reilly books alone — no StackOverflow! The importance of building deep expertise for long-term success.
- Troubleshooting and systems thinking: the transition from reactive fixes to logical, structured problem-solving, and how depth of understanding helped in debugging and system optimization.
- Her move to Yahoo, where she led Service Engineering for mobile and ads across global data centers, got early exposure to big data and machine learning through ad recommendation systems, and built "performance and scale muscle" by working at massive scale.
- Scale and performance then vs. now: the problems remain the same, but data volumes and complexity have exploded; modern tools like AI/ML can help identify relevance and anomalies in large data sets.
- Designing with scale in mind: flip the design approach to think scale-first, not POC-first; start with a big-picture view even when building a small prototype, across multiple scaling dimensions (data, compute, network, security).
- Getting into ML and data science: an early spark from MOOCs, TensorFlow experiments, and statistics; the transition into a data science role at Infoblox, a cybersecurity firm, with focus areas in DNS security, anomaly detection, and threat intelligence.
- Building real-world ML applications: supervised models for threat detection and storage forecasting; graph models to analyze DNS traffic patterns for anomalies; the key challenges of managing and processing massive volumes of security data.
- The data stack and what it takes to build data lakes that support ML, with emphasis on understanding the end-to-end AI pipeline.
- The shift from "under the hood" ML to front-and-center GenAI, and the barriers: data readiness, ROI, explainability, and regulatory compliance.
- Explainability in AI and the importance of interpreting model decisions, especially in regulated industries.
- How explainability works: trade-offs between interpretable models (e.g., decision trees) and complex ones (e.g., deep learning); techniques for local and global model understanding.
- Aruna's book, Interpretability and Explainability in AI Using Python (by Aruna C).
- The world of GenAI and transformers: explainability in LLMs and GenAI, from attention weights to neuron activation; the challenge of scale, as billions of parameters make models harder to interpret; exciting research areas such as concept tracing, gradient analysis, and neuron behavior.
- GenAI agents in action: the transition from task-specific GenAI to multi-step agents; agents as orchestrators of business workflows using tools plus reasoning; the real-world impact of agents and AI in everyday life.

Aruna Chakkirala is a seasoned leader with expertise in AI, data, and cloud. She is an AI Solutions Architect at Microsoft, where she was instrumental in the early adoption of Generative AI. In prior roles as a Data Scientist she built models in cybersecurity and holds a patent in community detection for DNS querying.
Through her two-decade career, she has developed expertise in scale, security, and strategy at organizations such as Infoblox, Yahoo, Nokia, EFI, and Verisign. Aruna has led highly successful teams and thrives on working with cutting-edge technologies. She is a frequent technical and keynote speaker, panelist, author, and an active blogger. She contributes to community open groups and serves as a guest faculty member at premier academic institutes. Her book, "Interpretability and Explainability in AI using Python," covers the taxonomy and techniques for model explanations in AI, including the latest research in LLMs. She believes that the success of real-world AI applications increasingly depends on well-defined architectures across all-encompassing domains. Her current interests include Generative AI, applications of LLMs and SLMs, causality, mechanistic interpretability, and explainability tools.

Her recently published book:
Interpretability and Explainability in AI Using Python: Decrypt AI Decision-Making Using Interpretability and Explainability with Python to Build Reliable Machine Learning Systems
https://amzn.in/d/00dSOwA

Outside of work, she is an avid reader and enjoys creative writing. A passionate advocate for diversity and inclusion, she is actively involved in GHCI and LeanIn communities.
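The trade-off Aruna describes between interpretable and complex models is often bridged with model-agnostic techniques; permutation importance is one of the simplest global ones. A pure-Python sketch on a toy rule-based "model" (the data, the model, and the feature names are all illustrative, not from her book):

```python
import random

def model(features):
    # Stand-in "black box": in practice this would be a trained classifier.
    return 1 if features["missed_payments"] < 2 and features["income"] > 30000 else 0

def permutation_importance(predict, rows, labels, feature, trials=20, seed=0):
    """Global explanation: shuffle one feature's column and measure the
    average accuracy drop. A large drop means the model leans on that feature."""
    rng = random.Random(seed)
    def accuracy(data):
        return sum(predict(r) == y for r, y in zip(data, labels)) / len(labels)
    base = accuracy(rows)
    drops = []
    for _ in range(trials):
        col = [r[feature] for r in rows]
        rng.shuffle(col)  # break the feature/label relationship
        drops.append(base - accuracy([{**r, feature: v} for r, v in zip(rows, col)]))
    return sum(drops) / trials

rows = [
    {"income": 50000, "missed_payments": 0, "zipcode": 1},
    {"income": 45000, "missed_payments": 5, "zipcode": 2},
    {"income": 60000, "missed_payments": 1, "zipcode": 3},
    {"income": 40000, "missed_payments": 4, "zipcode": 4},
]
labels = [model(r) for r in rows]

# The model never looks at zipcode, so its importance is exactly zero;
# missed_payments drives the prediction, so shuffling it hurts accuracy.
print(permutation_importance(model, rows, labels, "zipcode"))
print(permutation_importance(model, rows, labels, "missed_payments"))
```

This is a "global" technique in the sense used above: it summarizes the model's overall reliance on a feature, whereas local methods explain one prediction at a time.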
Join Jamal Khan and Jennifer Johnson as they explore the evolving landscape of AI in healthcare, focusing on its applications, ethical considerations, data privacy, and the role of Chief AI Officers. This discussion highlights the importance of governance, patient consent, and the potential of AI to improve healthcare workflows while addressing data security challenges. Learn about how to implement AI responsibly for better healthcare outcomes and operational excellence. Speakers: Jamal Khan, Chief Growth and Innovation Officer at Connection Jennifer Johnson, Director of Healthcare Strategy and Business Development at Connection Show Notes: 00:00 The Evolution of AI in Healthcare 03:04 Ethics and Governance in AI Applications 06:05 Data Privacy and Security Concerns 08:49 The Role of Chief AI Officers 12:07 Patient Consent and Data Usage 14:54 AI's Impact on Healthcare Workflows 18:00 Computational Power in Health Data Analysis 20:47 Virtual Assistants in Healthcare 24:00 Clinical Trials vs. Drug Discovery 26:55 The Future of Patient Data Management 28:11 AI Adoption in Insurance Companies 33:05 Transparency and Explainability in AI 37:28 AI Use Cases in Healthcare 44:10 Cloud vs On-Prem AI Solutions 49:23 Data Orchestration in Healthcare For more information on AI services for healthcare, visit https://www.cnxnhelix.com/healthcare.
I think AI has some good qualities, but does it belong in whiskey? Today we dive into the when, where, and why AI is rad or sad. Hope y'all enjoy.
Patreon.com/the_whiskeyshaman
Badmotivatorbarrels.com/shop/?aff=3
https://www.instagram.com/zsmithwhiskeyandmixology?utm_source=ig_web_button_share_sheet&igsh=ZDNlZDc0MzIxNw==

ChatGPT is a large language model developed by OpenAI. It's an AI chatbot that can understand and respond to natural language, making it useful for tasks like writing, translating, and generating text in various formats. It's built on a machine learning model called a transformer neural network and is trained on vast amounts of text data from the internet. Here's a more detailed breakdown:
Natural Language Processing (NLP): ChatGPT excels at processing and understanding human language, allowing it to engage in conversations and generate text that appears natural and coherent.
Generative AI: It's a type of generative AI, meaning it can create new content based on user prompts. This includes writing articles, poems, code, emails, and more.
Transformer Neural Network: It uses a specific type of neural network called a transformer, which is particularly well-suited for tasks involving natural language.
Vast Training Data: ChatGPT is trained on a massive amount of text data from the internet, allowing it to learn patterns and relationships in language.
Applications: Its uses are diverse, ranging from customer service and writing assistance to educational tools and content creation.

AI safety is a complex issue with both benefits and risks. While AI offers significant potential for advancements in various fields, it also presents dangers like bias, misuse, and potential existential threats if not carefully managed. Safeguards like responsible design, development, and deployment practices, along with ethical considerations, are crucial to mitigate these risks. Here's a more detailed look at the safety aspects of AI:
1. Potential Risks:
Bias: AI systems can inherit and amplify biases present in their training data, leading to unfair or discriminatory outcomes.
Misuse: AI could be used for malicious purposes, such as creating fake content, manipulating public opinion, or automating cyberattacks.
Existential Risks: Some experts fear that advanced AI could pose existential threats, potentially leading to uncontrollable systems that could harm humanity.
Lack of Transparency: Many AI systems, particularly deep learning models, can be difficult to understand, making it hard to identify and address potential problems.
Cybersecurity: AI-powered systems can be vulnerable to cyberattacks, and AI can also be used to launch more sophisticated attacks.
Environmental Impact: The development and use of AI infrastructure can have significant environmental consequences, particularly regarding energy consumption and data center emissions.
2. Mitigation Strategies and Ethical Considerations:
Responsible Design and Development: Implementing ethical guidelines and standards during the design and development of AI systems is crucial to minimize bias and ensure fairness.
Transparency and Explainability: Developing AI systems that are more transparent and explainable can help users understand how they make decisions and identify potential errors.
Human Oversight and Control: Maintaining human oversight and control over AI systems is essential to prevent unintended consequences and ensure accountability.
Data Ethics: Addressing the ethical implications of data used to train AI systems, including issues of privacy, fairness, and security, is crucial.
AI Safety Research: Investing in research focused on AI safety and security can help identify and address potential risks before they become widespread.
3. Examples of AI Safety Initiatives:
NIST AI Resource Center:
This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to know more. In this episode of Eye on AI, Craig Smith sits down with Barr Moses, Co-Founder & CEO of Monte Carlo, the pioneer of data and AI observability. Together, they explore the hidden force behind every great AI system: reliable, trustworthy data. With AI adoption soaring across industries, companies now face a critical question: Can we trust the data feeding our models? Barr unpacks why data quality is more important than ever, how observability helps detect and resolve data issues, and why clean data—not access to GPT or Claude—is the real competitive moat in AI today. What You'll Learn in This Episode: Why access to AI models is no longer a competitive advantage How Monte Carlo helps teams monitor complex data estates in real-time The dangers of “data hallucinations” and how to prevent them Real-world examples of data failures and their impact on AI outputs The difference between data observability and explainability Why legacy methods of data review no longer work in an AI-first world Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Intro (01:08) How Monte Carlo Fixed Broken Data (03:08) What Is Data & AI Observability? (05:00) Structured vs Unstructured Data Monitoring (08:48) How Monte Carlo Integrates Across Data Stacks (13:35) Why Clean Data Is the New Competitive Advantage (16:57) How Monte Carlo Uses AI Internally (19:20) 4 Failure Points: Data, Systems, Code, Models (23:08) Can Observability Detect Bias in Data? 
(26:15) Why Data Quality Needs a Modern Definition (29:22) Explosion of Data Tools & Monte Carlo's 50+ Integrations (33:18) Data Observability vs Explainability (36:18) Human Evaluation vs Automated Monitoring (39:23) What Monte Carlo Looks Like for Users (46:03) How Fast Can You Deploy Monte Carlo? (51:56) Why Manual Data Checks No Longer Work (53:26) The Future of AI Depends on Trustworthy Data
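The kind of monitoring Barr describes, catching data issues before they reach a model, can be pictured with a toy batch check that flags volume anomalies and null-rate violations. The thresholds and field names here are illustrative assumptions; real observability tools such as Monte Carlo learn baselines from history rather than hard-coding them:

```python
def check_batch(rows, expected_count, required_fields, max_null_rate=0.05):
    """Minimal data-observability sketch: return a list of alerts for a
    batch of records before it feeds an AI pipeline."""
    alerts = []
    # Volume check: a batch far off its expected size often means an
    # upstream job failed or double-loaded.
    if expected_count and abs(len(rows) - expected_count) / expected_count > 0.5:
        alerts.append(f"volume anomaly: got {len(rows)}, expected ~{expected_count}")
    # Null-rate check per required field: silent nulls are a classic
    # cause of the "data hallucinations" discussed in the episode.
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows) if rows else 1.0
        if rate > max_null_rate:
            alerts.append(f"null rate for '{field}' is {rate:.0%}")
    return alerts

batch = [{"user_id": 1, "amount": 9.99}, {"user_id": 2, "amount": None}]
print(check_batch(batch, expected_count=100, required_fields=["user_id", "amount"]))
```

Checks like these answer a different question than explainability does: not "why did the model decide this?" but "was the data it decided on trustworthy in the first place?"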
Kevin Werbach speaks with Medha Bankhwal and Michael Chui from QuantumBlack, the AI division of the global consulting firm McKinsey. They discuss how McKinsey's AI work has evolved from strategy consulting to hands-on implementation, with AI trust now embedded throughout their client engagements. Chui highlights what makes the current AI moment transformative, while Bankhwal shares insights from McKinsey's recent AI survey of over 760 organizations across 38 countries. As they explain, trust remains a major barrier to AI adoption, although there are geographic differences in AI governance maturity. Medha Bankhwal, a graduate of Wharton's MBA program, is an Associate Partner, as well as Co-founder of McKinsey's AI Trust / Responsible AI practice. Prior to McKinsey, Medha was at Google and subsequently co-founded a digital learning not-for-profit startup. She co-leads forums for AI safety discussions for policy + tech practitioners, titled "Trustworthy AI Futures," as well as a community of ex-Googlers dedicated to the topic of AI safety. Michael Chui is a senior fellow at QuantumBlack, AI by McKinsey. He leads research on the impact of disruptive technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as artificial intelligence, robotics and automation, the future of work, data & analytics, collaboration technologies, the Internet of Things, and biological technologies. Episode Transcript The State of AI: How Organizations are Rewiring to Capture Value (March 12, 2025) Superagency in the workplace: Empowering people to unlock AI's full potential (January 28, 2025) Building AI Trust: The Key Role of Explainability (November 26, 2024) McKinsey Responsible AI Principles
What happens when the hype around generative AI starts to mature, and businesses begin asking harder questions about performance, risk, and long-term value? In today's episode, I'm joined by Mike Mason, Chief AI Officer at Thoughtworks, to explore how 2025 is shaping up across the enterprise AI landscape—from the rise of intelligent agents to the growing traction of small, nimble models that prioritize security and specificity. Mike brings a deep, practical perspective on the evolution of AI inside complex organizations. He unpacks how AI agents are moving well beyond basic chatbots and starting to integrate into actual business workflows—performing as teammates that can reason, adapt, and even collaborate with other agents. We dig into examples like Klarna's workforce transformation and examine how this shift could play out across customer service, internal ops, and software development. We also look at what's fueling the boom in open source AI and how companies are navigating the balance between transparency, IP protection, and regulatory readiness. Mike shares why some financial services firms are turning to in-house fine-tuned models for greater control, and how open-weight and fully open-source models are starting to gain real ground. Another key theme is the momentum behind small language models. Mike explains why bigger isn't always better—especially when it comes to data privacy, edge deployment, and resource efficiency. He outlines where SLMs can outperform their larger counterparts and what that means for companies optimizing for security and speed rather than brute force compute. We also discuss Thoughtworks' forthcoming global survey, which reveals a growing divide in generative AI adoption. While mature players are building in bias detection and robust compliance frameworks, newer entrants are leaning toward fast operational gains and interpretability. 
This gap is shaping how GenAI projects are prioritized across industries and geographies, and Mike offers his take on how leaders can navigate both speed and safety. So, what role will explainability, regulation, and open ecosystems play in shaping the AI tools of tomorrow—and what should business and tech leaders be planning for now? Let's find out in this wide-ranging conversation with Thoughtworks.
Healthcare AI adoption is transforming the way we address risk, confidentiality, and patient care. In this episode, RJ Kedziora, co-founder of Estenda Solutions talks about the practical steps to safely integrate AI in clinical workflows. Learn how to manage data privacy, mitigate algorithmic bias, and keep a human in the loop to prevent misdiagnoses. Discover real-world strategies for using AI ethically, from ambient listening to second-opinion checks, and why it's irresponsible not to harness AI's potential. The discussion also highlights how AI can enhance the roles of healthcare professionals, ultimately improving patient outcomes.
Muscles hurt first before you build 'em.
This episode is sponsored by Andromeda Security. Learn more at https://www.andromedasecurity.com/idac Join Jeff and Jim on the Identity at the Center podcast as they chat with Ashish Shah, co-founder and Chief Product Officer of Andromeda Security. In this sponsored episode, Ashish dives deep into the importance of solving identity security problems, especially in cloud and SaaS environments. He explains how Andromeda's AI-powered platform focuses on both human and non-human identities, offering use case-driven solutions for security maturity. The discussion covers challenges, AI and machine learning applications, and practical insights into permissions management, risk scoring, just-in-time access, and more. Stay tuned for interesting takes on identity security and some fun recommendations for your reading/listening list. Chapters 00:00 Introduction to Identity as a Data Problem 00:41 Overview of Andromeda's Capabilities 01:27 Welcome to the Identity at the Center Podcast 02:03 Meet Ashish Shah, Co-Founder of Andromeda 02:37 The Genesis of Andromeda 03:33 Addressing Identity Security Challenges 05:29 Andromeda's Approach to Identity Security 09:44 Measuring Success with Andromeda 12:21 Andromeda's Market Position and Ideal Customers 18:35 The Rise of Non-Human Identities 28:42 Understanding Identity and Accounts in AWS 28:54 The Concept of Incarnations in Identity Management 29:42 Human and Non-Human Identities 32:13 Challenges in Authorization and Access Control 32:44 Implementing Zero Trust and Least Privilege 35:10 Role of AI and Machine Learning in Identity Management 36:21 Risk Scoring and Behavioral Analysis 39:04 Customer Data and Model Training 41:08 Explainability and Security of AI Models 46:14 Customer Influence on Model Tuning 49:03 Andromeda's Offer and Final Thoughts 51:34 Book Recommendations and Closing Remarks Connect with Ashish: https://www.linkedin.com/in/ashishbshah/ Learn more about Andromeda: https://www.andromedasecurity.com/idac Connect 
with us on LinkedIn: Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/ Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/ Visit the show on the web at idacpodcast.com and watch at https://www.youtube.com/@idacpodcast Keywords: Identity security, IAM, cybersecurity, artificial intelligence, AI, machine learning, ML, non-human identities, NHI, just-in-time access, JIT, IGA, privileged access management, PAM, identity threat detection and response, ITDR, cloud security, SaaS security, Andromeda Security, Ashish Shah, IDAC, Identity at the Center, Jim McDonald, Jeff Steadman
The Net Promoter System Podcast – Customer Experience Insights from Loyalty Leaders
Episode 242: Wells Fargo has established a clear position on artificial intelligence: If you can't explain how an AI model works, you shouldn't deploy it. This stance challenges the common assumption that black box algorithms are acceptable costs of advanced AI capabilities. In this episode, Kunal Madhok, Head of Data, Analytics, and AI for Wells Fargo's consumer business, reveals how the bank has operationalized this philosophy to enhance customer experiences while maintaining rigorous standards for model explainability and ethical deployment. The stakes for financial institutions are substantial. As banking becomes increasingly digitized, organizations must balance sophisticated personalization with transparency and trust. Wells Fargo's approach demonstrates that explainability isn't merely about regulatory compliance—it's a fundamental driver of business value and customer trust. Through rigorous review processes and a commitment to "plain English" explanations of algorithmic decisions, Wells Fargo ensures its models remain logical, aligned with business objectives, and comprehensible to stakeholders at all levels. This transparency serves multiple purposes: avoiding unintended consequences, maintaining human oversight of automated systems, and ensuring data-driven decisions actually drive business value. Discover how Wells Fargo's insistence on explainable AI is reshaping everything from product recommendations to customer service, while setting new standards for responsible innovation in financial services. Guest: Kunal Madhok, EVP, Head of Data, Analytics and AI, Wells Fargo Host: Rob Markey, Partner, Bain & Company Give Us Feedback: We'd love to hear from you. Help us enhance your podcast experience by providing feedback here in our listener survey: http://bit.ly/CCPodcastFeedback Want to get in touch? 
Send a note to host Rob Markey: https://www.robmarkey.com/contact-rob

Time-stamped List of Topics Covered:
[00:04:13] Integrating data science into business decisions and ensuring data-driven insights
[00:07:29] Kunal's vision for personalization and delivering relevant, value-based products
[00:09:22] Wells Fargo's ability to leverage life events and transactional data to better serve customers
[00:11:05] Democratizing financial advice and offering tailored advice based on customer needs
[00:16:53] Using live experimentation and AI models to tailor product offers and marketing
[00:19:17] Strategic investment decisions for new product launches and capacity reservations using simulations
[00:22:45] Explainability, and what this looks like in action
[00:37:22] Strategies around servicing interactions and the key challenges around this work that demand solving

Time-stamped Notable Quotes:
[00:00:27] "When a customer walks into a bank, they're expecting you to know them."
[00:04:19] "Part of my role is to make sure we use data science in every business decision we make as an organization. And what that means is not just the quality and the fidelity of data, but also that decisions are made not based on intuition, but on real data outcomes."
[00:07:29] "Good personalization is: We'll give you the right product based on your interests and your needs, and we'll deliver it in a way that you want. Which is the right channel, the right offers."
[00:12:17] "If we can add value to our customers, they expect it. I'm sure when you turn on [a streaming service] today, it gives you a whole bunch of movies, shows to watch, curated just for you, based on your past history. And if they do it well, you actually like that, because you know the next five things to watch. And while that's in entertainment—and financial products are a very different space—that's the bar our customers are expecting us to meet."
[00:22:45] "As we train our talent, we've put a high bar on explainability of the work they do."
In this episode of Future Finance, hosts Paul Barnhurst and Glenn Hopper discuss the intersection of artificial intelligence (AI) and finance. They explore how AI tools like large language models (LLMs) are transforming data analytics and decision-making processes. They also examine the broader implications of AI advancements in other high-stakes industries such as energy, defense, and healthcare. Jon Brewton is the founder and CEO of Data Squared. He brings extensive expertise in machine learning, AI solutions, and digital transformation. With a career spanning roles at BP, Chevron, and military service, Jon has spearheaded projects achieving significant operational efficiencies. At Data Squared, he focuses on creating reliable, traceable, and explainable AI solutions for critical sectors.

In this episode, you will learn:
Microsoft's advancements in LLMs for better integration with structured data.
The five levels of AI capabilities outlined by OpenAI and what they mean.
Why traceability and explainability are essential for deploying AI in finance.
Innovative applications of Knowledge Graphs and RAG (Retrieval-Augmented Generation) technology.
Strategies to mitigate AI hallucinations and enhance reliability in decision-making processes.

In this episode, Jon Brewton discusses the transformative role of AI in the financial sector, including advancements such as spreadsheet-specific LLMs, the power of knowledge graphs, and the critical importance of traceability and explainability in AI deployment.

Follow Jon:
LinkedIn: https://www.linkedin.com/in/jon-brewton-datasquared/
Website: https://www.data2.ai/

Join hosts Glenn and Paul as they unravel the complexities of AI in finance:
Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii
Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy
Follow QFlow.AI:
Website - https://bit.ly/4i1Ekjg

Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue.
Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai. Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.

In Today's Episode:
[01:50] - Advancements in Spreadsheet LLMs
[04:54] - OpenAI's Roadmap to AGI
[13:49] - Jon Brewton Introduction
[18:45] - Importance of Traceability and Explainability
[25:40] - Knowledge Graphs and Financial Data
[34:56] - Addressing AI Hallucinations
[42:01] - Advice for Finance Leaders
[45:48] - Jon's Unique Experiences
[48:53] - Closing Remarks
Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on. In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers current and potential regulations that mandate or incentivize explainability, and the prospects for AI explainability standards as AI models grow in complexity. Krishna distinguishes explainability from the broader process of observability, including the necessity of maintaining model accuracy through different times and contexts. Finally, Kevin and Krishna discuss the need for proactive AI model monitoring to mitigate business risks and engage stakeholders. Krishna Gade is the founder and CEO of Fiddler AI, an AI Observability startup, which focuses on monitoring, explainability, fairness, and governance for predictive and generative models. An entrepreneur and engineering leader with strong technical experience in creating scalable platforms and delightful products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. At Facebook, Krishna led the News Feed Ranking Platform that created the infrastructure for ranking content in News Feed and powered use-cases like Facebook Stories and user recommendations. Fiddler.Ai How Explainable AI Keeps Decision-Making Algorithms Understandable, Efficient, and Trustworthy - Krishna Gade x Intelligent Automation Radio
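As a generic illustration of what post-hoc explainability tooling of this kind computes, here is a minimal sketch using scikit-learn's permutation importance (a model-agnostic technique): it trains a model, then probes it from the outside to rank which features drive its predictions. This is not Fiddler's actual method or API, and the synthetic dataset and feature indices are placeholders for illustration only.

```python
# Minimal post-hoc explanation sketch: the model is treated as a black box
# that is already trained; we then ask which inputs most affect its output.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder synthetic data standing in for a real tabular dataset.
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Post-hoc step: shuffle each feature and measure the drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranked = sorted(enumerate(result.importances_mean), key=lambda t: -t[1])
for idx, score in ranked:
    print(f"feature_{idx}: importance {score:.3f}")
```

Production observability platforms layer monitoring, drift detection, and richer attribution methods (such as Shapley values) on top of this basic idea.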
In this episode, we sit down with Vinay Kumar, the founder and CEO of Arya.ai, a leading AI platform designed to make artificial intelligence accessible, explainable, and safe for enterprises—particularly in the banking and financial services industries. Join us as Vinay shares his journey from a small town in Andhra Pradesh, India, to the cutting-edge world of AI, starting with his formative years at IIT Bombay and progressing to his current work with Arya.ai. Vinay dives into Arya.ai's mission: to democratize complex AI while ensuring it's auditable, transparent, and aligned with user goals. His journey began with the development of a STEM research assistant, InvenZone.com, and evolved into Arya.ai, an AI platform for enterprises that deploys deep learning solutions quickly and responsibly. Vinay discusses Arya.ai's role in creating AI systems that adapt to the unique needs of the financial sector, prioritizing safety and explainability to help organizations build trust with AI technologies.
In this episode, John Kaplan and John McMahon are joined by Devavrat Shah, CEO and co-founder of Ikigai Labs and MIT professor, to demystify the rapidly evolving landscape of artificial intelligence. The conversation spans a wide array of crucial AI topics including the history and applications of AI, causal inference, explainability, and the integration of AI into sales and forecasting processes. Key highlights include the role of AI in consumption pricing, business model transformations, and job market impacts. Shah underscores the importance of governance, ethical use, and education in AI, offering valuable insights into AI tools from Ikigai Labs and their practical implementations in sectors like healthcare, supply chain, and BFSI. The discussion concludes with a focus on the explosive growth of AI, urging businesses to invest in internal education and to approach AI adoption with a 'proof of value' mindset for sustained success and global upskilling.

ADDITIONAL RESOURCES
Connect and learn more about Devavrat Shah: https://www.linkedin.com/in/devavrat-shah-63b59a2/
Learn more about AI through Ikigai Academy: https://www.ikigailabs.io/ai-academy
Check out Force Management's guide on implementing AI for B2B Sales teams: https://hubs.li/Q02TG4tZ0

Enjoying the podcast?
Sign up to receive new episodes straight to your inbox: https://hubs.li/Q02R10xN0

HERE ARE SOME KEY SECTIONS TO CHECK OUT
[00:03:02] History and Evolution of AI
[00:06:21] Understanding AI Terminology
[00:18:37] The Role of Explainability in AI
[00:26:45] AI in Consumption Pricing and Forecasting
[00:33:33] Future Possibilities and Implications of AI
[00:35:58] AI's Role in Healthcare and Decision Making
[00:37:08] Human-Machine Interaction and AI
[00:38:29] Embracing AI Tools in Daily Life
[00:40:33] Challenges and Governance in AI
[00:42:44] The Importance of AI Governance
[00:49:10] Introduction to IKIGAI Labs
[00:54:13] AI's Impact on Industries and Consumers
[01:01:18] The AI Revolution: Why Now?

HIGHLIGHT QUOTES
[00:03:15] "AI, statistics, machine learning, data science, for me, all of those terms have intimate relationships." – Devavrat Shah
[00:04:32] "Humans primarily do two things really well: mind and muscle." – Devavrat Shah
[01:00:27] "Don't just rush into AI because it's cool. Carefully choose where you go." – Devavrat Shah
[01:00:51] "Have internal champions who should be educated in terms of how to use AI." – Devavrat Shah
[01:04:13] "It's time to just upskill a little around AI so that we are not left behind." – Devavrat Shah
Send us a text

Root cause analysis, model explanations, causal discovery. Are we facing a missing benchmark problem? Or not anymore? In this special episode, we travel to Los Angeles to talk with researchers at the forefront of causal research, exploring their projects, key insights, and the challenges they face in their work.

Time codes:
0:15 - 02:40 Kevin Debeire
2:41 - 06:37 Yuchen Zhu
06:37 - 10:09 Konstantin Göbler
10:09 - 17:05 Urja Pawar
17:05 - 23:16 William Orchard

Enjoy!

Support the show
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
(00:00) Intro: Negative connotations in AI
(00:21) Synthetic data fills gaps
(00:35) Guest introduction
(01:23) Importance of data quality
(02:14) Data-centric machine learning focus
(03:02) Bias mitigation strategies
(03:41) Role of human in AI loop
(04:34) Synthetic data in AI
(05:29) Pre-trained models and data quality
(06:02) Experiments with data quality
(06:39) Leading AI and research projects
(07:24) Explainability in AI models
(08:57) Privacy concerns in AI analysis
(10:34) Open source model benchmarking
(11:33) Motivation for open source contributions
(12:28) Long-term open source involvement
(13:50) Mentoring in open source projects
(15:19) Starting with open source
(16:35) Contributing beyond code
(17:50) Building community through collaboration
(18:48) Power of open source accessibility
(19:52) Open source challenges
(20:38) Success factors for open source projects
(22:58) Career-defining moments
(24:49) First encounter with open source
(26:28) Introduction to AI through NLP
(28:02) Pivoting from PhD to industry
(29:02) Career lessons and continuous learning
(30:13) Advice for women in tech

--- Support this podcast: https://podcasters.spotify.com/pod/show/women-in-data/support
[Link to mp3 file]

Episode 476 of Reversim ("Reverse with a Platform"), recorded on July 25, 2024 (two days after the previous recording). In the (unofficial) ML week, Ori and Ran host Dagan from Citrusx for a conversation about organizations for which ML matters.

00:45 Dagan and Citrusx and "reverse with a platform" (really)
(Ran) So before we dive into business - a bit about you and the company?
(Dagan) Nice to meet you, I'm Dagan. Originally a kibbutznik from the Gaza envelope, with communal sleeping and all that...
(Ori) ...You're in good company... not from the Gaza envelope, but us too, both of us.
(Dagan) ...So my first trade was dairy farming, and after that I worked with...
(Ori) No, but let's ask the question - when did you first do a "reverse with a platform"?
(Dagan) Well, I was more in the cowshed, so less of that. More the feed-distribution tractor, and less reversing with the trailer.
(Ori) ...Muck up to the knees - go on...
(Dagan) That's right...
(Ran) Right, so you're in good company... OK, so you grew up there, and then?
(Dagan) So I grew up there, and then in the army I got to 8200, a very technological unit. And there I entered this world of software and algorithmics and all the "scientific" things. And from there to university, where I studied computer science and neuroscience… Read more
In this episode, Dean speaks with Federico Bacci, a data scientist and ML engineer at Bol, the largest e-commerce company in the Netherlands and Belgium. Federico shares valuable insights into the intricacies of deploying machine learning models in production, particularly for forecasting problems. He discusses the challenges of model explainability, the importance of feature engineering over model complexity, and the critical role of stakeholder feedback in improving ML systems. Federico also offers a compelling perspective on why LLMs aren't always the answer in AI applications, emphasizing the need for tailored solutions. This conversation provides a wealth of practical knowledge for data scientists and ML engineers looking to enhance their understanding of real-world ML operations and challenges in e-commerce. Join our Discord community: https://discord.gg/tEYvqxwhah --- Timestamps: 00:00 Introduction and Background 01:59 Owning the ML Pipeline 02:56 Deployment Process 05:58 Testing and Feedback 07:40 Different Deployment Strategies 11:19 Explainability and Feature Importance 13:46 Challenges in Forecasting 22:33 ML Stack and Tools 26:47 Orchestrating Data Pipelines with Airflow 31:27 Exciting Developments in ML 35:58 Recommendations and Closing Links Dwarkesh podcast with Anthropic and Gemini team members – https://www.dwarkeshpatel.com/p/sholto-douglas-trenton-bricken ➡️ Federico Bacci on LinkedIn – https://www.linkedin.com/in/federico-bacci/ ➡️ Federico Bacci on Twitter – https://x.com/fedebyes
HOW TO ETHICALLY IMPLEMENT AI INTO TALENT ACQUISITION

One of the factors slowing down AI adoption is justifiable concern about the ethics of AI. How do we ensure that we are not exacerbating bias, privileging one demographic over another, and treating human beings as lesser than the machines? It's been great to see software vendors take the lead in trying to find a path forward. A superb how-to guide from our friends Willo was featured in Recruiting Brainfood Issue 399 (free to download, no gate here), and it covered the practical steps required to move from idea to practice. We all have to get there, so we are using today's Brainfood Live as an opportunity to walk through the guide:

- How AI impacts candidate assessment
- How to build trust with employees
- New TA infrastructure for the AI-enabled candidate
- Key areas of recruitment optimisation with AI
- How AI can benefit candidate experience
- How to ensure AI reduces bias rather than exacerbates it
- AI as an EB co-pilot
- Key AI policies: what is the regulatory environment?
- Privacy, referencability, explainability, humanity
- How to get started: audit / analytical framework
- Research and communication strategy
- Launch and implementation

It's a fantastic guide, and we're roping in Euan Cameron and Andrew Wood (co-founders, Willo) to walk us through it. Friday 19th July, 2pm BST.

Ep264 is sponsored by our friends Willo. Willo is the virtual interviewing platform trusted by thousands of recruiters worldwide. Receive video responses to your questions remotely, from anyone, anywhere in the world. Thousands of organisations already use Willo to hear from more people, in less time, and never have to worry about scheduling calls or meetings again. Join them - it is free to get started and we have no setup fees or contracts. Plus, our incredible UK-based support team is available 24/7 to help you transform your interviewing process. Schedule a demo with one of our friendly team members today.
Timestamps
00:00:00 - Intro
00:02:00 - Beth's Journey
00:19:33 - Ontologies in AI
00:21:44 - Data Lineage and Provenance
00:32:52 - Open Source Tools
00:38:38 - Explainable AI
00:44:58 - Inspiration from Nature

Quotes
Beth Rudden: "The best thing that I could tell you that I see is that it's going to shift from more pure mathematical and statistical to much more semantic, more qualitative. Instead of quantity, we're going to have quality."
Charna Parkey: "I love that because I've been so mathematical for most of my life. I didn't have a lot of words for the feelings or expressions, right? And so I had sort of this lack of data and the Brené Brown reference you make, like I have many of her books on my shelf and I often pull, I don't even know where it is right now, but the Atlas of the Heart because I am having this feeling and I don't know what it is."

Links
Connect with Beth
Connect with Charna
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence. I'm so excited to welcome this expert from the field of UX and design to today's episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems. In our chat, we covered: Ben's career studying human-computer interaction and computer science. (0:30) 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy' AI systems. (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56) 'There's no such thing as an autonomous device': Designing human control into AI systems. (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08) Designing ‘comprehensible, predictable, and controllable' user interfaces for explainable AI systems and why [explainable] XAI matters. (30:34) Ben's upcoming book on human-centered AI. 
(35:55) Resources and Links: People-Centered Internet: https://peoplecentered.net/ Designing the User Interface (one of Ben's earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764 Partnership on AI: https://www.partnershiponai.org/ AI incident database: https://www.partnershiponai.org/aiincidentdatabase/ University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/ ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/ Ben on Twitter: https://twitter.com/benbendc Quotes from Today's Episode The world of AI has certainly grown and blossomed — it's the hot topic everywhere you go. It's the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they're not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that's where the action is. Of course, what we really want from AI is to make our world a better place, and that's a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. 
We want to support individual goals, a person's sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that's where we want to go. - Ben (2:05) The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it's not just programming, but it also involves the use of data that's used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let's say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There's been bias in facial recognition algorithms, which were less accurate with people of color. That's led to some real problems in the real world. And that's where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10) Every company will tell you, “We do a really good job in checking out our AI systems.” That's great. We want every company to do a really good job. But we also want independent oversight of somebody who's outside the company — someone who knows the field, who's looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. 
You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what the dangers are, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that's where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04) There's no such thing as an autonomous device. Someone owns it; somebody's responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it's performing poorly. … Responsibility is a pretty key factor here. So, if there's something going on, if a manager is deciding to use some AI system, what they need is a control panel that lets them know: what's happening? What's it doing? What's going wrong and what's going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that's hidden away and you never see it because that's just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what's going on and make sure it gets better. Every quarter. - Ben (19:41) Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations.
They have UX, ML-UX people, UX for AI people, they're at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they're doing. But even these largest companies that have, probably, the biggest penetration into the largest number of people out there are getting some of this really important stuff wrong. - Brian (26:36) Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what's usually called post-hoc explanations, and Shapley values, LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I'm afraid I haven't seen too many success stories of that working. … I've been diving through this for years now, and I've been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. Even DARPA's XAI (Explainable AI) program, which has 11 projects within it, has not really grappled in a good way with designing what those explanations will look like. Show it to me. … There is another way. And the strategy is basically prevention. Let's prevent the user from getting confused so they don't have to request an explanation. We walk the user through the steps (like Amazon's seven-step checkout process), and you know what's happened in each step, you can go back, you can explore, you can change things in each part of it.
It's also what TurboTax does so well in really complicated situations: it walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
AI is revolutionizing healthcare by analyzing massive datasets to uncover hidden patterns, leading to breakthroughs in disease diagnosis, treatment, and patient care. Join Jennifer Johnson and Jamal Khan as they explore AI's impact on healthcare. They delve into critical ethical considerations, governance structures, data security measures, and AI's role in clinical decision support. Speakers: Jennifer Johnson, Director of Healthcare Strategy and Business Development at Connection; Jamal Khan, Chief Growth and Innovation Officer at Connection. Show Notes 00:00 Introduction and AI Ecosystem Shifts 02:07 Ethical Considerations and Governance in AI Healthcare 05:49 Challenges of Data Poisoning and Model Drift in AI Healthcare 08:02 Role of CAIOs in Healthcare Governance and Data Strategy 10:48 Importance of Patient Consent and Cross-Jurisdictional Challenges 13:01 AI's Impact on Healthcare Provider Work Environment 17:45 Vetting AI Partners and Virtual Assistants in Healthcare 19:39 Patient Accessibility and Engagement in Healthcare 22:50 Clinical Trials and Technology in Healthcare 24:13 Challenges of Merging Patient Data in Healthcare 27:01 AI Adoption in Healthcare: Impact on Insurance Providers 32:08 Challenges of Transparency and Explainability in AI 35:58 AI in Clinical Settings: Promising Use Cases 37:18 Choosing Hyperscalers for Healthcare AI Implementation 48:01 Data Orchestration for Patient Care with AI 50:17 Following Patients Through Care Settings with AI 52:08 Excitement and Challenges of AI Integration in Healthcare
Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.
Show notes:
Prologue: Why responsible AI? Why now? (00:00:00) Deviating from our normal topics about modeling best practices; context about where regulation plays a role in industries besides big tech; can we learn from other industries about the role of "responsibility" in products?
Special guest, Anthony Habayeb (00:02:59) Introductions and start of the discussion. Of all the companies you could build around AI, why governance?
Is responsible AI the right phrase? (00:11:20) Should we even call good modeling and business practices "responsible AI"? Is having responsible AI a “want to have” or a “need to have”?
Importance of AI regulation and responsibility (00:14:49) People in the AI and regulation worlds have started pushing back on responsible AI. Do regulations impede freedom? Discussing the big picture of responsibility and governance: explainability, repeatability, records, and audit.
What about bias and fairness? (00:22:40) You can have fair models that operate with bias. Bias in practice identifies inequities that models have learned; fairness is correcting for societal biases to level the playing field so that safer business and modeling practices prevail.
Responsible deployment and business management (00:35:10) What organizations get right about responsible AI, and what they can get completely wrong if they aren't careful.
Embracing responsible AI practices (00:41:15) Getting your teams, companies, and individuals involved in the movement towards building AI responsibly.
What did you think?
Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: LinkedIn - episode summaries, shares of cited articles, and more. YouTube - was it something that we said? Good. Share your favorite quotes. Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
From Hollywood to Hip Hop, artists are negotiating new boundaries of consent for use of AI in the creative industries. Bridget Todd speaks to artists who are pushing the boundaries. It's not the first time artists have been squeezed, but generative AI presents new dilemmas. In this episode: a member of the AI working group of the Hollywood writers union; a singer who licenses the use of her voice to others; an emcee and professor of Black music; and an AI music company charting a different path. Van Robichaux is a comedy writer in Los Angeles who helped craft the Writers Guild of America's proposals on managing AI in the entertainment industry. Holly Herndon is a Berlin-based artist and a computer scientist who has developed “Holly +”, a series of deep fake music tools for making music with Holly's voice. Enongo Lumumba-Kasongo creates video games and studies the intersection between AI and Hip Hop at Brown University. Her alias as a rapper is Sammus. Rory Kenny is co-founder and CEO of Loudly, an AI music generator platform that employs musicians to train their AI instead of scraping music from the internet. *Thank you to Sammus for sharing her track ‘1080p.' Visit Sammus' Bandcamp page to hear the full track and check out more of her songs.*
Why does it so often feel like we're part of a mass AI experiment? What is the responsible way to test new technologies? Bridget Todd explores what it means to live with unproven AI systems that impact millions of people as they roll out across public life. In this episode: a visit to San Francisco, a major hub for automated vehicle testing; an exposé of a flawed welfare fraud prediction algorithm in a Dutch city; a look at how companies comply with regulations in practice; and how to inspire alternative values for tomorrow's AI. Julia Friedlander is senior manager for automated driving policy at San Francisco Municipal Transportation Agency who wants to see AVs regulated based on safety performance data. Justin-Casimir Braun is a data journalist at Lighthouse Reports who is investigating suspect algorithms for predicting welfare fraud across Europe. Navrina Singh is the founder and CEO of Credo AI, a platform that guides enterprises on how to ‘govern' their AI responsibly in practice. Suresh Venkatasubramanian is the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University and he brings joy to computer science. IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd shares stories about prioritizing people over profit in the context of AI.
They're the essential workers of AI — yet mostly invisible and exploited. Does it have to be this way? Bridget Todd talks to data workers and entrepreneurs pushing for change. Millions of people work on data used to train AI behind the scenes. Often, they are underpaid and even traumatized by what they see. In this episode: a company charting a different path; a litigator holding big tech accountable; and data workers organizing for better conditions. Thank you to Foxglove and Superrr for sharing recordings from the Content Moderators Summit in Nairobi, Kenya in May 2023. Richard Mathenge helped establish a union for content moderators after surviving a traumatic experience as a contractor in Kenya training OpenAI's ChatGPT. Mercy Mutemi is a litigator for digital rights in Kenya who has issued challenges to some of the biggest global tech companies on behalf of hundreds of data workers. Krista Pawloski is a full-time data worker on Amazon's Mechanical Turk platform and is an organizer with the worker-led advocacy group, Turkopticon. Safiya Husain is the co-founder of Karya, a company in India with an alternative business model to compensate data workers at rates that reflect the high value of the data. IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.
Are today's large language models too hot to handle? Bridget Todd digs into the risks and rewards of open sourcing the tech that makes ChatGPT talk. In their competitive rush to release powerful LLMs to the world, tech companies are fueling a controversy about what should and shouldn't be open in generative AI. In this episode, we meet open source research communities who have stepped up to develop more responsible machine learning alternatives. David Evan Harris worked at Meta to make AI more responsible and now shares his concerns about the risks of open large language models for disinformation and more. Abeba Birhane is a Mozilla advisor and cognitive scientist who calls for openness to facilitate independent audits of large datasets sourced from the internet. Sasha Luccioni is a researcher and climate lead at Hugging Face who says open source communities are key to developing ethical and sustainable machine learning. Andriy Mulyar is co-founder and CTO of Nomic, the startup behind the open source chatbot GPT4All, an offline and private alternative to ChatGPT. IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.
This season, IRL host Bridget Todd meets people who are balancing the upsides of artificial intelligence with the downsides that are coming into view worldwide. Stay tuned for the first of five biweekly episodes on October 10! IRL is an original podcast from the non-profit Mozilla.