The promise of agentic AI has been massive: autonomous systems that act, reason, and make business decisions. Yet most enterprises are still struggling to see results. In this episode, host Chris Brandt sits down with Sumeet Arora, Chief Product Officer at Teradata, to unpack why the gap exists between AI hype and actual impact, and what it takes to make AI scalable, explainable, and ROI-driven. From the shift toward "AI with ROI" to the new era of human + AI systems and data quality challenges, Sumeet shares how leading enterprises are moving from flashy demos to measurable value and trust in the next phase of AI.
CHAPTER MARKERS
00:00 The AI Hackathon Era
03:10 Hype vs Reality in Agentic AI
06:05 Redesigning the Human AI Interface
09:15 From Demos to Real Economic Outcomes
12:20 Why Scaling AI Still Fails
15:05 The Importance of AI Ready Knowledge
18:10 Data Quality and the Biggest Bottleneck
20:46 Building the Customer 360 Knowledge Layer
23:35 Push vs Pull Systems in Modern AI
26:15 Rethinking Enterprise Workflows
29:20 AI Agents and Outcome Driven Design
32:45 Where Agentic AI Works Today
36:10 What Enterprises Still Get Wrong
39:30 How AI Changes Engineering Priorities
55:49 The Future of GPUs and Efficiency Challenges
--
This episode of IT Visionaries is brought to you by Meter, the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything: wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all, so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.
---
IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org.
Systems should make life easier, not more complicated. That idea runs through our conversation with technology strategist “VPN,” whose journey from SAP in India to the UN in Geneva to advising global institutions shaped a simple practice: start with the problem, then use data and AI to serve people with clarity and care. We dig into what most teams get wrong about data: confusing volume with insight and falling into confirmation bias. Instead of chasing clever dashboards, we map a workflow where hypotheses are tested, methods are transparent, and systems explain themselves in plain language. The result is trust. And trust is what unlocks adoption, the critical moment when data actually changes a decision. From HR policy Q&A to legal discovery, we show how AI can strip away repetitive labor so humans focus on context, tradeoffs, and fairness. Designing for the public means building for real settings: clinics with noise, fields with poor connectivity, and city services that must be accessible, secure, and easy to use. We explore digital twins, predictive maintenance, and crowdsourced reporting, and why each only works when the loop closes and action is visible. Along the way, we share a framework for people-first AI strategy: educate users, co-design with business owners, choose use cases where automation is safe and useful, and require explainability where stakes are high. The through line is constant: human judgment at the end of the loop, with AI as the force multiplier. If you care about ethical AI, public sector innovation, and data that leads to better outcomes, not just faster reports, you'll find practical steps you can apply today. Subscribe, share with a colleague who wrangles dashboards for a living, and leave a review with one question you want AI to help your community answer next. Check out "Protection for the Inventive Mind", available now on Amazon in print and Kindle formats.
The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.
Artificial intelligence often struggles with the ambiguity, nuance, and shifting context that define human reasoning. Fuzzy logic offers an alternative by modelling meaning in degrees rather than absolutes. In this roundtable episode, ResearchPod speaks with Professors Edy Portmann, Irina Perfilieva, Vilem Novak, Cristina Puente, and José María Alonso about how fuzzy systems capture perception, language, social cues, and uncertainty. Their insights contribute to the upcoming FMsquare Foundation booklet on fuzzy logic, exploring the role of uncertainty-aware reasoning in the future of AI. You can read the previous booklet from this series here: Fuzzy Design-Science Research. You can listen to previous fuzzy podcasts here: fmsquare.org
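As a toy illustration of the "degrees rather than absolutes" idea (the predicate name and thresholds below are my own, not from the episode), a fuzzy membership function maps an input to a degree of truth between 0 and 1 instead of forcing a yes/no answer:

```python
def warm(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm' (0 = not at all, 1 = fully)."""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    # Linear ramp between the two thresholds: partial truth in between.
    return (temp_c - 15) / 10

# Classical logic would force a binary answer; fuzzy logic keeps the in-between.
print(warm(10))  # 0.0 -> not warm at all
print(warm(20))  # 0.5 -> somewhat warm
print(warm(30))  # 1.0 -> fully warm
```

Fuzzy systems build on such membership functions with graded versions of AND, OR, and NOT, which is what lets them model vague language like "fairly warm" or "very tall."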
Deep Dives with Iman, hosted by Iman Mossavat and featured on Radio 4 Brainport, welcomes Associate Professor Jelle Zuidema from the University of Amsterdam's Institute for Logic, Language, and Computation (ILLC). Zuidema analyzes large language models (LLMs) using explainable AI, including striking examples of how model knowledge can be surgically edited. He briefly discusses how researchers understand LLMs, fix biases, and analyze how they reason. Listeners gain insight into how these models represent knowledge, interact with data, and how analysis can clarify complex AI systems. This episode gives you an excellent view into the inner workings of modern language models.
A lecture by Prof. Przemysław Biecek, given as part of the Science Festival in Warsaw [27 September 2025]. Can we understand how super-complex AI models work? In this talk we present the reasons why we need to explain them, along with the current state of knowledge on the techniques that make such explanations possible. The lecture is aimed at anyone interested in machine learning and artificial intelligence models. I will present new results on techniques for exploring and explaining complex models, including example solutions for applications in medicine and the space industry. A graphic summary of the lecture is available as a comic: https://bit.ly/3GXU4pw
Prof. Przemysław Biecek is a specialist in mathematical statistics and engineering sciences, long affiliated with the Warsaw University of Technology and the University of Warsaw. He completed his studies and doctorate at the Wrocław University of Science and Technology, where in 2007 he defended a dissertation on testing sets of hypotheses with hierarchical relations and their applications in genetics. In 2013 he obtained his habilitation at the Institute of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences, and in 2023 he received the title of professor of engineering and technical sciences. His research focuses on statistical methods and machine learning, with particular emphasis on explainable AI. He is interested in the interpretation of complex models, data visualization, and applications of artificial intelligence in medicine and biology. He is the author and co-author of many scientific publications and open-source tools, including the popular DALEX package, which supports the interpretation of machine learning models. Professor Biecek leads the MI².AI research group at the University of Warsaw and the Warsaw University of Technology.
He is also involved in projects on safety testing of generative AI models, develops explanation methods for different types of data, and is an active advocate of open science and data-driven education. Beyond his research, he served as vice-dean for development of the Faculty of Mathematics and Information Science at the Warsaw University of Technology in the 2020-2024 term, and he sits on university committees on research funding and ethics. He is a well-known popularizer of statistics and artificial intelligence, inspiring new generations of researchers and data practitioners.
If you would like to support Wszechnica in creating more content and organizing further #rozmowyWszechnicy, you can:
1. Become a Patron of Wszechnica FWW at https://patronite.pl/wszechnicafww Through Patronite you can support the #rozmowyWszechnicy series not only with kind words but also financially: as a Patron you make a regular monthly contribution to Wszechnica's account, and thanks to your support we can keep developing our work. As a thank-you, we have small rewards for you.
2. Support us by shopping through Fanimani.pl - https://tiny.pl/wkwpk If you shop online, you can support us at no cost to yourself: once you start using FaniMani.pl, on average 2.5% of each purchase goes to Wszechnica, and you pay nothing extra!
3. Make a donation for our statutory purposes by traditional bank transfer. Donations to the Fundacja Wspomagania Wsi can be made to account no. 33 1600 1462 1808 7033 4000 0001, Fundacja Wspomagania Wsi.
Find us: https://www.youtube.com/c/WszechnicaFWW/ https://www.facebook.com/WszechnicaFWW1/ https://anchor.fm/wszechnicaorgpl---historia https://anchor.fm/wszechnica-fww-nauka https://wszechnica.org.pl/
#ai #technologia #nauka #festiwalnauki #sztucznainteligencja #uczeniemaszynowe
In this episode of Most People Don't… But YOU Do!, Bart sits down with Dr. Fares Khalid Alaboud, Regional Director of Al Fares International Travel & Tourism. With a PhD in Artificial Intelligence and 15 years of experience in innovation and product leadership, Dr. Fares bridges travel, technology, and business strategy. From gratitude stones to groundbreaking AI applications in hospitality, this conversation blends personal storytelling with future-facing lessons on service, risk-taking, and curiosity.
Can your dentist use artificial intelligence (AI) to spot health problems sooner? Imagine an extra set of eyes that never gets tired; that's what AI is bringing to dentistry. In this episode, Ahmed Sultan, BDS, PhD, director of the Division of Artificial Intelligence Research at the University of Maryland School of Dentistry, shares how new AI tools are helping dentists catch issues like cavities and oral cancer earlier. He also talks about why it matters to use diverse data, the ethical questions behind AI in health care, and how these advances could especially benefit people in rural and low-income communities. Tune in to discover how AI is shaping the future of dental visits, and maybe even protecting more than just your smile. Learn more about AI research at the University of Maryland School of Dentistry at https://www.dental.umaryland.edu/ai/ Listen to The UMB Pulse on Apple, Spotify, Amazon Music, and wherever you like to listen. The UMB Pulse is also now on YouTube. Visit our website at umaryland.edu/pulse or email us at umbpulse@umaryland.edu.
In this episode, we explore how Explainable AI (XAI) is revolutionizing drug development, rare disease research, and precision medicine. Our guest, Frédéric Parmentier, Vice President of Data Science at Ariana Pharma, shares how their Explainable Artificial Intelligence (XAI) platform, KEM (Knowledge Extraction and Management), helps uncover critical biomarker signatures in small-cohort clinical trials. Unlike traditional black-box AI models, XAI delivers transparent, interpretable results that regulators and clinicians can trust, making it a game-changer for early-phase drug development, rare disease trials, and personalized medicine.
We dive into:
- Why Explainable AI in clinical trials is essential for regulatory acceptance and scientific validation.
- How XAI can identify biomarkers of best responders in cohorts as small as 20–100 patients.
- Real-world examples where AI revealed insights that traditional statistics missed.
- The future of AI in drug development, precision medicine, and rare disease research.
- The difference between black-box AI and Explainable AI in healthcare, and why transparency matters.
If you work in biopharma, clinical research, AI-driven healthcare, or drug discovery, this episode will give you powerful insights into how XAI can accelerate development, reduce trial failures, and enable personalized treatment strategies.
About the Podcast
AI for Pharma Growth is a podcast focused on exploring how artificial intelligence can revolutionise healthcare by addressing disparities and creating equitable systems. Join us as we unpack groundbreaking technologies, real-world applications, and expert insights to inspire a healthier, more equitable future. This show brings together leading experts and changemakers to demystify AI and show how it's being used to transform healthcare.
Whether you're in the medical field, the technology sector, or just curious about AI's role in social good, this podcast offers valuable insights. AI For Pharma Growth is the podcast from pioneering pharma artificial intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how AI-based technologies can easily save them time and grow their brands and business. The show blends deep experience in the sector with demystifying AI for all pharma people, from start-up biotechs right through to Big Pharma. In this podcast, Dr. Andree will teach you the tried-and-true secrets to building a pharma company using AI that anyone can use, on any budget. As the author of many peer-reviewed journal articles, and having addressed over 500 industry conferences across the globe, Dr. Andree Bates draws on her obsession with all things AI and future tech to help you navigate the sometimes confusing but magical world of AI-powered tools for growing pharma businesses. The podcast features many experts who have developed powerful AI-powered tools that are the secret behind some time-saving and supercharged revenue-generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights, and much more. Dr. Andree Bates LinkedIn | Facebook | Twitter
In this episode of the Earley AI Podcast, host Seth Earley welcomes Mark Anderson, co-founder and CEO of Pattern Computer, for a fascinating exploration of what lies beyond the current AI mainstream. With a career grounded in technology, strategy, and scientific innovation, including receiving the Alexandra J. Nobel Award for his contributions to computing and medicine, Mark brings nearly a decade of experience developing proprietary pattern recognition technologies that move far beyond traditional machine learning models. Together, Seth and Mark dive deep into the journey of Pattern Computer, unveiling its revolutionary Pattern Discovery Engine, a platform with the unique ability to make discoveries in data that have eluded conventional approaches. Mark explains how his passion for science and the shortcomings of the classic scientific method sparked the creation of new mathematical and architectural foundations in AI, leading to major breakthroughs not only in medicine but also across enterprise applications.
Key Takeaways:
- Origins of Pattern Computer: The story behind the formation of Pattern Computer and its foundational mission to turn pattern discovery from an art into a science.
- A New Approach to AI: How the Pattern Discovery Engine goes beyond finding incremental improvements, enabling true discovery by flipping the traditional scientific method.
- Breakthroughs in Medicine: The real-world impact of Pattern Computer's approach, including the discovery of gene patterns and the development of new drugs for triple-negative breast cancer.
- Pattern Discovery vs. Large Language Models: The critical differences between pattern discovery engines and LLMs, and how these technologies can work together to combine human-friendly communication with genuine scientific discovery.
- Explainable AI and Ethics: Why true explainability, interpretability, and ethical data are at the heart of next-generation AI, and how Pattern Computer is leading the way with interpretable outputs and transparency.
- Enterprise & Science Applications: Use cases in aerospace, mining, healthcare, and more, where Pattern Computer's approach has led to major discoveries in seconds, successes that eluded brute-force methods for years.
- Advice for Organizations: How businesses and innovators can access and test the Pattern Discovery Engine for their own complex data challenges.
Insightful quote from Mark Anderson: "Instead of having a hypothesis and then you run, you want to go again, it's the opposite. You're not allowed to have any hypothesis. You can't bring your bias to the game. And instead of that, you have good data. You run the data and you generate the hypothesis. That's the right way to solve problems."
Links
LinkedIn: https://www.linkedin.com/in/markandersonpredicts/
Website: https://www.patterncomputer.com
Thanks to our sponsors: VKTR, Earley Information Science, AI Powered Enterprise Book
Peter Maurer, Assistant Professor of Molecular Engineering at the University of Chicago Pritzker School of Molecular Engineering, speaks with Pitt's HexAI podcast host, Jordan Gass-Pooré, about the future impact of quantum sensing on biomedical research and diagnostics. Peter's research lab leverages the extreme environmental sensitivity of quantum systems to develop powerful sensors suitable for cutting-edge biological research that are optically addressable and can operate under ambient conditions. He outlines both near-term and future applications of powerful quantum sensors in pathology and laboratory medicine. He provides a key example of how these sensors could enable a new type of nanoscale NMR spectroscopy, capable of measuring magnetic fields from biomolecules to non-invasively probe their chemical information and signaling pathways. In the near future, he points to diagnostic tools, currently being developed by companies, that use the unique optical signatures of quantum sensors for highly sensitive, background-free protein detection in small volumes. For the long term, he envisions the technology as a "field opener" for studying protein aggregation in neurodegenerative diseases like Alzheimer's and Parkinson's. Peter outlines how AI can be applied to analyze complex data from sensors that respond to multiple environmental factors, and highlights the challenge of bringing together experts from quantum technology, biophysics, and medicine who can "talk each other's language." He also touches on how the use of synthetic data in quantum sensing is a "completely under-appreciated" area with the potential to reveal complex environmental properties that would otherwise be missed by looking at single types of measurements.
To advance the field from academic proofs-of-concept to clinical tools, he stresses the need for collaboration with academic and industry partners who can help engineer robust, "turnkey" systems that can be widely tested and used. The University of Pittsburgh Health and Explainable AI podcast is a collaborative initiative between the Health and Explainable AI (HexAI) Research Laboratory in the Department of Health Information Management at the School of Health and Rehabilitation Sciences, and the Computational Pathology and AI Center of Excellence (CPACE) at the University of Pittsburgh School of Medicine. Hosted by Jordan Gass-Pooré, a health and science reporter, this podcast series explores the transformative integration of responsible and explainable artificial intelligence into health informatics, clinical decision-making, and computational medicine. From reshaping diagnostic accuracy to enhancing patient care pathways, we'll highlight how AI is creating new bridges between researchers, clinicians, and healthcare innovators. Led by Ahmad P. Tafti, Hooman Rashidi, and Liron Pantanowitz, the HexAI podcast is committed to democratizing knowledge around ethical, explainable, and clinically relevant AI. Through insightful conversations with domain experts, AI practitioners, and students, the podcast will spotlight the latest breakthroughs, discuss real-world applications, and unpack the challenges and opportunities that lie ahead for responsible AI in healthcare. So whether you're a student, practitioner, researcher, or policymaker, this is your gateway to the future of AI-powered healthcare.
We think of computers as neutral, but what if they're learning our worst habits? From hiring to home loans, AI is making huge decisions about our lives, and it often gets things wrong in ways that are deeply unfair. Our latest feature article dives into the problem of algorithmic bias, using real-world examples to show how flawed data creates a biased machine. But it's not all doom and gloom. We also explore the solutions—from "Explainable AI" to the crucial role of human oversight. If you care about fairness and technology, this is a must-read. #AI #Bias #TechEthics #SocialJustice #FutureTech https://englishpluspodcast.com/algorithmic-bias-unmasking-the-flaws-in-our-digital-mirrors/ To unlock full access to all our episodes, consider becoming a premium subscriber on Apple Podcasts or Patreon. And don't forget to visit englishpluspodcast.com for even more content, including articles, in-depth studies, and our brand-new audio series and courses now available in our Patreon Shop!
In this episode of The AI Report: an AI showdown, as Musk accuses Apple and OpenAI of rigging the future, and he's suing. Artie Intel and Micheline Learning report on all things artificial intelligence. Today, the duo explores breakthrough research in neuro-symbolic systems, the rollout of the EU's AI Act, and how innovation, regulation, and everyday life are colliding in real time. Topics this episode covers include:
What happens when a deep-tech visionary returns from Silicon Valley to build Europe's own answer to generative AI? In this episode, Carsten Puschmann meets Jonas Andrulis, founder of Aleph Alpha, live at 'NEXT is NOW', the 30-year anniversary event of the brand experience agency group LIGANOVA. The on-site conversation is about unfiltered perspectives and a bold vision for AI made in Europe, and why Andrulis is betting not on a quick exit but on sovereignty, transparency, and trust. The two talk about:
AI in home healthcare serves as a decision support system that helps professionals make better choices by analyzing data, surfacing key information, and providing suggestions while keeping final decisions in human hands.
• AI scribes can transcribe caregiver-client conversations and automatically populate required forms
• Digital assistants prepare caregivers for visits by providing quick client summaries and highlighting recent changes
• Predictive models identify patients at risk of adverse events, helping clinicians prioritize care
• AI tools must maintain the same security permissions as existing systems and comply with HIPAA regulations
• Explainable AI models help build clinician trust by showing why specific recommendations were made
• Voice technology represents the exciting future of AI in healthcare, moving from assistive to autonomous capabilities
Episode Resources:
AI Resource Hub: https://alayacare.com/ai-resource-hub/
Podcast: Agentic AI and the future of home-based care: Predictions for 2025 with Adrian Schauer
Podcast: Tackling home care staffing challenges with AI technology
Toolkit: AI in home and community care: Your complete guide
If you liked this episode and want to learn more about all things home-based care, you can explore all our episodes at alayacare.com/homehealth360.
AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. What's stopping large language models from being truly enterprise-ready? In this episode, Vectara CEO and co-founder Amr Awadallah breaks down how his team is solving one of AI's biggest problems: hallucinations. From his early work at Yahoo and Cloudera to building Vectara, Amr shares his mission to make AI accurate, secure, and explainable. He dives deep into why RAG (Retrieval-Augmented Generation) is essential, how Vectara detects hallucinations in real-time, and why trust and transparency are non-negotiable for AI in business. Whether you're a developer, founder, or enterprise leader, this conversation sheds light on the future of safe, reliable, and production-ready AI. Don't miss this if you want to understand how AI will really be used at scale. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
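A minimal sketch of the RAG pattern the episode discusses, under loud assumptions: the toy corpus, the word-overlap "retriever," and every name below are illustrative stand-ins of my own, not Vectara's API. The idea is simply to retrieve relevant passages first, then constrain the model's answer to that retrieved context so unsupported claims can be spotted:

```python
# Toy retrieval-augmented generation: ground the prompt in retrieved text.
# A real system would use a vector index and an LLM call instead of these stand-ins.
DOCS = [
    "Vectara was co-founded by Amr Awadallah.",
    "RAG supplies retrieved passages to the model as context.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Stand-in for vector search: rank documents by word overlap with the query.
    words = set(query.lower().split())
    return sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))[:k]

def build_prompt(query: str) -> str:
    # Instruct the model to answer only from retrieved context, which is what
    # makes hallucinations detectable: any claim not in the context is suspect.
    context = "\n".join(retrieve(query))
    return f"Answer ONLY from the context below.\nContext:\n{context}\nQuestion: {query}"

print(build_prompt("Who co-founded Vectara?"))
```

Because the answer must be supported by the retrieved passages, a separate checker model can compare the generated answer against that same context and flag anything the context does not entail.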
In this episode of the Becker's Healthcare Podcast, Lukas Voss speaks with Gaurav Gupta, SVP of Product Strategy and Performance Management at Med-Metrix, about the rising role of artificial intelligence in revenue cycle management—and why transparency matters more than ever. Gaurav unpacks the challenges of “black box” AI, including staff distrust, compliance risk, and limited oversight, and explains how explainable AI can drive smarter, more trustworthy outcomes. Tune in to learn how healthcare leaders can evaluate AI tools with confidence and future-proof their RCM strategies.This episode is sponsored by Med-Metrix.
In this podcast episode, together with my colleague Thomas Fröhlich, our Head of AI & Automation, I give an update on the latest developments in AI and automation. We talk about cool new features that make our work easier and share our impressions of InsureNXT 2025. Here are 5 highlights from our conversation:
1. PDF formatting & summarization: I report on the new ability not only to create PDFs but also to format them and have long reports summarized automatically, both with Groq AI and with ChatGPT (including graphics!).
2. YouTube transcription with Gemini: Gemini (paid version) now transcribes YouTube videos very quickly. It doesn't always work flawlessly, but it still saves me a lot of time and extra tools. We discuss why it is so fast (YouTube subtitles) and compare it with third-party providers.
3. Custom GPTs (Gems in Gemini): I explain the concept of custom GPTs, where you create your own reusable prompts to automate repetitive tasks. Gemini offers this feature for free as "Gems". We sketch what Gems mean in the context of system prompts and the overall prompt; this explanation will be expanded in a later episode.
4. InsureNXT 2025: We share our impressions of InsureNXT 2025. AI and automation were omnipresent! We discuss the topics presented there: strategies, use cases (customer service, claims management), the challenge of implementing AI within companies, and the hands-on mentality that was often missing from the presentations. We talk about how AI models are ultimately still software that has to be carefully integrated into organizational structures.
5. System prompts: We take a deep dive into system prompts, the instructions that steer an AI model and define its personality, behavior, and capabilities. We analyze why they are so important (explainable AI, traceability, risk management, jailbreak resistance) and what the leak of the Claude system prompt means. I share what can be learned from analyzing the leaked system prompt (e.g., the length of the instructions, how knowledge is handled, etc.). We conclude how important good prompt engineers and product owners will become.
Links in this issue: Jonas Piela's homepage, Jonas Piela's LinkedIn profile, Thomas Fröhlich's LinkedIn profile.
The Liferay Digital Experience Platform: Customers expect digital services for communication and for reporting and settling claims. Liferay's Digital Experience Platform offers out-of-the-box capabilities such as low-code, top-tier security & reliability. Get in touch now.
Notes and links mentioned in this episode can be found on our site: designpractice.pl/076
---
In this episode we talk about:
→ designing for AI
→ the intersection of UX and data science
→ life and work in Switzerland
---
Our guest is Ania Maria Szlachta. Ania is a product designer who specializes in designing AI-related products, systems, and innovations. She also conducts research and is fascinated by Human AI Interaction. She lives and works in Switzerland, where she runs her own company.
---
Timestamps:
0:00 Start
1:19 What book have you read recently?
1:58 What do you do?
3:25 What do designers come to you with?
4:59 Designing human-AI interaction
9:00 What type of AI do you work with?
11:43 What do UX designers struggle with most when designing for AI?
15:42 Are designers being sought specifically for AI and data science?
16:47 What should we know about AI as designers?
19:00 What is the user model feedback loop?
22:18 What is Explainable AI?
24:26 What does your work process look like?
29:44 What tools do you use and who do you work with?
31:02 Where can you learn about designing for AI?
37:23 Do you work with Poland or abroad?
37:50 How are interfaces built for AI tested?
41:12 Voice interactions with AI
43:33 How do you use AI in your own work?
45:07 Designing for AI: job prospects for designers
48:34 Cultural differences between Poland and Switzerland
51:23 Pros and cons of living in Switzerland
54:48 What skills would you like to focus on developing next?
55:49 Wrap-up
This week, for a change, we have real expertise on the podcast. Artificial intelligence is currently on everyone's lips. Among the general public, however, it is mainly used in creative fields like photography and videography, as a tool to support writing, and as a tailor-made Wikipedia device. But artificial intelligence has found its way into far more areas and today helps, for example, in diagnostics and the early detection of cancer cells and other diseases at the cell-biological level. With his startup, MIRA Vision, Leonid Mill is developing such a method, which in the future will help doctors make diagnoses faster and more precisely.
(00:00:00) - Intro
(00:00:10) - The guest introduces himself
(00:07:23) - AI in cell biology
(00:11:33) - Definition and workings of AI
(00:14:31) - AI for early cancer detection
(00:20:11) - How doctors react to AI
(00:34:33) - High-tech monitoring in healthcare
(00:40:35) - Do we trust AI more than doctors?
(00:50:35) - Ethical handling of data
(00:58:53) - Explainable AI
(01:07:30) - AI and pharmaceuticals
(01:10:07) - Farewell
AI in Medicine: Practical Answers to Ethical Questions
Whether it's loans, jobs, or medical diagnoses: artificial intelligence is already being used to make decisions about them. Yet the judgments of complex AI systems are often barely intelligible to humans in any detail. Can that be changed? Krauter, Ralf; Schroeder, Carina
Linda Ivy-Rosser, Vice President for Forrester, outlines the evolution of business applications and forward-thinking predictions of their future.
Topics Include:
- Linda Ivy-Rosser has extensive business applications experience since the 1990s.
- Business applications historically seen as rigid and lethargic.
- 1990s: On-premise software with limited scale and flexibility.
- 2000s: SaaS emergence with Salesforce, AWS, and Azure.
- 2010s: Mobile-first applications focused on accessibility.
- Present: AI-driven applications characterize the "AI economy."
- Purpose of applications evolved from basic to complex capabilities.
- User expectations grew from friendly interfaces to intelligent systems.
- Four agreements: AI-infused, composable, cloud-native, ecosystem-driven.
- AI-infused: 69% consider essential/important in vendor selection.
- Composability expected to grow in importance with API architectures.
- Cloud-native: 79% view as foundation for digital transformation.
- Ecosystem-driven: 68% recognize importance of strategic alliances.
- Challenges: integration, interoperability, data accessibility, user adoption.
- 43% prioritizing cross-functional workflow and data accessibility capabilities.
- Tech convergence recycles as horizontal strategy for software companies.
- Data contextualization crucial for employee adoption of intelligent applications.
- Explainable AI necessary to build trust in recommendations.
- Case study: 83% of operators rejected AI recommendations without explanations.
- Tulip example demonstrated three of four agreements successfully.
- Software giants using strategic alliances as competitive advantage.
- AWS offers comprehensive AI infrastructure, platforms, models, and services.
- Salesforce created ecosystem both within and outside their platform.
- SaaS marketplaces bridge AI model providers and businesses.
- Innovation requires partnerships between software vendors and ISVs.
- Enterprises forming cohorts with startups to solve business challenges.
- Software supply chain transparency increasingly important.
- Government sector slower to adopt cloud and AI technologies.
- Change resistance remains significant challenge for adoption.
- 69% prioritize improving innovation capability over next year.
Participants:
Linda Ivy-Rosser - Vice President, Enterprise Software, IT services and Digital Transformation Executive Portfolio, Forrester
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
2023 was the year of the Gen AI boom and multimodal AI. 2024 was the year of overinflated valuations but also the start of AI agents. What trends are you going to see in the year 2025 that you need to be ready for? From hyper-personalization to explainable AI to AI-driven decision-making, 2025 is set to be a defining year for careers and businesses. In this episode, we break down 7 key AI trends that will shape the way we work, invest, and interact with technology. Listen to find out. Looking to become an influential and effective security leader? Don't know where to start or how to go about it? Follow Monica Verma (LinkedIn) and Monica Talks Cyber (YouTube) for more content on cybersecurity, technology, leadership and innovation, and 10x your career. Subscribe to The Monica Talks Cyber newsletter at https://www.monicatalkscyber.com.
In the new episode of our JUG İstanbul Dijital Yolculuklar podcast series, our moderator Özlem Güncan welcomes Senior Subject Matter Expert Fatih Bildirici. In this episode, we take a deep dive into artificial intelligence and MLOps processes, touching on many interesting topics, from the fundamentals of AI to the concept of explainable AI. Keep listening to learn where AI fits into our daily lives and what awaits us in the future. Enjoy the episode!

Guest: Fatih Bildirici (Senior Subject Matter Expert @Aselsan)

Episode topics:
- What is artificial intelligence and how does it work? How can we understand the decisions AI makes? What does explainable AI mean?
- What is MLOps and why is it important in AI projects? How is an AI model developed and deployed?
- Where do we encounter AI in our daily lives?
- What are the main challenges faced in AI systems?
- What can we expect from AI and MLOps processes in the future?
Why is generative AI essential now? I hosted Kevin McGrath, Co-Founder & CEO of Meibel, on The Ravit Show at The AI Summit New York to discuss generative AI for production. Kevin shared how Meibel's explainable AI platform is empowering product and engineering leaders to build and deploy generative AI solutions with confidence. From accelerating innovation to measuring ROI and ensuring AI accountability, Meibel's approach is a game-changer for organizations aiming to integrate AI into their products.

During our conversation, we explored:
- The growing importance of generative AI in today's landscape
- How generative AI differs fundamentally from traditional ML/AI approaches
- The value of companies building their own AI solutions to stay competitive
- The typical journey customers experience when implementing generative AI
- Strategies to address challenges like expertise gaps and risk mitigation

It was an insightful discussion that highlighted the transformative potential of generative AI and practical strategies for making it work in real-world production environments. #data #ai #aisummitnewyork #meibel #theravitshow
In this episode of AI, Government, and the Future, host Marc Leh is joined by Peter Swartz, co-founder and chief scientist at Altana, to discuss how AI is transforming global trade and supply chain management. Peter shares insights on Altana's AI-driven approach to providing visibility into complex value chains, highlighting its applications in both public and private sectors. The conversation covers the challenges of AI adoption in government, the importance of public-private partnerships, and the future of AI in international commerce.
From grounded AI models to explainable AI and overcoming data silos, the conversation dives into the nuts and bolts of creating domain-specific AI solutions. Balancing people, processes, and technology, this episode sheds light on the challenges and opportunities in leveraging AI for manufacturing reliability, maintenance, and optimization.
About Ricardo Berrios: Ricardo C. Berrios is a seasoned entrepreneur and senior executive with over 25 years of experience building and leading businesses in technology, healthcare, manufacturing, e-commerce, and retail. As the founding CEO of Adhera Health, he is pioneering digital solutions for families managing pediatric chronic conditions. Ricardo's expertise lies in leveraging technology to improve the patient experience, demonstrated by his leadership in developing Adhera's AI-powered digital companion platform. He has a proven track record in operational management, including team building, business development, and strategic marketing. Ricardo's global perspective, honed through cross-border initiatives in North America, Latin America, Europe, the Middle East, and Asia Pacific, informs his innovative approach to healthcare.

About Luis Fernandez: Dr. Luis Fernandez is a digital health innovator with over 20 years of experience, driven by a personal commitment to improving healthcare access and equity. As Chief Scientific Officer at Adhera Health, he leads the development of their AI-powered digital companion platform, focusing on supporting families of children with chronic conditions. Luis combines expertise in AI and behavioral science to create inclusive and responsible solutions. His extensive research background, including work in mobile health, wearables, and gamification, informs his approach to personalized healthcare.
Luis's global experience spans various countries, including Spain, Norway, Qatar, and the US, providing him with a unique perspective on the challenges and opportunities in digital health.

Things You'll Learn:
- Healthcare innovation often faces implementation challenges due to slow processes and monolithic systems in countries with strong public healthcare.
- While European healthcare systems provide broad access, disparities exist and often receive less attention than in the US.
- A key difference between the US and Europe regarding healthcare innovation lies in the entrepreneurial ecosystem.
- Data privacy regulations, particularly in Europe, can create challenges for AI development.
- Transparency and user experience are vital in healthcare technology. Explainable AI and streamlined consent processes are crucial for building trust and empowering families.

Resources:
- Connect with and learn more about Ricardo Berrios on LinkedIn.
- Follow and connect with Luis Fernandez on LinkedIn.
- Discover more about Adhera Health on their LinkedIn and visit their website.
Doug Shannon is an esteemed IT automation professional with over 20 years of experience in advanced technology roles. In this episode, KJ and Doug explore the evolving culture in businesses, the impact of multi-generational workforces, and the critical need for integrating new technologies effectively while maintaining a human touch. Doug also shares practical advice on fostering collaboration and planning for organizational changes in a rapidly evolving AI landscape.

Key Takeaways:
08:29 The Changing Culture in Business
15:02 Planning for Attrition and Automation
21:59 Generational Differences in Data Sharing
23:12 The Future of AI and Human Integration
27:11 The Concept of Explainable AI
29:55 The Importance of Being a Jack of All Trades

Quote of the Show (27:00): "You should be enabling, empowering, and emboldening your employees because they're your first customers." – Doug Shannon

Join our Anti-PR newsletter where we're keeping a watchful and clever eye on PR trends, PR fails, and interesting news in tech so you don't have to. You're welcome. Want PR that actually matters? Get 30 minutes of expert advice in a fast-paced, zero-nonsense session from Karla Jo Helms, a veteran Crisis PR and Anti-PR Strategist who knows how to tell your story in the best possible light and get the exposure you need to disrupt your industry. Click here to book your call: https://info.jotopr.com/free-anti-pr-eval

Ways to connect with Doug Shannon:
LinkedIn: https://www.linkedin.com/in/doug-shannon/
Company Website: https://www.theiathinktank.com/
Company LinkedIn: https://www.linkedin.com/company/solutionsreview-com/

How to get more Disruption/Interruption:
Amazon Music - https://music.amazon.com/podcasts/eccda84d-4d5b-4c52-ba54-7fd8af3cbe87/disruption-interruption
Apple Podcast - https://podcasts.apple.com/us/podcast/disruption-interruption/id1581985755
Spotify - https://open.spotify.com/show/6yGSwcSp8J354awJkCmJlD

See omnystudio.com/listener for privacy information.
Ever wonder what the difference is between explainable AI and understandable AI? In this episode, we break it down so you can sound sharp at your next meeting. Host Courtney Baker is joined by Knownwell CEO David DeWolf and Chief Product Officer Mohan Rao to explore why these terms matter and how they impact AI adoption in business. They discuss the importance of explainable AI for technical insights and regulatory compliance, while highlighting understandable AI's role in building trust and enhancing user experience. Our guest, Dom Nicastro, Editor-in-Chief at CMSWire, shares insights on AI's growing influence in customer experience and journalism. From empowering frontline agents to aiding journalists without replacing their expertise, Nicastro reveals how AI serves as a transformative but complementary tool. Plus, don't miss the debut of our new segment, Dragnet, where Pete Buer uncovers how AI helped the U.S. Treasury detect over $1 billion in fraud in 2024. It's a real-world example of AI's potential for good. Watch this episode on YouTube: https://youtu.be/txAJLP3iTvE Want to shape the future of AI in your business? Sign up for Knownwell's early access program and beta waitlist at Knownwell.com.
This episode unpacks how AI isn't just about futuristic robotics but is already reshaping industries as unassuming as brooms and brushes. What makes a self-learning system adaptable in such high-stakes settings? Why does it matter that AI “never stops learning”? Join us as we uncover the unexpected depth behind automation and explore how cutting-edge AI is changing MedTech manufacturing from the ground up.
From explainable AI to organizing massive AI projects, this isn't just about tech; it's about solving real-world challenges. Listen in as they share candid discussions, lessons learned, and visions for the future, all without the buzzword hype—just real insights from people making AI work in industry. Thanks for listening. We welcome suggestions for topics, criticism and a few stars on Apple, Spotify and Co. We thank our partner **SIEMENS** https://www.siemens.de/de/ Our event in January in Frankfurt ([more](https://www.hannovermesse.de/de/rahmenprogramm/special-events/ki-in-der-industrie/))
The basis for the discussion is a paper by Prof. Dr. Marco Huber, published a few weeks ago: "How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law." Our guests are [Marco Huber](https://www.linkedin.com/in/marco-huber-78a1a151/) and [Tom Cadera](https://www.linkedin.com/in/tom-cadera-05948636/) #machinelearning #ai #aimodel #industrialautomation #manufacturing #automation #genai #datascience #mlops #llm #IndustrialAI #artificialintelligence #Safety #NVIDIA #xLSTM #bluecollar #Transformer #HPE #Compute #Hardware #robotics #vision #PLC #Automation #Robotics #IndustrialAIPodcast #Vanderlande #Warehouse #Logistics #XAI #FraunhoferIPA
Quotes

Brian Magerko:
"We're really trying to show that we could co-create experiences with AI technology that augmented our experience rather than served as something to replace us in the creative act."
"For every project like [LuminAI], there's a thousand companies out there just trying to do their best to get our money... That's an uncomfortable place to be in for someone who has worked in AI for decades."
"I had no idea what was going to happen kind of in the future. When we started EarSketch... we were advised by a couple of colleagues to not do it. And here we are, having engaged over a million and a half learners globally."

Charna Parkey:
"I remember the first robot that I built. It was part of the first robotic systems... and watching these machines work with each other was just crazy."
"If you're building a product and your goal is to engage underrepresented groups, it is on you to make sure that you're educating the folks in a way that you're trying to reach."

Episode timestamps:
(01:11) Brian Magerko's Journey into AI and Robotics
(05:00) LuminAI and Human-Machine Collaboration in Dance
(09:00) Challenges of AI Literacy and Public Perception
(17:32) Explainable AI and Accountability
(20:00) The Future of AI and Its Impact on Human Interaction
Speaker Resources:
Ashleigh's YT Channel: https://www.youtube.com/@AshleighFaith

Tools of the Month:
Abk: Any Airline using Windows 3.1
Ever wondered how artificial intelligence could transform cancer diagnosis? Join us on the Follow the Brand Podcast, where we sit down with Dr. Akash Parvatikar, an AI scientist at Histowid. Dr. Parvatikar shares his unique journey from electrical engineering to pioneering explainable AI for early breast cancer detection. We promise you'll gain a deep understanding of how AI can classify medical images and why making these processes transparent is crucial for improving diagnostic accuracy and reducing misdiagnosis rates. This episode is a treasure trove of insights into the future of healthcare and the revolutionary role of advanced technology. In a series of enlightening discussions, Dr. Parvatikar breaks down the integration of AI and digital pathology in personalized medicine. Discover how deep learning and graph-based approaches are identifying subtle clues in medical images, bridging the gap between misdiagnosis and correct diagnosis. We also simplify these complex AI concepts for a young audience, likening AI learning to everyday experiences like recognizing kitchens from photos. Listen in to learn how digitizing tissue biopsy slides is revolutionizing pathological diagnoses, enhancing both the quality and reliability of cancer detection. This episode is a must-listen for tech enthusiasts and healthcare professionals alike looking to understand the transformative power of AI in medicine.

Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest marketing trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates from us, be sure to follow us at 5starbdm.com. See you next time on Follow The Brand!
Showing your work isn't just for math class, it's also for AI! As AI systems become increasingly complex and integrated into our daily lives, the need for transparency and understanding in AI decision-making processes has never been more critical. We are joined by industry expert and Director of Data Science at Western Digital, Srinimisha Morkonda Gnanasekaran, for a discussion of the why, the how, and the importance of explainable AI.

Panelists:
- Srinimisha Morkonda Gnanasekaran, Dir. of Data Science & Advanced Analytics @ Western Digital - LinkedIn
- Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
- SHAP documentation
- Episode 159: Exploring Bias in AI
- Alteryx's Explainable AI White Paper
- Alteryx Machine Learning
- Episode 149: Crafting Your Message with Data Storytelling

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!

This episode was produced by Megan Dibble, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.
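The SHAP documentation linked in the show notes builds on Shapley values. For listeners curious what the idea looks like concretely, here is a minimal from-scratch sketch; the toy model, baseline, and instance are illustrative assumptions, not anything discussed in the episode:

```python
# A from-scratch sketch of Shapley-value feature attribution (the idea behind SHAP).
# We explain one prediction of a toy model by averaging each feature's marginal
# contribution over every order in which features can be "revealed".
from itertools import permutations

def model(x0, x1, x2):
    # Toy model: feature 0 matters most, feature 2 not at all.
    return 3 * x0 + 1 * x1 + 0 * x2

baseline = (0, 0, 0)   # reference input ("feature missing" means its baseline value)
instance = (1, 1, 1)   # the prediction we want to explain

def value(subset):
    # Model output when only the features in `subset` take their real values.
    x = [instance[i] if i in subset else baseline[i] for i in range(3)]
    return model(*x)

def shapley_values(n=3):
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        revealed = set()
        for i in order:
            before = value(revealed)
            revealed.add(i)
            phi[i] += value(revealed) - before   # marginal contribution of feature i
    return [p / len(perms) for p in phi]

print(shapley_values())  # → [3.0, 1.0, 0.0]
```

Note the attributions sum to `model(*instance) - model(*baseline)`, a defining property of Shapley values; real SHAP libraries approximate this computation efficiently instead of enumerating all orderings.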
In this episode of the NVIDIA AI Podcast, recorded live at the GTC 2024, host Noah Kravitz sits down with Adam Wenchel, co-founder and CEO of Arthur. Arthur enhances the performance of AI systems across various metrics like accuracy, explainability, and fairness. Wenchel shares insights into the challenges and opportunities of deploying generative AI. The discussion spans a range of topics, including AI bias, the observability of AI systems, and the practical implications of AI in business. For more on Arthur, visit arthur.ai.
Ryan Staley sits down with Sankar Sundaresan, CEO and founder of Sky Genie, an AI-native company revolutionizing pipeline management for tech companies. Sankar shares invaluable insights on common mistakes CROs make, the importance of explainable AI in predicting pipeline yield, and how to align demand generation efforts with recent market traction. Don't miss this information-packed episode that could transform your approach to revenue growth! Join 2,500+ readers getting weekly practical guidance to scale themselves and their companies using Artificial Intelligence and Revenue Cheat Codes. Explore becoming Superhuman here: https://superhumanrevenue.beehiiv.com/

KEY TAKEAWAYS
- Sky Genie is a growth acceleration platform helping revenue teams drive efficient and predictable growth by reverse-engineering pipeline needs and providing a GPS-like system for planning and course correction.
- The biggest mistake CROs make is relying on rules of thumb for pipeline requirements without considering past execution data and the importance of having enough capacity to create and convert pipeline within the quarter.
- Explainable AI mimics the judgement process of experienced CROs, capturing their expertise in ML models trained against past performance data to provide accurate pipeline yield predictions.
- Aligning demand generation efforts with recent positive market traction in specific vertical segments, products, and geographies can significantly improve win rates and revenue growth.
- Continuously tweaking the pipeline generation machine based on recent win-loss data is crucial for capitalizing on tailwinds and addressing competitive challenges.
- A two-step approach to pipeline generation, first identifying the most promising segment/vertical/geo/product combinations and then determining the best channels for those combinations, leads to more efficient and targeted efforts.
AI is best applied to high-level, critical questions like predicting pipeline yield and identifying opportunities at risk of slipping based on conversational intelligence data. Sky Genie focuses on giving CROs the tools to develop high-confidence, long-term pipeline plans that set them up for success, rather than just chasing current-quarter deals.

BEST MOMENTS
"The single biggest mistake is just relying on rules of thumb. Like I need three X pipeline. I need four X pipeline because you'll be amazed at, you know, how many jobs we've seen dropping to the ground when we actually show them."
"I think there's a huge opportunity to align where we're building pipeline with where sales teams are actually able to win as evidenced, not by people doing research on 6sense and things like that."
"Instead of making it a one-step answer, it's really a two-step answer. You solve for it one step at a time, and therefore you get to a better, more efficient way of generating the right kind of pipe, using the right channels for exactly the type of pipe you need to generate."
"If all you're doing is just chasing current quarter deals, I think you're just setting yourself up for failure, right?"

Ryan Staley, Founder and CEO, Whale Boss
ryan@whalesellingsystem.com
www.ryanstaley.io

Saas, Saas growth, Scale, Business Growth, B2b Saas, Saas Sales, Enterprise Saas, Business growth strategy, founder, ceo: https://www.whalesellingsystem.com/closingsecrets
Researchers now employ artificial intelligence (AI) models based on deep learning to make functional predictions about big datasets. While the concepts behind these networks are well established, their inner workings are often invisible to the user. The emerging area of explainable AI (xAI) provides model interpretation techniques that empower life science researchers to uncover the underlying basis on which AI models make such predictions. In this month's episode, Deanna MacNeil from The Scientist spoke with Jim Collins from the Massachusetts Institute of Technology to learn how researchers are using explainable AI and artificial neural networks to gain mechanistic insights for large-scale antibiotic discovery. More on this topic: Artificial Neural Networks: Learning by Doing. The Scientist Speaks is a podcast produced by The Scientist's Creative Services Team. Our podcast is by scientists and for scientists. Once a month, we bring you the stories behind news-worthy molecular biology research. This month's episode is sponsored by LabVantage, serving disease researchers with AI-driven scientific data management solutions that increase discovery and speed time-to-market. Learn more at LabVantage.com/analytics.
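The episode describes xAI techniques for probing what a trained network relies on. One of the simplest model-agnostic probes is permutation importance: shuffle one input at a time and see how much the model's error grows. A minimal sketch in pure Python follows; the "black box" model and data are illustrative assumptions, not the models from the episode:

```python
# Permutation importance: a simple, model-agnostic explainability probe.
# Shuffling a feature the model truly relies on degrades its predictions;
# shuffling an irrelevant feature changes nothing.
import random

random.seed(0)

def black_box(x):
    # Illustrative "black box": we only query it, never look inside.
    return 4 * x[0] + 0 * x[1]   # depends on feature 0 only

data = [[random.random(), random.random()] for _ in range(500)]
targets = [black_box(x) for x in data]

def mse(model, rows, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(rows, ys)) / len(rows)

def permutation_importance(model, rows, ys, feature):
    # Importance = error with the feature's column shuffled, minus baseline error.
    col = [x[feature] for x in rows]
    random.shuffle(col)
    perturbed = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(rows, col)]
    return mse(model, perturbed, ys) - mse(model, rows, ys)

for f in range(2):
    print(f"feature {f}: importance {permutation_importance(black_box, data, targets, f):.4f}")
```

Feature 0 shows a large positive importance while feature 1 scores zero, matching how the toy model was built; on a real network the same probe surfaces which inputs drive its predictions.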
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
The Explainable AI Layer of the Cognilytica Trustworthy AI Framework addresses the technical methods that go into understanding system behavior and making black boxes less opaque. In this episode of the AI Today podcast, Cognilytica AI experts Ron Schmelzer and Kathleen Walch discuss the interpretable and explainable AI layer. Separate from the notion of transparency of AI systems is the concept of AI algorithms being able to explain how they arrived at particular decisions.
Dive into the intricate world of trustworthy AI in this enlightening episode. Discover the multifaceted nature of trustworthiness, from accuracy and reliability to fairness and transparency. Explore the methodologies, technologies, and industry practices shaping trustworthy AI systems. Learn from real-world case studies and envision the promising future of AI that's not just intelligent but also trustworthy. Join us as we unravel the importance of trust in AI for its broader acceptance and effectiveness.

Resources used in this episode:
- In AI We Trust: Ethics, Artificial Intelligence, and Reliability [Link]
- The relationship between trust in AI and trustworthy machine learning technologies [Link]
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [Link]
- Trustworthy Artificial Intelligence: A Review [Link]
- Blockchain for explainable and trustworthy artificial intelligence [Link]
- Trustworthy AI in the Age of Pervasive Computing and Big Data [Link]
- From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems [Link]
- Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems [Link]
- Trustworthy AI: From Principles to Practices [Link]

Support the show. Keep AI insights flowing – become a supporter of the show! Click the link for details.
Many AIs are 'black box' in nature, meaning that part or all of the underlying structure is obfuscated, either intentionally to protect proprietary information, due to the sheer complexity of the model, or both. This can be problematic in situations where people are harmed by decisions made by AI but left without recourse to challenge them. Many researchers in search of solutions have coalesced around a concept called Explainable AI, but this too has its issues, notably that there is no real consensus on what it is or how it should be achieved. So how do we deal with these black boxes? In this podcast, we try to find out. Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis, free in your inbox every weekday. Hosted on Acast. See acast.com/privacy for more information.
How can we create better AI that's centered around users? What influence will AI have on products and their users? Svetlana Makarova, AI Group Product Manager at Mayo Clinic, joins us to discuss how AI will reshape product strategy and management.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Svetlana and Jordan questions about AI product strategy
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Timestamps:
[00:01:45] About Svetlana and AI product management at Mayo Clinic
[00:07:00] User-centric AI
[00:11:00] Should we incorporate AI into everything?
[00:16:00] How to implement AI in product strategy
[00:21:00] Importance of explainable AI
[00:24:00] Creating user-centric AI
[00:29:05] Svetlana's final takeaway

Topics Covered in This Episode:
1. Importance of User-Centric AI
2. Decision-Making Process for Implementing AI
3. Product Development Methodology
4. Importance of Explainable AI in Building Trust

Keywords: AI integration, user-centric AI, seamless integration, Google, Amazon, generative AI, decision-making process, return on investment, user feedback, automation, work shares, synthetic data, user workflows, solution approaches, enterprise scaling, data platform, flexible infrastructure, explainable AI, Mayo Clinic, AI product management, product strategies, market introduction, buzzword, challenges for enterprises, user needs, AI solutions, practical advice, career, business.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Christian Martinez, Finance Analytics Manager at Kraft Heinz, is an in-demand conference speaker and specialist who teaches a top-rated course with (previous guest) Nicolas Boucher. In this episode, discover the secrets of game-changing uses of AI and Python for your FP&A career. "We're in the same era as when Excel was first invented," says Martinez. "There were people still using calculators and pens while others shifted to Excel and dramatically improved performance."

In this episode:
- Why Python and Excel, together at long last, are a "game changer" for FP&A
- "Explainable" AI in FP&A
- How AI is improving overall budgets and forecasting
- Can non-data-science people jump right in with AI?
- Getting comfortable being uncomfortable
- His path to a "boutique" course teaching practical application of AI in FP&A
- Why the best way to fully learn something is to teach it
- The most awesome uses of AI in FP&A
- How Gen AI is going to change FP&A in 2024
- Waterfall charts in Excel

Follow Christian Martinez (LinkedIn): https://www.linkedin.com/in/christianmartinezthefinancialfox/

Links:
- FREE COURSE – PYTHON FOR FP&A AND FINANCE. Curated by Christian Martinez
- Advanced ChatGPT for Finance course by Christian Martinez and Nicolas Boucher
In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Black Box, Explainable AI (XAI), and Interpretable AI, explaining how these terms relate to AI and why it's important to know about them.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for the computational biology and clinical medicine fields. Su-In discussed the importance of explainable AI in feature attribution, the robustness of different explainability approaches, and the need for interdisciplinary collaboration between the computer science, biology, and medical fields. We also explore her recent paper on the use of drug combination therapy, challenges with handling biomedical data, and how her group aims to make meaningful contributions to the healthcare industry by aiding in cause identification and treatments for cancer and Alzheimer's disease. The complete show notes for this episode can be found at twimlai.com/go/642.
This conversation is about the importance of standards in technology and how the Object Management Group (OMG) develops and maintains standards. Mike discusses the challenges of keeping up with the pace of technological change and how OMG's standards process helps to ensure that standards are stable and well thought out. Mike also discusses the potential for standards to help with the development of new technologies, such as artificial intelligence (AI) and blockchain, arguing that standards can help ensure these technologies are used in a responsible and ethical way.

Here are some key points from the conversation:
- Standards are essential for interoperability and for ensuring that different technologies can work together.
- The OMG's standards process is designed to be stable and well thought out, while still allowing for incremental change.
- Standards can help to ensure that new technologies are used in a responsible and ethical way.

Overall, the conversation highlights the importance of standards in technology and the role that OMG plays in developing and maintaining them.

Some specific examples of standards discussed in the conversation:
- XBRL (eXtensible Business Reporting Language) is a standard for exchanging financial information.
- Ontologies are formal definitions of the meanings of concepts.
- Explainable AI is a type of AI that can explain its own decisions.

The conversation also touches on the following topics:
- The speed of technological change
- The challenges of developing standards for new technologies
- The potential for standards to help with the development of new technologies
- The ethical implications of new technologies

Finally, get your exclusive free access to the Industrial Academy and a series on "Why You Need To Podcast" for Greater Success in 2023. All links designed for keeping you current in this rapidly changing Industrial Market. Learn! Grow! Enjoy!
MIKE BENNETT'S CONTACT INFORMATION: Personal LinkedIn: https://www.linkedin.com/in/mikehypercube/ Company LinkedIn: https://www.linkedin.com/company/omg/ Company Website: https://www.omg.org/ PODCAST VIDEO: https://youtu.be/RDeSKJ0FoiY THE STRATEGIC REASON "WHY YOU NEED TO PODCAST": OTHER GREAT INDUSTRIAL RESOURCES: NEOM: https://www.neom.com/en-us Hexagon: https://hexagon.com/ Arduino: https://www.arduino.cc/ Fictiv: https://www.fictiv.com/ Hitachi Vantara: https://www.hitachivantara.com/en-us/home.html Industrial Marketing Solutions: https://industrialtalk.com/industrial-marketing/ Industrial...
Artificial intelligence (AI) is all the rage now, and even small businesses need to embrace it to remain competitive and grow in today's digital age. AI can help increase productivity, streamline processes, and facilitate more accurate data-based decisions. Welcome Jeremy Bormann, a young entrepreneur and founder of Legal Pythia who's passionate about leveraging technology to improve the legal field. He has an impressive academic law background from both Germany and Scotland. Though Jeremy never practiced law, his internships at top law firms revealed inefficiencies in document management. Out of this frustration, Legal Pythia was born. Jeremy's unique blend of legal knowledge and tech savvy has enabled him to co-create an Explainable AI (XAI) solution that's transforming document management not only for small law firms, but also for insurance companies and other businesses that can spend excessive time on research.

In this episode, you will learn how to:
- Expedite legal document management
- Leverage XAI to build transparency and customer trust
- Uncover the potential of AI-driven strategies for business expansion and scalability
- Eliminate bias when researching or comparing data using XAI
- Keep data secure and encrypted in legal document management
- and more!

Discover how Legal Pythia's XAI holds the potential to expand and scale your company's operations in ways you never imagined!